Dynamics of Magnetic Fluids in Crossed DC and AC Magnetic Fields

In this study, we derived the equations describing the dynamics of a magnetic fluid in crossed magnetic fields (a DC bias field and an alternating probe field), taking into account the field dependence of the relaxation times, the interparticle interactions, and the demagnetizing field. For a monodisperse fluid, the dependence of the output signal on the bias field and the probe field frequency was constructed. Experimental studies were conducted in a frequency range up to 80 kHz for two samples of fluids based on magnetite nanoparticles and kerosene. The first sample had a narrow particle size distribution, low-energy magnetodipole interactions, and weak dispersion of the dynamic susceptibility. The second sample had a broad particle size distribution, high-energy magnetodipole interactions, and strong dispersion of the dynamic susceptibility. In the first case, the bias field led to the appearance of short chains; in the second case, we found quasi-spherical clusters with a characteristic size of 100 nm. The strong dependence of the output signal on the particle size allowed us to use the crossed-field method to independently estimate the maximum diameter of the magnetic core of the particles.

Introduction

The behavior of magnetic fluids in crossed magnetic fields (a direct current (DC) bias field H0 and an alternating field h = a cos ωt) features a number of peculiarities, which provide valuable information on the internal structure of colloidal solutions, including the characteristic sizes of particles and clusters. The source of this information is the dependence of the electromotive force (emf) E(H0) induced in the measuring coil at double frequency (the output signal) on the bias field strength H0. The specific feature of the experiment is that the output signal changes nonmonotonically with increasing H0. Similar studies were conducted previously [1-3] in the low-frequency region corresponding to the quasistatic limit ωτ << 1, where τ is the magnetization relaxation time. In this paper, we derive equations that are valid at any value of ωτ, provided that the frequency of the probe field ω remains small compared with the ferromagnetic resonance frequency of ~10^10 Hz [4,5]. These equations are used to analyze the experimental data over a wide frequency range. We focus on the polydispersity of particles and on the interparticle interactions causing particle aggregation. The influence of the magnetic dipole-dipole interparticle interactions is most dramatic in weak and moderate fields. Depending on the particle concentration, the interparticle interactions may be responsible for a two- to four-fold growth in the initial magnetic susceptibility and an increase in the nonlinearity of the magnetization curve [6,7]. The polydispersity of particles (i.e., a broad particle-size distribution) affects practically all physical properties of magnetic fluids, and taking it into account can result in a qualitatively new interpretation of experimental data. In this work, we consider the crossed-field method as a tool for obtaining information about the biggest particles contained in a ferrofluid. The coarse fraction leads to the formation of nanoscale (tens of nanometers [8-10]) and drop-like (microns and tens of microns [11-14]) aggregates, and to fluid separation into weakly and strongly concentrated phases.
Methods and Materials

We considered a sample of magnetic fluid in the form of a long cylinder whose axis was aligned with the applied bias field of intensity H0 (Figure 1A,B). The weak alternating field h0(t) = a0 cos ωt was directed normally to the cylinder axis. The measuring coil enclosed the middle part of the sample, and its axis coincided with that of the sample. Simultaneous application of the bias and probe fields caused the vector of the total field H to oscillate in the vertical plane, and the magnetization M executed the same oscillations. Although the field projection on the z-axis was constant, the corresponding magnetization projection Mz oscillated in time at double the frequency because of the nonlinear dependence M(H). These oscillations of Mz induced the output signal. According to the Faraday law, the emf in the measuring coil is

E = −µ0 N S dMz/dt, (1)

where µ0 = 4π × 10^−7 H/m, S is the area of the sample cross-section, and N is the number of turns in the measuring coil. Pshenichnikov et al. [3] showed that, for a weak probe field at low frequency (ωτ << 1), the output signal is described by Equation (2), where M0 = M(H0). Equation (2) is applicable to any magnetic fluid, including concentrated polydisperse solutions with strong interparticle interactions.

Figure 1. Measuring cell in crossed magnetic fields. The black cylinder is the test tube containing magnetic fluid; the light cylinder is the measuring coil. (A) The low-frequency limit; (B) arbitrary frequencies.

The case of diluted solutions is of interest for the analysis because it demonstrates the influence of particle polydispersity on the magnitude of the signal.
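As a purely illustrative sketch of the measurement principle (not the paper's Equation (2)), the following Python fragment drives a Langevin magnetization with crossed DC and AC fields and verifies that the induced emf is dominated by the double-frequency component. All sample and coil parameters are assumed, order-of-magnitude values, and the demagnetizing correction between applied and internal probe fields is ignored here.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability, H/m
KT = 1.38e-23 * 293.0     # thermal energy at room temperature, J
m = 2.1e-19               # particle magnetic moment, A*m^2 (Table 1 scale)
n = 1e22                  # particle number density, m^-3 (assumed)
H0 = 8e3                  # bias field, A/m (assumed)
a = 1e3                   # probe field amplitude, A/m (assumed)
f = 1e3                   # probe field frequency, Hz
N_TURNS, S = 1000, 5e-5   # coil turns and cross-section (assumed)

def langevin(xi):
    # L(xi) = coth(xi) - 1/xi; xi > 0 throughout because H0 > 0
    return 1.0 / np.tanh(xi) - 1.0 / xi

t = np.linspace(0.0, 4.0 / f, 4096)
h = a * np.cos(2.0 * np.pi * f * t)     # transverse probe field
H = np.hypot(H0, h)                     # magnitude of the total field
xi = MU0 * m * H / KT
Mz = n * m * langevin(xi) * (H0 / H)    # longitudinal magnetization

# Faraday law (Equation (1)): the emf tracks -dMz/dt
emf = -MU0 * N_TURNS * S * np.gradient(Mz, t)

# Mz depends on h only through h^2 = a^2 (1 + cos 2wt)/2, so the
# spectrum of the emf is dominated by the double-frequency line 2f.
spec = np.abs(np.fft.rfft(emf))
freq = np.fft.rfftfreq(t.size, t[1] - t[0])
print("dominant spectral line: %.0f Hz" % freq[1:][spec[1:].argmax()])
```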
The interparticle interactions and demagnetization fields can then be neglected, and the magnetization curve is described by a superposition of Langevin functions (Equation (3)), where x and m(x) are the diameter of the magnetic core and the magnetic moment of the colloidal particle, respectively, n is the particle number density, F(x) is the particle size distribution, ξ is the Langevin parameter, and L(ξ) is the Langevin function. Expanding the Langevin function in terms of its argument at ξ << 1, we obtain Equation (4); the angle brackets in its right-hand part denote averaging over the ensemble of particles with the distribution function F(x).

The point deserving particular attention is the strong dependence of the signal on the particle dimension. Since the shape of a single-domain particle is close to spherical, its magnetic moment is proportional to the cubed diameter, m = π Ms x^3/6, where Ms is the saturation magnetization of the magnetic core. The contribution of separate fractions to the output signal is therefore proportional to the fourth power of the magnetic moment, or the twelfth power of the diameter. The form of the function F(x) thus strongly affects the output signal, and the coarse fractions are the main contributors. The saturation magnetization of the magnetic fluid is determined by the average magnetic moment of the particles, whereas the susceptibility in the Langevin approximation is determined by the mean square of the magnetic moment (Equation (5)).

The experimental setup is schematically shown in Figure 2. Magnetic fluid was poured into a 160 mm long test tube with an internal diameter of 8 mm. The bias field H0 varied from 0 to 25 kA/m. The amplitude of the probe alternating current (AC) field varied depending on the experimental conditions but in all cases did not exceed 2 kA/m. We studied two samples of magnetic fluids (samples No. 1 and 2) of the magnetite-kerosene-oleic acid type, obtained by diluting the base fluids FM1 and FM2, respectively, with kerosene. The base fluid FM1 was synthesized by the standard chemical precipitation method [15] at the Institute of Continuous Media Mechanics UB RAS (Perm, Russia); FM2 was prepared at the Ivanovo State Power University (Ivanovo, Russia). The fluids differed mainly in their particle size distributions. The desired particle size distribution was obtained by varying the synthesis conditions (concentrations of iron and ammonia salts, pH of the solutions, temperature, solution feed rate, and mixing intensity) [16]. Free oleic acid was removed by replacing the dispersion medium [15].

The volume fraction of crystalline magnetite in the solution was calculated from the magnetic fluid density ρf under the assumption that the density of the protective shells differs insignificantly from the density of kerosene, ρk = 0.78 g/cm^3 (Equation (6)), where ρmag = 5.24 g/cm^3 is the bulk magnetite density. The use of a more accurate formula for ϕs was unjustified because of the lack of reliable information on the effective density of the protective shell.

Magnetization curves were determined by the sweep method, in which the differential magnetic susceptibility χ(H) = dM/dH of the fluid was measured directly and the magnetization curve was found by numerical integration [17].
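A minimal sketch of the two procedures just described. The volume-fraction formula is my reconstruction of Equation (6) under the stated density argument, ϕs = (ρf − ρk)/(ρmag − ρk); the inputs in the example are made up, and the χ(H) curve is illustrative, not measured data.

```python
import numpy as np

RHO_K = 0.78    # kerosene density, g/cm^3 (from the text)
RHO_MAG = 5.24  # bulk magnetite density, g/cm^3 (from the text)

def magnetite_volume_fraction(rho_f):
    """Volume fraction of crystalline magnetite from the fluid density rho_f,
    assuming the protective shells have the same density as kerosene."""
    return (rho_f - RHO_K) / (RHO_MAG - RHO_K)

def magnetization_by_sweep(H, chi):
    """Sweep method: recover M(H) by numerically integrating the directly
    measured differential susceptibility chi(H) = dM/dH, with M(0) = 0."""
    H, chi = np.asarray(H, float), np.asarray(chi, float)
    dM = 0.5 * (chi[1:] + chi[:-1]) * np.diff(H)   # trapezoidal rule
    return np.concatenate(([0.0], np.cumsum(dM)))

# Example with made-up numbers: a fluid of density 1.05 g/cm^3
print("phi_s = %.3f" % magnetite_volume_fraction(1.05))   # -> 0.061
H = np.linspace(0.0, 25e3, 256)        # bias range used in the paper, A/m
chi = 1.0 / (1.0 + (H / 5e3) ** 2)     # illustrative chi(H), not measured data
M = magnetization_by_sweep(H, chi)
```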
A long, cooled solenoid with two galvanically isolated coaxial coils was used as the source of the magnetic field. Direct current (DC) was passed through one of the coils, and a weak AC current with an infra-low frequency of 0.1 Hz was passed through the second coil. This frequency was low enough to exclude relaxation processes. The design of the experimental setup enabled measurement of the amplitudes of the small magnetization oscillations and of the field strength; the ratio of these quantities gave the desired differential susceptibility.

The particle size distribution was determined by magneto-granulometric analysis, described in Pshenichnikov et al. [17]. The analysis yields the particle number density, the average magnetic moment, and its mean square without any assumptions about the particle size distribution. Information about the particle diameters was obtained using the two-parameter Γ-distribution under the assumption that the particle shape is close to spherical. The Γ-distribution has the form of an asymmetric bell and is described by Equation (7), where Γ(α + 1) is the Γ-function, x0 and α are the distribution parameters, and <x^q> is the moment of x of order q. In particular, the average core diameter <x> and the relative distribution width δx are given by Equation (8).

The basic parameters (the temperature at which the magnetization curve was measured, the saturation magnetization M∞, the initial susceptibility χ0, the average magnetic moment <m>, the mean square magnetic moment <m^2>, the average diameter of the magnetic core of particles <x>, the volume fraction ϕs of magnetite, and the relative distribution width δx) are presented in Table 1 for the base magnetic fluids.
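Since the explicit form of Equations (7) and (8) is only referenced above, the sketch below assumes the standard two-parameter form F(x) ∝ x^α exp(−x/x0), which is consistent with the quoted definitions of x0, α, <x>, and δx.

```python
from math import gamma, sqrt

def gamma_moment(q, x0, alpha):
    """Moment <x^q> of the two-parameter Gamma distribution
    F(x) = x**alpha * exp(-x/x0) / (x0**(alpha+1) * Gamma(alpha+1)):
        <x^q> = x0**q * Gamma(alpha + 1 + q) / Gamma(alpha + 1)."""
    return x0 ** q * gamma(alpha + 1 + q) / gamma(alpha + 1)

def mean_and_relative_width(x0, alpha):
    x_mean = x0 * (alpha + 1)          # <x>
    delta_x = 1.0 / sqrt(alpha + 1)    # sqrt(<x^2> - <x>^2) / <x>
    return x_mean, delta_x

# Example (assumed parameters): alpha = 5, x0 = 1.2 nm
x_mean, delta_x = mean_and_relative_width(1.2, 5.0)
print("<x> = %.1f nm, delta_x = %.2f" % (x_mean, delta_x))  # 7.2 nm, 0.41
print("<x^6> =", gamma_moment(6, 1.2, 5.0), "nm^6")
```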
The long tail of the distribution function in Equation (7) implies that the colloidal solution contains a number of large particles with high-energy magnetodipole interactions, which significantly affect the physical properties of magnetic fluids. This effect manifests itself primarily in low magnetic fields and is described with good accuracy by the modified second-order mean-field model [6]. According to this model, the initial equilibrium susceptibility χ of the magnetic fluid can be represented as a series expansion in powers of the Langevin susceptibility χL (Equation (9)). A comparison of the initial susceptibility of the base fluids (Table 1) with the Langevin susceptibility calculated by Equation (9) showed that the magnetodipole interactions led to an almost two-fold increase in the initial susceptibility.

The magnetic properties of the diluted samples used directly in the experiments are shown in Table 2. The volume fraction of the magnetic phase in the samples (defined as the ratio of the saturation magnetization of the sample to the saturation magnetization of magnetite, Ms = 480 kA/m) did not exceed 1.7%, which did not eliminate the marked contribution of the magnetodipole interactions. The increase in susceptibility due to the interactions grew from 7% for sample No. 1 to 27% for sample No. 2, which contained the maximum fraction of coarse particles.

The dynamic susceptibility measurements were performed using the mutual induction bridge (MIB) described previously [18,19]. The complex susceptibility χ̂ = χ′ − iχ″ is related to the output voltages (the differential voltage ∆U and the reference voltage U4) by the simple relation of Equation (10), where S0 and S are the cross-sectional areas of the coil and the sample, respectively. Equation (10) allowed us to calculate the desired susceptibility components from the voltages ∆U and U4 and the phase shift between them. The amplitudes and phases of the two voltages were measured with a dual-channel synchronous amplifier eLockIn 203 (Anfatec Instruments AG, Oelsnitz, Saxony, Germany). For the experimental conditions (a sample in the form of a long cylinder), the demagnetizing factor of the sample was small (κ = 0.0065 ± 0.0005) and was used to compute a correction to the voltage. The maximum error in the measured susceptibility χ′ was ±(0.2 + 2χ′) × 10^−2. The measurement error for χ″ was at the level of 0.01 SI units for diluted magnetic fluids and did not exceed 5% of the static susceptibility value for concentrated solutions. The coupling constant λ in Table 2 is the ratio of the dipole-dipole interaction energy to the thermal energy and is discussed in detail in Section 3.3.

Relaxation Processes

We determined the output signal at an arbitrary frequency of the probe field. In this case, the magnetization of the sample differed from its equilibrium value, the collinearity of the vectors M and H was violated (Figure 1B), and Equation (2) became inapplicable. To determine the components of the magnetization, it was necessary to solve the relaxation equation, which contains the characteristic relaxation times τ of the magnetic moment as parameters. Real magnetic fluids, as a rule, have broad particle-size distributions and, accordingly, wide ranges of magnetization relaxation times. Accounting for the polydispersity of particles in the dynamic problem would make it unnecessarily cumbersome; therefore, in this section we restrict our discussion to monodisperse ferrofluids, in which all particles are identical.
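A sketch of the field-dependent relaxation times referenced below as Equation (12). The explicit form used here is an assumption: the Martsenyuk-Raikher-Shliomis effective-field expressions commonly quoted for dilute ferrofluids, which reduce to τB in a weak field and decrease with growing field, consistent with the description that follows.

```python
import numpy as np

def langevin(xi):
    return 1.0 / np.tanh(xi) - 1.0 / xi        # valid for xi > 0

def relaxation_times(xi, tau_b):
    """Assumed effective-field (Martsenyuk-Raikher-Shliomis) forms:
        tau_par(xi)  = tau_B * d ln L / d ln xi,
        tau_perp(xi) = 2 * tau_B * L(xi) / (xi - L(xi)).
    Both tend to tau_B as xi -> 0 and fall off in strong fields."""
    L = langevin(xi)
    dL = 1.0 / xi ** 2 - 1.0 / np.sinh(xi) ** 2   # dL/dxi
    return tau_b * xi * dL / L, 2.0 * tau_b * L / (xi - L)

xi = np.array([0.1, 1.0, 3.0, 10.0])
tau_par, tau_perp = relaxation_times(xi, tau_b=2e-6)
print(tau_par)    # ~2e-6 s at small xi, markedly smaller at large xi
print(tau_perp)
```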
Since, according to the experimental conditions, the AC field component was small, we used the linearized relaxation equation (Equation (11)) [20,21], where M(H) is the equilibrium magnetization corresponding to the instantaneous value of the field strength in the sample. The relaxation times τ∥ and τ⊥ for the longitudinal and transverse components of the magnetization, respectively, depend on the field strength; in a dilute solution they are described by Equation (12), where τB is the Brownian time of rotational diffusion of particles in zero magnetic field, η is the ferrofluid viscosity, and V is the volume of the particle together with its protective shell. In weak fields (ξ << 1), both relaxation times coincide with the Brownian time τB; they decrease monotonically with growing magnetic field.

Expanding the equilibrium magnetization in Equation (11) in a power series of the weak AC field h and retaining only the lowest-order terms, we obtain Equation (13). Substituting the explicit expression for the internal field, h = a cos ωt, into Equation (13), we find Equation (14). Using Equation (14) and the relationship between the applied and internal probe fields (Equation (15)), we find the amplitude of the probe field inside the sample (Equation (16)). Substituting Equations (15) and (16) into Equation (1) yields the required expression for the output signal, Equation (17), which determines the field and frequency dependences of the output signal. It is valid for an arbitrary magnetic fluid with a rather narrow particle size distribution and a negligible spread of particle relaxation times.

To analyze the experimental data, we introduced the normalized output signal E* by dividing Equation (17) by a coefficient that includes the parameters of the measuring coil, the frequency, and the square of the probe field amplitude. The normalized signal (Equation (18)) thus describes only the effect of the bias field and relaxation processes. We analyzed Equation (18) theoretically, considering the steric and magnetodipole interparticle interactions. For this purpose, we used the modified effective field model [6], which adequately describes the equilibrium magnetization of concentrated ferrofluids. The samples studied in the experiment (Table 2) had relatively low particle concentrations; therefore, when calculating the effective field, we retained only the correction linear in concentration (i.e., proportional to the Langevin susceptibility), Equation (19), where ξ0 = µ0mH0/kT is the Langevin parameter determined by the bias field strength. Substituting Equation (19) into Equation (18) leads to Equation (20).

The field dependence of the output signal calculated by Equation (20) is shown in Figure 3A,B. For comparison with the experimental results, the magnetic moments m of the particles and the Langevin susceptibilities χL of the model fluids were taken to be identical to those of samples No. 1 and 2 in Tables 1 and 2. Since Equations (19) and (20) do not account for the polydispersity of particles, good quantitative agreement between the calculated and experimental data should not be expected in the case of a broad particle size distribution. The main advantage of Equations (18) and (20) is that they account for the dynamic effects, the magnetodipole interparticle interactions, and the demagnetizing field. The last two factors compete with each other: the magnetodipole interactions increase the ferrofluid magnetization, whereas the demagnetizing field decreases it. Their total effect remains appreciable even for moderately concentrated solutions, such as samples No. 1 and 2.
In Figure 3A,B this effect manifests itself in the shift of the maximum of the function E* = f(ξ) toward weak fields. For dilute magnetic fluids, in which the interparticle interactions and demagnetizing fields are negligible, the Langevin parameter corresponding to the maximum of the function E* = f(ξ) is equal to ξ* = 1.93 [2,3]. Equation (2) confirms this value when the fluid magnetization is calculated in the Langevin approximation. However, the curves in Figure 3, constructed with regard for the interparticle interactions, demonstrate markedly lower values of ξ*: ξ* ≈ 1.53 for sample No. 1 and ξ* ≈ 1.65 for sample No. 2.

The shapes of the E* = f(ξ,ω) curves in Figure 3A,B for the two model monodisperse fluids are qualitatively identical. With increasing frequency of the probe field (ωτB ≥ 1), the output signal decreased and the maximum was smeared. In a strong bias field (ξ0 ≥ 10), the relaxation times decreased, as predicted by Equation (12); hence, the dynamic contributions to Equations (18) and (20) decreased by almost two orders of magnitude. The process of magnetization reversal became quasi-static, and the output signal no longer depended on the frequency, at least at ωτB ≤ 6.

Results of Dynamic Susceptibility Experiment

Real ferrofluids have a wide spectrum of relaxation times because of the polydispersity of the single-domain particles, the formation of clusters, and the existence of two independent relaxation mechanisms (Brownian and Néel). The Brownian mechanism of magnetization relaxation is associated with the rotation of particles in a viscous medium, and the characteristic relaxation time is given by Equation (12). The Néel mechanism is associated with the rotation of the magnetic moment of the particle relative to the crystallographic axes [22-24]. For a uniaxial single-domain particle, the most energetically favorable orientations of the magnetic moment are the two opposite directions along the easy axis. These two states are separated by the energy barrier KVm, where K is the magnetic anisotropy constant and Vm is the volume of the particle magnetic core. In a weak field, this barrier can be overcome by thermal fluctuations within the particle itself, which corresponds to the Néel relaxation mechanism. The Néel time τN required to overcome the barrier grows exponentially with decreasing temperature, τN ~ τ0 exp σ, where τ0 ~ 10^−9 s is the damping time of the Larmor precession and σ = KVm/kT is the reduced barrier height (anisotropy parameter).

The magnetization dynamics in a weak applied field are determined by whichever of the two relaxation mechanisms provides the shorter relaxation time. The Brownian and Néel relaxation times depend differently on the particle volume [23]. The condition τN = τB defines the characteristic magnetic core diameter x*, which corresponds to the "switching" of the relaxation mechanism. If x < x*, the Néel relaxation mechanism prevails; if x > x*, the Brownian mechanism is predominant. Generally, x* does not coincide with the limiting size of superparamagnetic particles: the Brownian fraction includes both magnetically hard particles and some superparamagnetic particles with τN > τB. According to estimates [25], for low-viscosity magnetite ferrofluids x* ≈ 16-18 nm, τN = 10^−10-10^−5 s, and τB = 10^−5-10^−3 s.
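A back-of-the-envelope reproduction of the crossover estimate, assuming τN = τ0 exp(KVm/kT) and τB = 3ηV/kT with an assumed magnetite anisotropy constant and a kerosene-like viscosity; only τ0 ≈ 10^−9 s and the ~2 nm shell thickness come from the text.

```python
import numpy as np

KT = 1.38e-23 * 293.0   # thermal energy, J
TAU0 = 1e-9             # Larmor damping time, s (from the text)
K_ANIS = 1.5e4          # magnetite anisotropy constant, J/m^3 (assumed)
ETA = 1.5e-3            # carrier viscosity, Pa*s (assumed, kerosene-like)
SHELL = 2e-9            # surfactant shell thickness, m (from the text)

def tau_neel(x):
    """Neel time: tau_N = tau_0 * exp(sigma), sigma = K*Vm/kT."""
    v_core = np.pi * x ** 3 / 6.0
    return TAU0 * np.exp(K_ANIS * v_core / KT)

def tau_brown(x):
    """Brownian time: tau_B = 3*eta*V/kT, with the shell included in V."""
    d = x + 2.0 * SHELL
    return 3.0 * ETA * (np.pi * d ** 3 / 6.0) / KT

# The crossover x* is where the two times are equal
x = np.linspace(5e-9, 25e-9, 4001)
i = np.argmin(np.abs(np.log(tau_neel(x) / tau_brown(x))))
print("x* ~ %.1f nm" % (x[i] * 1e9))   # ~16-17 nm for these inputs
```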
Thus, at frequencies up to 10^4 Hz the dispersion of the dynamic susceptibility is governed by the particles with the Brownian relaxation mechanism, and at frequencies above 10^5 Hz by the particles with the Néel mechanism of magnetic moment relaxation. The frequency dependence of the initial susceptibility for samples No. 1 and 2, which had the lowest and highest concentrations of large particles, is shown in Figure 4. The χ(ω) curves differ drastically. For sample No. 1, a weak susceptibility dispersion was observed only at frequencies of the order of 10^5 Hz; for sample No. 2, the maximum dispersion was observed at frequencies of 300-400 Hz. Such a difference in the magnetization dynamics is a direct result of the difference in the particle size distributions. Sample No. 1 had a relatively narrow particle size distribution.
The main contribution to the dynamic susceptibility was from superparamagnetic particles with magnetic core diameter x < x* ≈ 16 nm and rather short relaxation times (τN < 10^−5 s). In the examined frequency range, this sample showed quasi-static behavior, and the region of susceptibility dispersion lay beyond the upper boundary of this range. Conversely, sample No. 2 had a broad particle-size distribution (Table 1) with a long tail, so the main contribution to the dynamic susceptibility came from the Brownian particles with magnetic core diameter x > x* and long relaxation times. Notably, the contribution of large particles to the initial susceptibility is disproportionately high; it grows as the squared magnetic moment, or as the sixth power of the diameter. The hydrodynamic diameter d of the particles contributing the most to the susceptibility dispersion could be estimated from the condition 2πf*τB = 1, where f* is the frequency corresponding to the maximum on the χ″(ω) curve. Substituting f* ≈ 330 Hz and the Brownian relaxation time from Equation (12) into this condition yielded d ≈ 100 nm. This hydrodynamic diameter is approximately three to four times greater than the maximum possible diameter of the individual particles, which in magnetite ferrofluids is determined with an electron microscope. The existence of a surfactant protective shell with a characteristic thickness slightly above 2 nm cannot account for such a large difference in size. This led us to conclude that multi-particle aggregates (clusters), rather than single particles, acted as the independent kinetic units generating the dynamic susceptibility spectrum of sample No. 2. This conclusion agrees well with the data we obtained earlier in similar experiments [25] and in experiments on the diffusion and magnetophoresis of particles in magnetic fluids containing coarse particles [8,26]. The same results were reported [9] using the dynamic light scattering method.
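The d ≈ 100 nm estimate can be reproduced in a few lines. Only f* ≈ 330 Hz and the relations 2πf*τB = 1 and τB = 3ηV/kT come from the text; the viscosity is an assumed, kerosene-like value.

```python
import numpy as np

KT = 1.38e-23 * 293.0
ETA = 1.5e-3            # ferrofluid viscosity, Pa*s (assumed)
F_STAR = 330.0          # Hz, maximum of the chi''(omega) curve (from the text)

tau_b = 1.0 / (2.0 * np.pi * F_STAR)   # from the condition 2*pi*f* * tau_B = 1
V = KT * tau_b / (3.0 * ETA)           # invert tau_B = 3*eta*V/kT
d = (6.0 * V / np.pi) ** (1.0 / 3.0)   # hydrodynamic diameter
print("d ~ %.0f nm" % (d * 1e9))       # on the order of 100 nm
```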
Crossed Field Experiment

The results of the crossed-field experiments are shown in Figure 5A,B. The two samples differed in several respects. In particular, with an increase in the frequency of the probe field, the maximum of the function E*(H) for sample No. 1 shifted toward weak fields, whereas for sample No. 2 it shifted toward strong fields. The curves plotted in Figure 2 predict a discrepancy between the samples an order of magnitude weaker.

Since Equation (20) accounts for all significant factors except the polydispersity of particles, we can assume that a broad particle size distribution (most pronounced in sample No. 2) was the main cause of the observed discrepancies. The coarse particle fractions present in magnetic fluids contribute disproportionately to the output signal; therefore, substituting the mean value of the magnetic moment from Table 2 into Equation (21) leads to a systematic error, which increases with the width of the particle size distribution.

Let us consider the problem of the particle size distribution in magnetic fluids in more detail. It deserves special attention, since the size distribution of particles often plays an important role in the comparison of experimental and theoretical results.
Thus, for example, when constructing an equilibrium magnetization curve, the third- and sixth-order moments <x^3> and <x^6> should be used [17]. Moments of the ninth order, <x^9>, may be necessary when processing experimental data on birefringence in magnetic fluids, because in weak magnetic fields the output signal grows in proportion to the ninth power of the diameter [27,28]. Figure 6A,B presents curves illustrating the contributions of separate fractions to the saturation magnetization of the ferrofluid (proportional to <x^3>), the Langevin susceptibility (<x^6>), and the output signal in the crossed-field experiment (<x^12>). The density of the particle size distribution (curve 1) was calculated by Equation (7) for the Γ-distribution. All curves were normalized to unity. The parameters used in the calculations were taken from Tables 1 and 2. The parameters of sample No. 1 can be considered typical of magnetite ferrocolloids, including commercial ferrofluids. Sample No. 2 was specifically chosen to demonstrate the effects associated with polydispersity and had a very broad particle size distribution. In Figure 6, the vertical dashed line denotes the maximum diameter xmax ≈ 20-25 nm of magnetite particles that are still observable in an electron microscope in highly stable magnetic fluids and powders obtained by the chemical precipitation method [9,10,29-32]. In real solutions, particles with magnetic cores of larger diameter are absent, since the concentration of iron salts in the solutions and the duration of the chemical reaction are limited. Strictly speaking, the Γ-distribution in Equation (7), which admits particles of arbitrarily large diameter, contradicts the experimental evidence for xmax. However, Equation (7) is often used to approximate the particle size distribution, since the systematic error associated with the tail of the distribution is insignificant when calculating low-order moments of x, for example <x>. Figure 6 shows that the Γ-distribution correctly describes the size distribution of the particles in both samples; only a negligible fraction of particles had magnetic cores with diameters exceeding xmax.

The evaluation of high-order moments of x poses no serious difficulties when the particle size distribution is rather narrow (sample No. 1, Figure 6A). The situation changes qualitatively when high-order moments are calculated for a broad distribution (δx > 0.4). Figure 6B shows that even for <x^6> the systematic error associated with the long tail of the distribution reached 40% and became unacceptably large.
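A sketch quantifying this tail sensitivity, comparing moments of the full and truncated Γ-distributions. The parameters (x0, α, xmax = 25 nm) are assumed, broad-distribution values in the spirit of sample No. 2, not the fitted parameters of the actual samples.

```python
import numpy as np
from math import gamma

def full_moment(q, x0, alpha):
    # Closed form for the untruncated Gamma distribution
    return x0 ** q * gamma(alpha + 1 + q) / gamma(alpha + 1)

def truncated_moment(q, x0, alpha, x_max, npts=200_000):
    # Quadrature over the distribution truncated at x_max
    x = np.linspace(1e-12, x_max, npts)
    w = x ** alpha * np.exp(-x / x0)

    def trapz(y):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    return trapz(x ** q * w) / trapz(w)

# A broad distribution (delta_x ~ 0.4) with an assumed cutoff of 25 nm
x0, alpha, x_max = 1.9e-9, 5.0, 25e-9
for q in (3, 6, 12):
    tail = 1.0 - truncated_moment(q, x0, alpha, x_max) / full_moment(q, x0, alpha)
    print("q = %2d: tail beyond x_max carries %4.0f%% of <x^q>" % (q, 100 * tail))
```

For these inputs the tail beyond xmax contributes a few percent to <x^3> but roughly 40% to <x^6> and most of <x^12>, mirroring the trend described in the text.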
The results of the calculation of <x^12> (curve 4, Figure 6B) are unreliable because of the uncertainty in the concentration of large particles (with diameters x ≈ 25 nm and above). Notably, replacing the Γ-distribution with the lognormal distribution, which is often used to analyze experimental data [27,28], does not resolve the issue: the lognormal distribution has a longer tail than the Γ-distribution, and the systematic error in calculating high-order moments is even higher.

We used the E*(H) curves calculated for the model monodisperse fluids in Figure 3 and the experimental curves in Figure 5 to estimate the magnetic moments me of the particles in the coarse fraction, which are responsible for the maximum on the E*(H) curve and contribute the most to the normalized signal. To this end, we equated the Langevin parameter ξ* corresponding to the maximum of the signal in Figure 3 to the Langevin parameter determined by the effective magnetic moment me and the bias field H* at the maximum in Figure 5 (Equation (22)), where Ms = 480 kA/m is the saturation magnetization of magnetite. For sample No. 1 we obtained ξ* = 1.53 and me = 6.2 × 10^−19 A·m^2, a value three times higher than the average magnetic moment <m> = 2.08 × 10^−19 A·m^2 in Table 1. The maximum diameter of the magnetic core of the particles was xmax = 13.5 nm, and the hydrodynamic diameter of the particles was d = 18 nm. The corresponding Brownian relaxation time was τB = 3ηV/kT = 2 × 10^−6 s. This implies that dynamic effects in weak fields should be observed at frequencies above 80 kHz, which is substantiated by the experimental data on the dynamic susceptibility in Figure 4A.
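The same estimate in code form, inverting the Langevin-parameter condition of Equation (22) together with m = πMsx^3/6. The bias field H* at the signal maximum is not quoted above, so the 8 kA/m used below is a hypothetical value back-inferred from the quoted me, not a figure from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi
KT = 1.38e-23 * 293.0
MS = 480e3                  # saturation magnetization of magnetite, A/m

def effective_moment(xi_star, h_star):
    """Invert Equation (22), xi* = mu0 * m_e * H* / kT, for m_e."""
    return xi_star * KT / (MU0 * h_star)

def core_diameter(m_e):
    """Invert m = pi * Ms * x^3 / 6 for the magnetic core diameter."""
    return (6.0 * m_e / (np.pi * MS)) ** (1.0 / 3.0)

# Sample No. 1: xi* = 1.53; H* = 8 kA/m is an assumed, back-inferred value
m_e = effective_moment(1.53, 8.0e3)
print("m_e ~ %.2g A*m^2" % m_e)                        # ~6e-19 A*m^2
print("x_max ~ %.1f nm" % (core_diameter(m_e) * 1e9))  # ~13.5 nm
```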
The dispersion of the dynamic susceptibility at frequencies up to 80 kHz is rather weak, since the main contribution to the susceptibility comes from particles with magnetic core diameters close to the average value, for which the quasi-static condition ωτB << 1 is fulfilled. In general, the dynamics of sample No. 1 in a weak bias field (up to 2 kA/m) were consistent with the above statements. In this case, the E*(H) curves had approximately the same slope at all frequencies except the highest, 80 kHz. The signal dispersion was insignificant owing to the absence of large particles in the solution and the short Brownian relaxation time (ωτB << 1). With increasing bias field strength, the relaxation times decrease according to Equation (12), and the signal dispersion should decrease further, as shown in Figure 3A. However, in the experiment we observed the opposite effect: with the growth of the bias field, the signal dispersion increased. For H0 ≥ 5 kA/m, a frequency dependence was already observed at 9 kHz, which implies a three- to four-fold increase in the Brownian relaxation time. In our opinion, this paradoxical behavior of sample No. 1 can only be explained by the formation of short chains in the magnetic fluid caused by the anisotropic dipole-dipole interparticle interactions and the bias field.

As is known [23,33], the probability of chain formation in magnetic fluids is determined by the dipolar coupling constant λ, the ratio of the dipole-dipole interaction energy to the thermal energy. At λ < 1, the effect of aggregates on the properties of magnetic fluids is insignificant, but at λ ≥ 2 the number of aggregates grows exponentially. For a polydisperse fluid, λ can be estimated from the Langevin susceptibility χL and the hydrodynamic concentration ϕ of particles, both determined in independent experiments: λ = χL/(8ϕ) [6,19]. The values of the dipolar coupling constant for the examined samples calculated by this formula are given in Table 2. For sample No. 1, λ = 0.6, and in a weak bias field the influence of aggregates can be neglected. The application of a stronger field, corresponding to the Langevin parameter ξ ≥ 1, stimulated the growth of chains and increased the magnetization relaxation time through the increase in the chain volume and form factor. Thus, the signal dispersion observed in Figure 5A at frequencies above 9 kHz is a consequence of a change in the internal structure of the magnetic fluid.

A different situation was observed for sample No. 2 with its broad particle size distribution (Figure 5B). For this sample, the coupling constant was λ = 2.1, and the majority of the large particles were combined into chains or quasi-spherical clusters already in zero bias field. As in the case of the linear susceptibility, strong signal dispersion was observed already at frequencies of 100-300 Hz, which is one more indication of the presence of aggregates with a characteristic size of the order of 100 nm. According to Rosensweig [15], for typical magnetic fluids stabilized by oleic acid, the height of the energy barrier associated with steric repulsion is close to 20 kT. This barrier ensures a negligible rate of irreversible particle aggregation and a high stability of the magnetic fluid over many years.
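The λ = χL/(8ϕ) estimate in code form. The χL and ϕ inputs below are illustrative values chosen only to reproduce the quoted λ = 0.6 and 2.1; the actual Table 2 entries are not given in the text above.

```python
def coupling_constant(chi_l, phi):
    """Dipolar coupling constant lambda = chi_L / (8 * phi), with chi_L the
    Langevin susceptibility and phi the hydrodynamic volume fraction,
    both taken from independent experiments (as stated in the text)."""
    return chi_l / (8.0 * phi)

# Illustrative inputs only; not the measured Table 2 values
for name, chi_l, phi in (("sample No. 1", 0.12, 0.025),
                         ("sample No. 2", 0.34, 0.020)):
    lam = coupling_constant(chi_l, phi)
    regime = "aggregates negligible" if lam < 1 else "chains/clusters expected"
    print("%s: lambda = %.1f (%s)" % (name, lam, regime))
```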
Experiments devoted to the rheology and diffusion of particles in magnetic fluids [26], the dynamic magnetic susceptibility [19], and dynamic light scattering [9] suggest that, in the presence of large particles, quasi-spherical aggregates with a characteristic size of up to 100 nm have a high probability of occurrence. Our results confirm this inference. Magnetodipole interactions are not the only cause of particle aggregation in ferrofluids: van der Waals attractive forces and defects in the protective shells [26] can play an important or even key role in this process. In this case, the application of the bias field did not change the structure of the colloidal solution, and the nanoscale aggregates with an uncompensated magnetic moment behaved like single particles. This is why the experimental curves in Figure 5B do not differ qualitatively from the model curves for a monodisperse liquid in Figure 2B. The quantitative discrepancies between the families of curves representing the two samples are large and related to the width of the particle size distribution.

From the maximum condition in Equation (22) for sample No. 2 (ξ* = 1.65), we obtained an estimate for the effective moment of the particles contributing the most to the normalized output signal: me = 54 × 10^−19 A·m^2. This value exceeds the average magnetic moment <m> = 2.31 × 10^−19 A·m^2 by a factor of 23. The diameter of the magnetic core of such particles should be close to the maximum possible value xmax. An estimate using Equation (22) gave xmax ≈ 28 nm, which is only slightly higher than the maximum possible value corresponding to the dashed line in Figure 6, but significantly smaller than the diameter xmax ≈ 41 nm obtained using the standard Γ-distribution in Equation (7). This result demonstrates once again the need to replace the Γ-distribution by a distribution without a long tail. The simplest way is to truncate the tail: the distribution in Equation (7) is replaced by Equation (23), in which the normalization constant A differs from unity by less than 1%. The truncated Γ-distribution in Equation (23) was used previously [28] for processing birefringence results, and by Aref'ev et al. [34] for calculating the high-order moments describing the nonlinear susceptibility of magnetic fluids. In both cases, the use of the truncated Γ-distribution significantly reduced the discrepancy between the experimental and calculated results. The results reported by Aref'ev et al. [34] also revealed a correlation between the numerical value of xmax and the Γ-distribution parameters (Equation (24)), which is valid at least for magnetite colloids obtained by the chemical precipitation method. For sample No. 2, the calculation by Equation (24) gave xmax = 29 nm, which practically coincides with the estimate xmax = 28 nm found from Equation (22). Estimates for sample No. 1 were xmax = 13.6 and 13.5 nm by Equations (24) and (22), respectively. Thus, the two methods for evaluating the maximum size of particles in magnetite colloids agree well for both samples, despite being based on different experimental techniques.

Discussion and Conclusions

This paper presented the results of an experimental and analytical investigation into the dynamics of a magnetic fluid in crossed fields.
We examined a situation in which a constant magnetic field H0 directed along the sample and a weak alternating field normal to its axis act on a sample of magnetic fluid in the form of a long cylinder. The axis of the measuring coil was oriented along the sample, and the emf E(H0, ω) was induced at double the frequency. The characteristic feature of the experiments was the strong dependence of the output signal (emf) on the particle size distribution in a weak bias field and the pronounced maximum in the region of the Langevin parameter ξ ≈ 1.5-1.9. Our attention was focused on the dependence of the output signal on the probe field frequency and on the use of crossed-field experiments for obtaining information about the largest particles in magnetic fluids.

We derived equations describing the dynamics of the magnetic fluid in crossed magnetic fields, taking into account the magnetodipole interactions, the demagnetizing fields, and the field dependence of the magnetization relaxation times. For a monodisperse fluid, the dependences of the output signal on the bias field and the probe field frequency were constructed. As can be seen from Equation (20) and Figure 3, in weak and moderate fields the dynamic effects manifested themselves already at ωτB ≈ 1, but the degree of their influence on the output signal depended on the initial susceptibility of the solution: the higher the susceptibility, the stronger the dynamic effects. With an increase in the bias field, the relaxation times decreased according to Equation (12), and at ξ0 > 10 the dynamic effects became negligible.

The experimental studies were conducted at room temperature in the frequency range from 37 Hz to 80 kHz for two samples of magnetic fluids based on colloidal magnetite and kerosene, which differed in the width of the particle size distribution. The first sample had a narrow particle size distribution, low-energy magnetodipole interactions, and a low probability of aggregate formation. The weak dispersion of the dynamic susceptibility at frequencies up to 80 kHz appears to be due to the absence of aggregates. The dynamics of this sample in weak crossed fields (up to 2 kA/m) corresponded to the above statements: the slopes of the E*(H) curves were approximately the same for all frequencies except the highest, 80 kHz. This result looks natural, since the signal dispersion is insignificant owing to the absence of large particles in the solution and the small Brownian relaxation time (ωτB << 1). With increasing bias field strength, the relaxation times decreased according to Equation (12), and the signal dispersion should have decreased further, as shown in Figure 3A. However, the experiment demonstrated the opposite effect: with increasing bias field, the signal dispersion did not decrease but increased. Thus, at H0 ≥ 5 kA/m, a frequency dependence was already detectable at 9 kHz, which implies a three- to four-fold increase in the Brownian relaxation time. In our opinion, such paradoxical behavior of sample No. 1 can only be explained by the formation of short chains in the magnetic fluid due to the anisotropic dipole-dipole interparticle interactions and the bias field.

The second sample had a broad particle-size distribution, high-energy magnetodipole interactions, and a strong dispersion of the dynamic susceptibility already at frequencies of several hundred hertz (Figure 4).
The characteristic sizes of the fluctuating particles estimated from Equation (12) show that the role of such particles is played by quasi-spherical clusters with a characteristic size of 100 nm. These sizes agree with the results obtained by other methods reported previously [8-10,25,26]. In the crossed fields, the second sample showed a 14-fold increase in the signal amplitude compared with sample No. 1 and a simultaneous eight-fold decrease in the field corresponding to the maximum of the E(H) curve. The strong dependence of the output signal on the particle size allows the crossed-field method to be applied to evaluating the maximum diameter xmax of the magnetic core of the particles: xmax = 13 nm for sample No. 1 and xmax = 28 nm for sample No. 2. The same values of the maximum diameter (within the experimental error) were obtained in our study using the correlation between xmax and the moments of x of the third and sixth orders [34]. These results demonstrate the necessity of re-evaluating the applicability of the lognormal and Γ-distributions to the computation of high-order moments. In the case of a broad particle size distribution, such calculations are incorrect because of the long tails. This problem can be solved by truncating the tails according to Equation (23). However, the truncation makes simple formulas such as Equation (7) inapplicable for arbitrary moments of x, requiring numerical calculation of the corresponding integrals.
The Bipartite Rac1 Guanine Nucleotide Exchange Factor Engulfment and Cell Motility 1/Dedicator of Cytokinesis 180 (Elmo1/Dock180) Protects Endothelial Cells from Apoptosis in Blood Vessel Development

Background: The endothelium requires stabilization factors for efficient blood vessel formation. Results: Engulfment and cell motility 1/dedicator of cytokinesis 180 (Elmo1/Dock180) expression reduces endothelial cell apoptosis in vitro and in vivo. Conclusion: Elmo1/Dock180 protects endothelial cells from apoptosis and acts as a stabilization factor for the endothelium. Significance: First report revealing a cell-intrinsic, pro-survival function of Elmo1/Dock180 in the endothelium.

Engulfment and cell motility 1/dedicator of cytokinesis 180 (Elmo1/Dock180) is a bipartite guanine nucleotide exchange factor for the monomeric GTPase Ras-related C3 botulinum toxin substrate 1 (Rac1). Elmo1/Dock180 regulates Rac1 activity in a specific spatiotemporal manner in endothelial cells (ECs) during zebrafish development and acts downstream of the Netrin-1/Unc5-homolog B (Unc5B) signaling cascade. However, mechanistic details on the pathways by which Elmo1/Dock180 regulates endothelial function and vascular development remained elusive. In this study, we aimed to analyze the vascular function of Elmo1 and Dock180 in human ECs and during vascular development in zebrafish embryos. In vitro overexpression of Elmo1 and Dock180 in ECs reduced caspase-3/7 activity and the number of annexin V-positive cells upon induction of apoptosis. This protective effect of Elmo1 and Dock180 is mediated by activation of Rac1, p21-activated kinase (PAK), and AKT/protein kinase B (AKT) signaling. In zebrafish, Elmo1 and Dock180 overexpression reduced the total number of apoptotic cells and apoptotic ECs and promoted the formation of blood vessels during embryogenesis. In conclusion, Elmo1 and Dock180 protect ECs from apoptosis by activating the Rac1/PAK/AKT signaling cascade in vitro and in vivo. Thus, Elmo1 and Dock180 facilitate blood vessel formation by stabilizing the endothelium during angiogenesis.
Angiogenesis is a pivotal process, e.g. during embryonic development, and comprises various tightly regulated steps. Stimulation of pre-existing blood vessels with angiogenic growth factors promotes degradation of the basal membrane, detachment of mural cells, selection of tip and stalk cells, and proliferation and migration of endothelial cells (ECs). Eventually, new blood vessels are built by anastomosis and lumen formation (1, 2). Finally, vessel stabilization and pruning are required to form a mature and hierarchically structured vasculature (1, 2). Pruning, or vessel regression, is an active, tightly controlled process driven by changes in hemodynamics and includes endothelial cell-cell contact reorganization, cell retraction, and induction of apoptosis (3-7). The decision whether a newly formed vessel remains or undergoes pruning depends on several factors, such as survival factors, establishment and stabilization of cell-cell contacts, either of ECs or pericytes, deposition of basal membrane, and initiation of blood flow (1, 8, 9). In particular, ECs depend on survival and stabilization factors like VEGF, delta-like ligand 4 (Dll4), fibroblast growth factor (FGF), or angiopoietin-1 (Ang-1) to prevent vessel regression (6, 10-15). The monomeric GTPase Rac1 is a member of the Ras homologue (Rho) family of small G proteins and regulates critical actions during angiogenesis (16, 17). It is involved in actin cytoskeleton reorganization, lamellipodia formation, and thus migration (18, 19). Furthermore, Rac1 regulates cell cycle progression, EC morphology, capillary survival, and pruning by cell retraction (3, 17). Endothelial loss of Rac1 in mice is embryonic lethal at embryonic day (E) 9.5 due to malformation of major vessels, such as the branchial arch arteries, and cardiac defects (19). This highlights the important function of this monomeric GTPase in angiogenesis. Rac1, like other small G proteins, is regulated by guanine nucleotide exchange factors (GEFs). These GEFs mediate the exchange of GDP for GTP and hence activate the monomeric GTPases (20). Usually a multitude of GEFs is available for one small G protein, facilitating cell- and context-specific actions of the protein (16). An unusual GEF for Rac1 is the protein complex Elmo1/Dock180. Whereas canonical GEFs interact with GTPases of the Rho family via their Dbl homology-pleckstrin homology (DH-PH) domains, Dock180 does not possess such a domain; it interacts with Rac1 via its Docker domain (20). The binding of Dock180 to Elmo1 stabilizes the interaction with nucleotide-free Rac1 and additionally localizes the active protein complex at the plasma membrane (21-23). Recent morpholino-based data identified Elmo1 as an important GEF in zebrafish blood vessels acting downstream of the Netrin-1/Unc5B signaling cascade (35). Biochemical data and experiments in zebrafish further identified the activation of the Elmo1/Dock180/Rac1 signaling cascade by Netrin-1; yet, functional experiments in cultured ECs failed to show an activation of EC sprouting in response to Netrin-1 (35). This raised the question about the precise role of Elmo1 and Dock180 in the endothelium. In this study, we have identified a novel, cell-intrinsic survival function of Elmo1/Dock180 in ECs. Overexpression of Elmo1/Dock180 reduces EC apoptosis in vitro and activates the pro-survival signaling cascade phosphoinositide 3-kinase (PI3K)/AKT. Furthermore, Elmo1 and Dock180 overexpression in zebrafish embryos reduces EC apoptosis and thereby induces angiogenesis.
Thus, our findings highlight the importance of Elmo1/Dock180 in angiogenesis by maintaining cell survival during embryonic blood vessel development. Transfection and Transduction of HUVECs-Transfection of HUVECs with siRNAs targeting Elmo1 or Dock180 was performed as described before (35). For adenoviral transduction of HUVECs, full-length cDNA of human Elmo1 or Dock180 was cloned into the adenoviral shuttle vector according to the ViraPower Adenovirus Expression Systems protocol (Invitrogen). Adenovirus particles of dominant-negative (RacN17) and constitutive-active (RacV12) Rac1 were kindly provided by Dr. Mauro Cozzolino (Santa Lucia Foundation, Rome, Italy). Efficient transduction was confirmed by Western blot analysis or immunofluorescence staining of HUVECs. In-gel Sprouting Assay-HUVECs were transfected or adenovirally infected 48 h before the assay. The assay was performed as previously described (39). The spheroids were stimulated upon embedding in collagen with 25 ng/ml VEGF for 24 h. The cumulative sprouting length was set in relation to control and is displayed in percent. Caspase-3/7 and [pSer139]Histone H2AX Assay-To measure caspase-3/7 activity in HUVECs, the Caspase-Glo 3/7 Assay System (Promega) was used according to the manufacturer's protocol. For activity measurement upon staurosporine stimulation, HUVECs were seeded 48 h after transduction in a 96-well plate (1 × 10⁴ cells/well) and cultivated in ECGM for 22 h, followed by stimulation with 2.5 µM staurosporine in ECGM for 2 h. For activity measurement upon serum starvation, HUVECs were seeded in a 96-well plate (5 × 10³ cells/well) ~32 h after transduction and cultivated in ECGM overnight. Then cells were starved in endothelial basal medium (ECBM, PromoCell, 0% FCS) for 24 h. For the inhibition of PI3K using LY 294002, the medium was changed after 8 h of serum starvation and replaced by fresh ECBM complemented with 10 µM LY 294002 (or DMSO at an appropriate concentration in control wells), followed by a further 16 h of incubation. For graphical presentation, the caspase-3/7 activity was set in relation to control and is displayed in percent. Detection of [pSer139]Histone H2AX was performed after Dock180 siRNA transfection, followed by 4 h of serum starvation in ECBM and subsequent Western blot analysis. FACS Analysis of Annexin V-positive Cell Fractions-To analyze HUVECs undergoing apoptosis by FACS, cells were stained with APC-conjugated annexin V (BD Pharmingen). To this end, HUVECs were seeded 48 h after adenoviral transduction in 6-well plates (3 × 10⁵ cells/well) and cultivated in ECGM for 20 h, followed by stimulation with 2.5 µM staurosporine in ECGM for 4 h. For starvation experiments, 1.5 × 10⁵ HUVECs were seeded per well of a 6-well plate and cultivated in ECGM for 24 h. Subsequently, the medium was changed to ECBM (0% FCS), and cells were starved for 24 h. At the end of the incubation time, cells were harvested, washed twice with PBS, and resuspended in 1× Binding Buffer (Bender MedSystems). Cell number was adjusted to 1 × 10⁶ cells/ml and incubated with annexin V-APC and 7-AAD (Beckman-Coulter). FACS analysis was performed by the Mannheim Cell Sorting Core Facility of the Medical Faculty Mannheim using a BD FACSCanto II. For quantification, the annexin V-positive and 7-AAD-negative cell fraction was considered the apoptotic cell fraction. The apoptotic cell fraction was set in relation to control (apoptotic cell fraction of pAd-GFP cells) and is displayed in percent.
Western Blot Analysis-Western blot analysis was performed as previously described (40) using serum-starved HUVECs (overnight, ECBM, 0% FCS). HUVECs were adenovirally infected 48 h prior to the assay. For inhibition of PI3K by LY 294002 or of PAK1 by IPA3, ECBM was changed 30 min (LY 294002) or 1 h (IPA3) before lysis and replaced by ECBM complemented with inhibitor (1 µM LY 294002 or 20 µM IPA3, respectively). Control cells were incubated with ECBM complemented with solvent at appropriate concentrations. For analysis of AKT activity upon Netrin-1 stimulation, BAECs were seeded in a 6-well plate (2.5 × 10⁵ cells/well). Approximately 32 h later, cells were starved (ECBM, 2.5% FCS) for 16 h. Cells were treated with 100 ng/ml Netrin-1 for 5 min. Three independent experiments were performed, and a representative Western blot for each experiment is shown. Rac1 Pull-down Assay-Analysis of Rac1 activity was performed using the Rac1 binding domain of PAK1 fused to GST (GST-PBD) as described before (41). Briefly, HUVECs were seeded in a 10-cm dish (2 × 10⁶ cells/dish) and adenovirally transduced. 32 h after transduction, cells were starved overnight (ECBM, 0% FCS), followed by lysis, Rac1 pull-down, and Western blot analysis. Incubation of control lysate with 100 µM GTPγS (30 min, 37 °C) served as a positive control for Rac1 activation. Three independent experiments were performed, and a representative Western blot is shown. TUNEL Staining of Zebrafish Embryos-For whole-mount TUNEL staining of zebrafish embryos, the ApopTag Red In Situ Apoptosis Detection Kit (Chemicon) was used. For this purpose, 30 hpf zebrafish embryos were fixed for 1 h in 4% PFA/PBS, washed in PBST (1× PBS/0.1% Tween-20), dehydrated in increasing concentrations of methanol, and stored in 100% methanol overnight at −20 °C. Embryos were rehydrated in increasing concentrations of PBST and treated with Proteinase K (10 µg/ml, Roche) for 10 min. Embryos were washed in PBST and postfixed in 4% PFA/PBS for 20 min. After further washing with PBST, embryos were incubated in equilibration buffer for 1 h, followed by overnight incubation with working-strength TdT enzyme at 37 °C. The reaction was stopped by incubation in working-strength stop/wash buffer for 3 h at 37 °C. After washing in PBST, embryos were incubated in rhodamine antibody solution overnight at 4 °C, followed by washing with PBST overnight at 4 °C. The embryos were further fixed in 4% PFA/PBS for 30 min, followed by washing in PBS. For quantification of apoptotic cells, embryos were embedded in 1% low-melting agarose (Promega) and analyzed by confocal microscopy. Imaging of Zebrafish Embryos-For confocal microscopy, the fixed embryos were embedded in 1% low-melting-point agarose (Promega) and analyzed using a DM6000 B confocal microscope with a Leica TCS SP5 DS scanner (Leica Microsystems). Quantification and Statistics-For quantification of apoptotic cells in zebrafish embryos, confocal images were further processed using ImageJ/Fiji. To exclude counting of nonspecifically stained cells in the yolk sac, trunk tissue was selected using the polygon selection tool, and the selected area was measured. Note: the selected area for counting of total apoptotic cell number and apoptotic ECs remained the same in the respective embryo. Total apoptotic cell number was determined by the analyze-particles tool of ImageJ/Fiji. Apoptotic ECs were counted manually. Counted apoptotic ECs were marked with the point tool to prevent double counting.
The number of total apoptotic cells or apoptotic ECs per µm² was calculated. For a clearer graphical presentation, values are set in relation to the control group (mOrange RNA) and displayed as total apoptotic cells or apoptotic ECs in percent (n = 15-17 at 30 hpf from at least three independent experiments). For quantification of mean intersomitic vessel (ISV) length in 30 hpf zebrafish embryos, the length of ISVs was measured using the segmented line tool of ImageJ/Fiji, and the mean ISV length per fish was calculated. Results are displayed as mean ISV length/fish in µm (n = 12-14 from three independent experiments). Quantification of Western blot signals was performed using Gel-Pro Analyzer 6.0 software (Media Cybernetics). Sample signals were set in relation to their respective loading controls and are displayed in percent. All results in this publication are expressed as means ± S.E. Comparisons between groups were analyzed by Student's t test for in vitro assays and the Mann-Whitney U test for in vivo assays (SPSS 21). p values <0.05 were considered statistically significant. Elmo1 and Dock180 Protect ECs from Apoptosis-The function of Elmo1 in zebrafish vascular development acting downstream of the Netrin-1/Unc5B signaling cascade has recently been described (35). However, in contrast to the in vivo data, Netrin-1 stimulation of cultured ECs did not induce sprouting in an in-gel sprouting assay (35). Since Netrin-1's identified downstream effector, the small GTPase Rac1 (24, 35), is well known to mediate migration of ECs (18, 19, 43), this raised the question of the precise function of Elmo1 and Dock180 in the endothelium. Since Rac1 has already been described to act downstream of the VEGFR2 signaling cascade (18, 44), we aimed to further identify the importance of the Rac1 activator Elmo1/Dock180 for EC function downstream of VEGFR2. To this end, we silenced Elmo1 and Dock180 expression in ECs by siRNA transfection (35) and performed an in-gel sprouting assay upon VEGF stimulation (Fig. 1, A-C). The stimulation with VEGF led to an increase of the relative cumulative sprouting length of control cells up to 300%. However, the knockdown of Elmo1 or Dock180 did not alter the VEGF-increased sprouting. We further analyzed the sprouting response in an Elmo1 gain-of-function experiment. To this end, we generated an adenovirus overexpressing Elmo1. The functionality of the virus was confirmed by the elevated Elmo1 protein level after adenoviral transduction of ECs (Fig. 1D) as well as by an enhanced activity of the Rac1 downstream target extracellular-signal regulated kinases 1/2 (Erk1/2) (44) (Fig. 1E). Interestingly, the overexpression of Elmo1 in ECs significantly increased the basal cumulative sprouting length. Yet, increased Elmo1 expression had no additional effect on the VEGF-induced sprouting response (Fig. 1F). Thus, although Rac1 has been described to mediate migration downstream of VEGFR2 signaling, loss-of-function and gain-of-function analyses showed no involvement of its activator Elmo1/Dock180 in the VEGF-mediated endothelial sprouting process in vitro (Fig. 1). This finding suggests an additional and new function for Elmo1/Dock180, rather than regulating VEGF-induced cell migration in the endothelium. Recently, an anti-apoptotic and pro-angiogenic function for Netrin-1 and its receptor Unc5B has been identified in the endothelium (45).
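As an illustration of the quantification and statistics workflow described above under "Quantification and Statistics" (a minimal sketch of mine, not the authors' script; the function names and example numbers are hypothetical), apoptotic counts are converted to densities per µm², normalized to the control (mOrange) group, and compared with a Mann-Whitney U test:

import numpy as np
from scipy.stats import mannwhitneyu

def apoptotic_density(counts, areas_um2):
    # Apoptotic cells per um^2 for each embryo (count / selected trunk area).
    return np.asarray(counts, dtype=float) / np.asarray(areas_um2, dtype=float)

def percent_of_control(values, control_values):
    # Express each embryo's density as percent of the control-group mean.
    return 100.0 * np.asarray(values) / np.mean(control_values)

# Hypothetical example data: counts and measured trunk areas (um^2) per embryo.
control = apoptotic_density([12, 15, 11, 14], [5.0e4, 5.2e4, 4.8e4, 5.1e4])
treated = apoptotic_density([5, 4, 6, 5], [5.1e4, 4.9e4, 5.0e4, 5.2e4])

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(percent_of_control(treated, control))
print("significant" if p < 0.05 else "not significant", p)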
Since Elmo1 regulates vascular development downstream of Netrin-1 in the zebrafish (35), we addressed the question whether Elmo1 and its complex partner Dock180 maintain cell survival in the endothelium. To this end, ECs were transduced with adenoviruses to overexpress Elmo1 and/or Dock180 (Figs. 1D and 2A). To determine apoptosis, the activity of caspase-3/7 was assessed. Induction of apoptosis by treatment of ECs with staurosporine significantly enhanced caspase-3/7 activity in control cells up to 400% (Fig. 2B). However, a significant reduction in caspase-3/7 activity upon staurosporine incubation was detected in Elmo1-, Dock180-, and Elmo1/Dock180-overexpressing cells compared with control cells. Surprisingly, combined overexpression of Elmo1 and Dock180 did not result in a further reduction of caspase-3/7 activity as compared with single transduction using Elmo1 or Dock180 alone. This suggested that the overexpression of one of the complex partners alone already induced a maximal protective effect, which is likely due to the fact that Elmo1 prevents degradation of its complex partner (46). [Fig. 2 legend, displaced here in extraction: Prior to FACS analysis, HUVECs were starved for 24 h (0% FCS); HUVECs grown under standard cultivation conditions (growth medium + 10% FCS) served as control. Right: starvation-induced caspase-3/7 activity is significantly reduced in Elmo1- and/or Dock180-overexpressing ECs. After adenoviral transduction, HUVECs were serum starved (0% FCS) for 24 h, and caspase-3/7 activity was measured. Shown is the relative caspase-3/7 activity in % set in relation to control (pAd-GFP). E, starvation-induced phosphorylation of H2AX, monitoring activation of apoptosis in HUVECs, was enhanced after siRNA-mediated expression silencing of Dock180. Actin served as loading control. n = 3 for each experiment. *, p < 0.05; #, p < 0.05 versus pAd-GFP + staurosporine. Error bars indicate S.E.] Alternatively, the overexpression of the catalytically active Dock180 alone might be sufficient for the maximal activation of Rac1 by binding to endogenously expressed Elmo1, and vice versa. A reduction in the number of apoptotic cells was also observed when HUVECs were harvested after staurosporine treatment and stained with annexin V for FACS analysis, a protein which binds phosphatidylserine and therefore marks apoptotic cells (47) (Fig. 2C). The EC fraction positive for annexin V in the staurosporine-treated control group was elevated up to 400%. The overexpression of Elmo1, Dock180, and Elmo1/Dock180 significantly reduced the apoptotic cell number upon staurosporine treatment (Fig. 2C). To further confirm these findings, apoptosis was induced by serum starvation, which more accurately simulates physiological conditions. To this end, ECs were serum starved for 24 h, followed by determination of caspase-3/7 activity. Consistently, enhanced protein levels of Elmo1, Dock180, and Elmo1/Dock180 reduced the starvation-induced increase of caspase-3/7 activity (Fig. 2D). In contrast, siRNA-mediated Dock180 loss-of-function experiments increased apoptosis, as monitored by increased [pSer139]Histone H2AX phosphorylation (Fig. 2E). This further strengthens the data shown in Fig. 2, B and C, uncovering a new, cell-intrinsic pro-survival function for Elmo1/Dock180 in ECs.
Elmo1 and Dock180 Maintain Survival of ECs via the Rac1/AKT Signaling Cascade-To identify the downstream signaling cascade which mediates the endothelial protective function of Elmo1/Dock180, we interfered with known mediators of the activity of one of the key survival factors in ECs, AKT (48). Thus, serum-starved ECs were additionally treated with the PI3K inhibitor LY 294002 (49) to blunt PI3K-dependent AKT signaling. Although caspase-3/7 activity in Elmo1- and/or Dock180-overexpressing ECs is strongly reduced (Fig. 3A, black bars), the inhibition of PI3K activity in Elmo1- and/or Dock180-expressing cells led to a significant increase in caspase activity (Fig. 3A, gray bars). To further verify the activation of AKT in Elmo1- and/or Dock180-expressing ECs, AKT phosphorylation was determined by Western blot analyses. Overexpression of Elmo1 and/or Dock180 led to an enhanced activation of AKT and, concordantly, inhibition of PI3K activity by LY 294002 strongly attenuated this effect (Fig. 3B). This highlights the dependence of the protective role of Elmo1 and Dock180 in ECs on PI3K and AKT function. We further aimed to address the question whether the activation of AKT by Elmo1 and Dock180 is mediated by Rac1 signaling, as AKT has recently been described to act downstream of Rac1 (48, 50). To this end, Rac1 activity was determined by Rac1 pull-down assays in ECs, which showed a strong increase in Rac1 activation when Elmo1 and Dock180 protein levels were enhanced (Fig. 4A). Consequently, to demonstrate Rac1-dependent AKT activation, Western blot analyses for phosphorylated AKT were performed in ECs overexpressing Elmo1 and/or Dock180. Yet, to block Rac1 signaling, a dominant-negative form of Rac1 (RacN17) (51) was additionally overexpressed. As already shown before (Fig. 3B), the overexpression of Elmo1, Dock180, and Elmo1/Dock180 alone increased AKT activation. Inhibition of endogenous Rac1 activity by dominant-negative RacN17 expression strongly reduced this effect (Fig. 4B). Since Rac1 is known to mediate PI3K/AKT signaling via activation of its downstream effector PAK1 (48, 52, 53), AKT activity in ECs was further determined in the presence of the PAK1 inhibitor IPA3 (Fig. 4C). Inhibition of PAK1 led to a strong reduction of Elmo1- and/or Dock180-mediated AKT phosphorylation, which demonstrates the activation of AKT via PAK1. Thus, Elmo1/Dock180 protects ECs from apoptosis by activation of the Rac1/AKT signaling cascade in vitro. Recent data suggested a pro-survival function of Netrin-1 via its receptor Unc5B by decreasing caspase activity in ECs (45). To link the protective action of Netrin-1 to the survival function of Elmo1/Dock180, we performed Western blot analyses of AKT activation upon Netrin-1 stimulation of ECs (Fig. 4D). Netrin-1 stimulation of ECs resulted in enhanced AKT activation. Thus, Netrin-1 activates the same protective machinery as that now shown for Elmo1/Dock180 (Fig. 3B). Elmo1 and Dock180 Maintain EC Survival in the Zebrafish Embryo-To study the protective function of Elmo1/Dock180 in a physiological context, vascular development in tg(fli1:EGFP) zebrafish embryos was analyzed. This transgenic line expresses endothelial-specific EGFP and therefore permits the observation of blood vessel formation. For the enhancement of Elmo1 and Dock180 protein expression in zebrafish, the respective mRNAs were injected into 1-cell stage embryos (Fig. 5A). To induce apoptosis, zebrafish embryos were incubated with staurosporine starting at 24 hpf for 6 h.
At 30 hpf, embryos were fixed, and TUNEL staining was performed to visualize total apoptotic and endothelial apoptotic cells in the zebrafish trunk. In control zebrafish embryos, which were injected with mOrange mRNA, only a few cells underwent apoptosis without staurosporine treatment (Fig. 5B). However, stimulation with staurosporine resulted in a strong increase in total apoptotic as well as endothelial apoptotic cell number (Fig. 5, B-D), up to 500%. Apoptotic ECs were localized in all trunk vessels, such as the dorsal aorta, the posterior cardinal vein, the intersomitic vessels (ISVs), and the dorsal longitudinal anastomotic vessel. Importantly, overexpression of Elmo1 or Dock180 in zebrafish embryos strongly reduced the number of apoptotic cells in toto and, most importantly, of apoptotic ECs (Fig. 5, B-D), to a similar extent. In addition to the apoptotic cell phenotype, vascular alterations were identified in the zebrafish embryos. Staurosporine treatment reduced the mean ISV length in control embryos (Fig. 5, B and E). This reduction in ISV length was significantly attenuated when Elmo1 and Dock180 were overexpressed. Therefore, the in vivo data are in good agreement with the data from cultured ECs (Figs. 2-4) and demonstrate that Elmo1/Dock180 exerts a protective vascular function in a living organism. Taken together, our results demonstrate a previously unknown, cell-intrinsic function of the Rac1 guanine nucleotide exchange factor Elmo1/Dock180 in maintaining EC survival in vitro and in vivo (Fig. 6). DISCUSSION In this study we have identified a novel protective function for the Rac1 activator Elmo1/Dock180 in the endothelium, which is mediated by PAK1, PI3K, and AKT (Fig. 6). First, overexpression of Elmo1 and Dock180 in ECs reduces apoptosis upon staurosporine treatment or serum starvation in a PI3K-, Rac1-, PAK1-, and AKT-dependent manner. Second, in zebrafish embryos, Elmo1 and Dock180 overexpression reduced the number of apoptotic ECs after apoptosis induction and rescued vascular malformation. Thus, Elmo1/Dock180 apparently acts as a survival factor during early vascular development. Rac1 regulates apoptosis in several cell types (17, 54). Yet, controversial results have so far been obtained as to whether Rac1 has a pro- or anti-apoptotic function (17, 54-59). Monomeric GTPases mediate different biological functions due to their cell-type-, spatially, temporally, and context-dependent regulation by GEFs (16). As this has also been shown for Rac1 (16), it seems very likely that Rac1's pro- or anti-apoptotic function is mediated by the presence of specific activators or inhibitors. The Rac1 GEF Dock180 mediates survival in glioblastomas, epiblasts, and cardiomyocytes (60-62). So far, little is known about apoptosis regulation in the vasculature by Rac1 and the involved Rac1 GEFs (63). Nevertheless, the expression of Elmo1/Dock180 is spatially and temporally regulated in the zebrafish vasculature during embryonic development. In early stages of vessel formation in zebrafish embryos, Elmo1 is highly expressed in the dorsal aorta, in the posterior cardinal vein, and in the ISVs. However, at later stages vascular expression of Elmo1 diminishes, suggesting a specific and transient function in early processes of vascular development (35). During angiogenesis, newly formed blood vessels require stabilizing and survival factors; otherwise they are prone to apoptosis and vessel regression (3-7, 9).
Elmo1 expression in the zebrafish vasculature (35), its protective function in cultured ECs, and its pro-survival function in zebrafish embryos suggest that Elmo1 and its complex partner Dock180 stabilize newly formed blood vessels. In accordance, the enhanced EC apoptosis induced by staurosporine treatment was reduced by overexpression of Elmo1 and/or Dock180 in vivo and in vitro. Furthermore, induction of apoptosis in zebrafish embryos caused a reduction of the mean ISV length at 30 hpf. These vascular malformations were diminished when Elmo1 or Dock180 were overexpressed, highlighting their critical role in fostering EC survival and thereby promoting angiogenesis. In consideration of their spatiotemporal expression in the zebrafish vasculature (35), this strongly supports the concept that expression of Elmo1 and Dock180 is necessary to stabilize the endothelium, especially during early phases of vascular development, when newly formed blood vessels are not yet stabilized and therefore are very sensitive to destabilizing factors. Because Elmo1 and Dock180 also protect non-ECs in the zebrafish embryos, this additionally suggests a survival role in other cells in which these proteins are expressed (25, 33, 35, 64). Activation of Rac1 is also known to be required for endothelial migration in response to VEGF. This activation is mediated by the VEGF receptor type 2 and the GEF Vav2 (65). Elmo1/Dock180 apparently does not contribute to VEGF-induced Rac1 activation, as gain-of-function and loss-of-function experiments for Elmo1 and Dock180 in ECs did not alter the VEGF-dependent sprouting response. These findings therefore highlight that Rac1's different functions in ECs are attributable to the spatially and temporally resolved activation of the GTPase by specific GEFs during vascular development. Although Rac1 has already been described to act downstream of the PI3K/AKT signaling cascade in the endothelium, these data are rather linked to the activation of the aforementioned VEGFR2-dependent migratory pathways (66). This study provided clear evidence that Elmo1/Dock180-induced Rac1 activity regulates EC survival via a PAK1- and PI3K-dependent activation of AKT (Fig. 6). [Fig. 5 legend, displaced here in extraction: Overexpression of Elmo1 or Dock180 significantly reduces the total apoptotic and the apoptotic endothelial cell number. For quantification, the apoptotic cell number/µm² was determined and set in relation to control; values are displayed in % (n = 15-17 embryos per group). E, staurosporine treatment reduces the mean ISV length in zebrafish embryos at 30 hpf; Elmo1 and Dock180 overexpression rescues (#) this effect. Upon 6 h of staurosporine treatment, ISV length in the zebrafish trunk was measured, and mean ISV length/fish in µm was calculated (n = 12-14 embryos per group). *, p < 0.05 versus control; #, p < 0.05 versus mOrange mRNA + staurosporine. Error bars indicate S.E.] Thus, the protective effect of Elmo1/Dock180 overexpression was significantly attenuated when PAK1 and PI3K activity was inhibited by IPA3 and LY 294002. In accordance, Elmo1/Dock180 overexpression enhanced AKT activation in a Rac1-, PAK1-, and PI3K-dependent manner. This finding places the survival factors PI3K and AKT downstream of Rac1 and PAK1 in ECs and is consistent with recently published studies (50, 67, 68). However, inhibition of PI3K did not result in a complete blockage of the Elmo1/Dock180 protective function in the caspase-3/7 assay.
Since Rac1 is known to mediate survival by the regulation of reactive oxygen species (ROS), direct interaction with the pro-survival protein Bcl-2, phosphorylation of Bad, or transcriptional up-regulation of eNOS (69-72), an additional PI3K/AKT-independent protective role cannot be completely excluded in ECs and will be addressed in subsequent experiments. Elmo1 has recently been described to mediate angiogenesis downstream of Netrin-1/Unc5B. Although it did not induce angiogenic sprouting in in-gel sprouting assays in vitro, Netrin-1 acted as a guidance factor regulating the pro-angiogenic function of Elmo1 in zebrafish embryos (35). Yet, the exact biochemical mechanisms underlying this pathway remained unclear (35). Some data in the literature suggest an anti-angiogenic function of Netrin-1 and its endothelial receptor Unc5B (73, 74), which is in conflict with recently published data showing a pro-angiogenic and pro-survival function of this signaling cascade (35, 45, 75). However, this controversy can be explained by the dependence-receptor function of the Netrin receptors: they induce apoptosis if the ligand is not bound, but mediate survival and other processes if the ligand is present (45, 76). Thus, Netrin-1 was shown to mediate survival in ECs (45). Nevertheless, the exact signaling cascade had not been elucidated so far. In this study, Western blot analysis of AKT activation in ECs upon Netrin-1 stimulation further supports this pro-survival effect of the Netrin-1/Unc5B signaling cascade. In addition to the identification of Elmo1 as a downstream effector of Netrin-1 (35), the data obtained in this study now explain how Elmo1/Dock180 regulates EC survival and therefore present a pathway by which the Netrin-1/Unc5B/Elmo1/Dock180/Rac1 cascade acts as a pro-survival and pro-angiogenic factor in vascular development (Fig. 6). In conclusion, this study identified Elmo1/Dock180 as a novel protective factor in ECs. This survival function is mediated by the activation of Rac1 and its downstream signaling cascade. Thus, Elmo1/Dock180 acts as a Rac1 GEF that regulates EC survival downstream of Netrin-1.
6,900.6
2015-01-13T00:00:00.000
[ "Biology", "Medicine" ]
Towards an automated analysis of video-microscopy images of fungal morphogenesis Fungal morphogenesis is an exciting field of cell biology, and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector-based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates, and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual-based tracing techniques carried out on the same species or genus. Thus, we show that application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis. Introduction Fungi are microorganisms that grow as tubular cells named hyphae. Hyphae extend by a vesicle-based process of apical growth, and the mechanisms of this process appear to be the same among all groups of fungi in spite of their different evolutionary origins (Bartnicki-Garcia & Lippman, 1969; Bartnicki-Garcia, 1973; Bartnicki-Garcia & al., 1989; Bartnicki-Garcia, 1990, 1996, 2000, 2002; Diéguez-Uribeondo & al., 2004). Apical growth involves a continuous transformation of the highly curved cell wall surface of the hyphal tip into the milder curvature of the subapical region surface. Consequently, to understand the mechanism of apical growth and fungal morphogenesis, it is important to have precise knowledge of hyphal tip geometry. Most of what is known on hyphal morphogenesis has been based on experimental evidence involving the application of new microscopy and image analysis techniques. Thus, the development of enhanced video-microscopy has made it possible to provide real-time digital contrast and to study growing cells at high resolution (Bartnicki-Garcia & al., 1989; López-Franco & al., 1994). Manual measurements from microscopy image frames have been performed with a Windows application named "fungus simulator" that interfaces with the Argus-10 Hamamatsu® image processor (Bartnicki-Garcia & al., 1994). This application, together with tracing options from commercial computer programs such as ImagePro Plus® for Windows®, has allowed several crucial parameters of fungal morphogenesis to be described in fine detail, i.e., elongation rates (López-Franco & al., 1994) and growth directionality (Riquelme & al., 1998), the role of a structure operating as a vesicle supply center, i.e., the Spitzenkörper, to be described (Bartnicki-Garcia & al., 1995a,b; Reynaga-Peña & al., 1997), and the nature of the forces driving fungal cell wall expansion to be identified (Bartnicki-Garcia, 2002). Recently, two Windows®-based computer routines have been devised to make quantitative comparisons of hyphal shapes (Diéguez-Uribeondo & al., 2004).
Thus, image analysis studies have been used as a tool to provide evidence to support or question previous mathematical models describing hyphal growth and shape (see review by Bartnicki-Garcia, 2002). The most plausible model is a two-dimensional model, the so-called "VSC model" (Bartnicki-Garcia & al., 1989; Bartnicki-Garcia & al., 2000; Bartnicki-Garcia, 2002), which explains the mechanism of hyphal tip growth on the basis of a simple mathematical equation, y = x cot(xV/N). The equation relates the y of each x,y coordinate pair of the hyphal profile to the number of wall-building vesicles (N) randomly released from a vesicle supply center (VSC) per unit time and the rate of advancement of the VSC (V). The relationship between the maximal diameter of a hypha and the distance of the VSC to the pole is defined by the equation D = 2πd, where D is the maximal diameter of the hypha and d is the distance of the VSC to the apical pole. This allows the shape of all hyphae to be standardized and hyphal shapes to be compared among species. An idealized hypha can be divided into three main regions (Fig. 1): the apical region, from the pole to 2d; the subapical region, from 2d to 20d; and the mature region, beyond 20d (Bartnicki-Garcia, 1990; López-Franco & Bracker, 1996). Thus, research on hyphal morphogenesis is based on accurate descriptions of hyphal profiles and measurements of parameters such as the location of the apical pole and the VSC, the diameter at a certain distance from the pole, elongation rates, etc. New outstanding questions on hyphal tip morphogenesis are being addressed (see review by Bartnicki-Garcia, 2002). The techniques developed so far for these studies have clearly allowed relevant information to be retrieved. However, much more efficient analysis and interpretation of data could be achieved if new techniques, i.e., image and data processing, databases, etc., were incorporated into these studies and automated in a single application. Automated identification of hyphal contours, instead of manual tracing, represents the first step towards the development of an efficient, user-friendly image analysis tool. Edge detection is a fundamental operation in image processing and, therefore, crucial to this goal. Current methods for edge detection are based on the application of wavelet operators (Placidi & al., 2003; Song & al., 2003; Turiel & al., 2003; Wang & al., 2003; Dai & al., 2004; Duccottet & al., 2004; Gleich & al., 2004; Junxi & al., 2004). Traditionally, the operators used for edge detection were based on Fourier series. However, one of the main advantages of wavelets is that they are well suited for approximating data with sharp discontinuities and easily characterize local regularity, which is an important property for biological images (Graps, 1995). Wavelets have mainly been developed in the fields of mathematics, quantum physics, electrical engineering, and seismic geology (Graps, 1995), and have mainly been applied to image compression, turbulence, human vision, radar, and earthquake prediction. The applications of wavelets represent an unexplored and exciting field: wavelet techniques have not been thoroughly worked out in applications such as image analysis of biological systems. Fig. 1. Regions in the idealized shape of a hypha according to the hyphoid model (Bartnicki-García, 1990; López-Franco & Bracker, 1996). The hypha is divided into three main regions: apical region from 0 to 2d, subapical region from 2d to 20d, and mature region beyond 20d.
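To make the hyphoid equation concrete, the following is a minimal sketch (my illustration, not code from the cited works) that generates the profile y = x cot(xV/N); the VSC-to-pole distance follows from the x → 0 limit as d = N/V, and the maximal diameter as D = 2πd:

import numpy as np

def hyphoid_profile(V, N, n_points=500):
    # d = N/V is the VSC-to-pole distance (the x -> 0 limit of y);
    # the walls approach x = +/- pi*d far behind the tip, so D = 2*pi*d.
    d = N / V
    D = 2.0 * np.pi * d
    x = np.linspace(1e-9, 0.999 * (D / 2.0), n_points)  # transverse coordinate
    y = x / np.tan(x * V / N)                           # cot(z) = 1/tan(z)
    return x, y, d, D

# Hypothetical rates: V = 0.24 um/s (the mean rate reported below) and N
# chosen so that d = 2 um, giving D = 2*pi*2 ~ 12.6 um.
x, y, d, D = hyphoid_profile(V=0.24, N=0.48)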
In this work, we describe the application of an edge detector, i.e., Canny's edge detector, equivalent to detecting modulus maxima in a wavelet transform (Mallat, 1992), to video-microscopy images of hyphal growth. Thus, the goal of this study was to develop the initial requirement of an automated application for the analysis of video-microscopy images. This may allow crucial measurements for studying hyphal morphogenesis, such as elongation rates, hyphal shape, and diameter, to be obtained in real time in an efficient and rapid way. Cell images The fungal strain used in the experiment was Saprolegnia parasitica Coker (SpT), kindly provided by Dr. Kenneth Söderhäll, Department of Comparative Physiology, University of Uppsala, Sweden. Fungal strains were maintained at 22 ± 2 °C on potato dextrose agar (PDA) at pH 5.5. For microscopic observations and recordings, fungi were inoculated in Petri dishes containing thin layers (ca. 0.5 mm) of PDA. For the final analysis, we grew the fungi on PDA, a medium on which branching was less frequent and longer lengths of primary hyphae could be measured. The inoculum consisted of a ca. 5 mm plug excised from a colony edge and placed in the center of the agar layer. The fungi were allowed to grow for 24-36 h before observations. Colonies that had grown at least 2 cm on PDA were selected for analysis. Before observation, a drop of Difco potato dextrose broth was added to the edge of the colony and carefully covered with a square cover slip (22 × 0.1 mm thick; Carolina Biological Supply Co.). For the study, only randomly selected hyphae from the growing edge of young colonies were used. Video-microscopy Petri dishes were placed on the stage of an Olympus Vanox microscope, and hyphal growth was monitored with bright-field optics (40× objective and 25× WF eyepiece; American Optical). Hyphal shapes were imaged with a Hamamatsu® C2400-07 video camera; images were enhanced with an Argus-10 digital processor (Hamamatsu® Photonic Systems, Bridgewater, NJ), recorded on S-VHS videotapes, and displayed on a 12-inch black-and-white monitor (Sony® Model PVM-122). The analog video sequence, consisting of 30 frames per second, was digitized into AVI files using a JVC HR-XVS20 video player connected to a computer by an mvDELTA-BNC frame grabber (http://www.matrix-vision.com/products/hardware/mvdelta.php?lang=en). The analog signal was processed by a plugin of the software I+Solex® (Image Solex Inc.) and converted into a video sequence in AVI format. The image format used was 8-bit gray scale.
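As a sketch of the loading step only (the file name is hypothetical, and this Python snippet, assuming the imageio package with its ffmpeg plugin, merely stands in for the I+Solex® plugin described above), the digitized 8-bit gray-scale AVI can be read into an array for processing:

import imageio.v3 as iio
import numpy as np

frames = iio.imread("hyphal_growth.avi")   # shape (n_frames, height, width[, 3])
if frames.ndim == 4:                       # collapse RGB planes to gray scale
    frames = frames.mean(axis=-1).astype(np.uint8)
fps = 30.0                                 # 30 frames per second, as stated above
print(frames.shape, "duration:", frames.shape[0] / fps, "s")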
Image processing Image processing of the digitized video sequences was performed in the following steps: (1) Selection of the area of analysis. This consisted of selecting video sequences with straight-growing hyphae in which the hyphal tip remained in focus throughout the sequence. (2) Edge detection. The edge of interest was the limit of the hypha, and the criterion used to define this limit was the inner boundary of the cell wall. The methodology followed to detect the limits was similar to the Canny-Deriche edge operator (Canny, 1986) described at http://bigwww.epfl.ch/demo/jedgedetector/index.html. Basically, the processing consisted of smoothing (using a Gaussian smoothing operator), gradient computation, non-maximum suppression, and hysteresis thresholding. (3) Diameter calculation. The algorithm used was programmed to detect the apex of the hypha. The diameters were calculated at distances of 1d, 2d, and 5d from the apex; these distances can be changed according to the experimental samples. The Java-based computer program I+Solex® (Image Solex Inc.) was used for this purpose. Data analysis The x,y coordinate pairs were obtained as txt files. Diameter and elongation rate calculations were processed and graphed with Microsoft Excel. Hyphal profiles were studied with two computer routines designed to study diameter fluctuations and the percentage of hyphoid fitness, i.e., "diameter tool" and "hyphoid fitness" (Diéguez-Uribeondo & al., 2004), available at http://www.is-si.com/fungal-morphogenesis/oomycetes.htm. Experimental validations To test the validity of the edge detector applied, several morphogenetic parameters calculated from the profiles obtained with this technique were compared with published data on the same parameters for the same fungal genus (López-Franco & al., 1994) and species (Diéguez-Uribeondo & al., 2004) obtained with manual tracing techniques. Ten hyphal profiles from different time frames were analysed. The parameters used for these comparisons were diameter, elongation rates, and concordance with the theoretical hyphal shape, i.e., hyphoid fitness. The increase in diameter from the pole towards the end of the hypha was studied using the computer routine "diameter tool". The elongation rates were calculated as described above. The calculation of the hyphoid fitness requires a previous step consisting of the straightening of the actual profile of the hypha; this process is carried out by the computer routine "hyphoid fitness" (Diéguez-Uribeondo & al., 2004). Results The process of image analysis is summarized in Fig. 2. The source of images used for testing the edge detector was a sequence of 764 images, equivalent to a period of growth of about 28 s of videotaping (Figs. 2a-c). The profiles generated by the edge detector approximated the inner boundary region of the fungal cell wall, as shown in Figs. 2d-f. The hyphal profiles and the lengths of the diameter at distances from the pole of 1d, 2d, and 5d are shown in Figs. 2g-i. The validation of the technique was done by studying the hyphal morphogenetic parameters diameter, hyphoid fitness, and elongation rate. Diameter The computer routine "diameter tool" allowed studying the variation of diameter of hyphal profiles. The profiles obtained with the edge detector approximate the theoretical hyphoid shape of a hypha (Fig. 3).
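A minimal sketch (my approximation, not the original Java routine) of steps (2) and (3): skimage.feature.canny bundles the Gaussian smoothing, gradient computation, non-maximum suppression, and hysteresis thresholding listed above, and the diameter is then measured transversally at a chosen distance behind the apex; the apex position and growth-axis direction are assumed to come from a separate detection step:

import numpy as np
from skimage.feature import canny

def hyphal_edges(frame, sigma=2.0):
    # Boolean edge map -> (row, col) coordinates of the hyphal profile.
    edges = canny(frame.astype(float), sigma=sigma)
    rows, cols = np.nonzero(edges)
    return np.column_stack([rows, cols]).astype(float)

def diameter_at(profile, apex, axis_dir, distance, tol=0.5):
    # Project edge points onto the growth axis (unit vector axis_dir) and
    # take the transverse spread of the points lying within +/- tol pixels
    # of the requested distance behind the apex.
    rel = profile - apex
    along = rel @ axis_dir
    across = rel @ np.array([-axis_dir[1], axis_dir[0]])
    near = np.abs(along - distance) < tol
    return across[near].max() - across[near].min() if near.any() else np.nan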
The profiles automatically obtained with the edge detector exhibited diameter properties similar to those of profiles obtained using manual-based tracing applications (Diéguez-Uribeondo & al., 2004). The progressive increase in diameter from the pole towards the mature region of the hyphal tube was not entirely even but was punctuated by irregular fluctuations (Fig. 3). The amplitude of the fluctuations ranged between ca. 0.1 and 0.6 µm (1% to 5% of the mean diameter in the hyphal region). The possibility of obtaining automated hyphal profiles with this edge detector technique, and consequently x,y coordinate pairs, allowed an easy calculation of changes in diameter at a fixed distance from the pole during short periods of time, e.g., 1/30 s. The diameter at a fixed distance from the pole fluctuated irregularly during growth. This was observed at three regions of the hypha, corresponding to the apical and subapical region diameters. The fluctuations in diameter were simultaneous and proportional in intensity in the three regions of the hypha (Fig. 4). Hyphoid fitness The computer routine "hyphoid fitness" was used to compare shape differences between the edge detector profiles and the theoretical hyphoid profile (Fig. 5). The percentages of hyphoid fitness were 97.88%, 97.93%, and 97.91% at distances from the pole of 1d, 2d, and 5d, respectively. This is similar to the hyphoid fitness previously described for S. parasitica hyphal profiles that 1) were grown and videotaped under the same conditions and 2) were obtained with manual-based applications (Diéguez-Uribeondo & al., 2004). Elongation rate fluctuations Under our experimental conditions, the mean elongation rate was 0.24 µm/s (Fig. 6). The elongation rate was not steady but pulsated continuously (Fig. 6), as was shown before for hyphae of different fungal species with manual tracing techniques (López-Franco & al., 1994). This pulsating growth could be observed in all time periods studied, i.e., 1 and 2 s. However, when we pushed the calculation of the elongation rates to the limit of the technique, i.e., 1/30 s, we could only detect a minimum progression of the pole of ca. 1 pixel (0.1384 µm). When elongation rates were calculated for this time period, i.e., 1/30 s, we observed displacements of the advancing hyphal pole both forwards and backwards (Fig. 6c). Discussion In this study, a Canny-edge-detector-based technique was applied to quantify and characterize the profiles of fungal hyphae and to relate these to other parameters of crucial relevance in fungal morphogenesis studies. Video-microscopy image sequences that had previously been studied with manual-based tracing techniques were used to test the validity of our approach. The profiles of the hyphae and their morphogenetic parameters and properties, i.e., diameter variation, hyphoid fitness, and pulsed growth, obtained with the edge detector were as accurate as those obtained with manual-based techniques.
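For reference, the elongation-rate calculation discussed in the Results above reduces to the pole displacement per unit time; this sketch (mine, with the pixel size and frame rate taken from the text) shows why a 1/30 s interval resolves only single-pixel steps:

import numpy as np

PIXEL_UM = 0.1384   # one pixel, as stated above
FPS = 30.0          # video frame rate

def elongation_rate(pole_xy_px, interval_frames):
    # Rate in um/s from pole positions (pixels), over windows of
    # `interval_frames` (30 frames = 1 s, 1 frame = 1/30 s).
    p = np.asarray(pole_xy_px, dtype=float)
    step = p[interval_frames:] - p[:-interval_frames]
    dist_um = np.linalg.norm(step, axis=1) * PIXEL_UM
    return dist_um / (interval_frames / FPS)

# At interval_frames=1 the smallest nonzero displacement is one pixel:
# 0.1384 um / (1/30 s) ~ 4.15 um/s, far above the 0.24 um/s mean rate,
# so frame-to-frame rates are quantized and can appear to jump backwards.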
The application of edge-detection-based techniques to fungal growth represents the first step towards an automated generation of hyphal profiles and quantification of morphogenetic parameters. This is important because it provides a more efficient way of carrying out studies on morphogenesis and also allows the possibility of retrieving data in real time. Currently, morphogenetic studies involve several manual-based computer programs, which makes these studies very tedious and time-consuming. The possibility of retrieving data in real time will also facilitate studying cell growth by simplifying the selection of video sequences and the comparison of fluctuations of several parameters at the same time, i.e., diameter and elongation rates, diameters at different distances from the pole, movements of the VSC and their consequences on hyphal shape, etc. However, this technique, at its current stage of development, has some drawbacks, detailed as follows: (1) The technique is limited to sequences of fungal growth along a straight axis. Hyphae often meander during growth (Riquelme & al., 1998; Bartnicki-Garcia, 2002) and, consequently, the position of the hyphal pole changes, and with it the calculation of all pole-based parameters. New functions that efficiently detect the pole and the axis of growth need to be developed for a fully automated analysis of any growing hypha. (2) Because hyphae grow, the characterization of their growth is also limited by the optical field of view. This needs to be moved and, therefore, the corresponding coordinates need to be adjusted. Thus, studies on extended cell growth will require the use of motorized microscopes and the development of programs allowing the automatic movement of the microscope stage and the correction of coordinates. (3) Errors in edge detection can occur as a consequence of manual focusing of hyphal growth. These focusing variations may be responsible for the negative values of elongation rates at time intervals of 1/30 s and/or for the observed simultaneous variations of diameter at fixed distances from the pole. Focusing variations may be due to probable meandering growth of hyphae in the z axis. The improvement of image focusing by incorporating new autofocusing techniques will allow testing whether some of the observed properties, i.e., negative elongation rates and simultaneous variations of diameter at different distances from the pole, are due to artifacts of the technique or represent true biological events, such as a retractile type of growth or variations in elongation rates, respectively. These aspects will also be the subject of future studies. Currently, there is a need for the development of automated techniques for image analysis. The development and application of these techniques in fungal morphogenesis represents an important challenge for researchers in this field. These techniques will ease the generation of accurate morphometric and morphogenetic data and will allow studying properties of growth at much shorter intervals of time than in previous studies based on manual tracing techniques. This work represents the first attempt to automate edge detection in video-microscopy of fungal cells, which may allow the automation of many studies that are currently based on manual, tedious, and dispersed techniques.
Fig. 2. Process of application of a Canny-based edge detector to generate hyphal profiles. a-c, source images corresponding to time frames (a, frame 1; b, frame 315; c, frame 764); d-f, the obtained hyphal profiles superimposed on the corresponding images of the hyphae at the same time frames as above; g-i, generated profiles for each image frame and the calculation of their corresponding diameters at fixed distances from the pole of 1d, 2d, and 5d. (Bar = 5 µm). Fig. 5. Concordance of the hyphal profile generated with an edge detector (----) with the theoretical hyphal shape (-----), i.e., hyphoid fitness. The calculation of the hyphoid fitness requires a previous step consisting of the straightening of the actual profile of the hypha; this process is carried out by the computer routine "hyphoid fitness" (Diéguez-Uribeondo & al., 2004).
4,034.8
2005-06-30T00:00:00.000
[ "Biology", "Computer Science" ]
Field penetrations in photonic crystal Fano reflectors We report here the field and modal characteristics in photonic crystal (PC) Fano reflectors. Due to the tight field confinement and the compact reflector size, the cavity modes are highly localized and confined, with an energy penetration depth into the single-layer Fano reflectors of only 100 nm for a 340 nm thick Fano reflector with a design wavelength of 1550 nm. On the other hand, the phase penetration depths, associated with the phase discontinuity and dispersion properties of the reflectors, vary from 2000 nm to 4000 nm over the spectral range of 1500 nm to 1580 nm. This unique feature offers another degree of design freedom: dispersion engineering for cavity resonant mode tuning. Additionally, the field distributions are also investigated and compared for Fabry-Perot cavities formed with PC Fano reflectors, as well as with conventional DBR reflectors and 1D sub-wavelength grating reflectors. All these characteristics associated with PC Fano reflectors enable a new type of resonant cavity design for a large range of photonic applications. ©2010 Optical Society of America OCIS codes: (140.3948) Fabry-Perot cavity; (999.999) Fano resonance; (230.0230) Optical devices; (050.5080) Phase shift; (999.999) Photonic crystal. References and links 1. S. Boutami, B. Benbakir, X. Letartre, J. L. Leclercq, P. Regreny, and P. Viktorovitch, "Ultimate vertical Fabry-Perot cavity based on single-layer photonic crystal mirrors," Opt. Express 15(19), 12443-12449 (2007). 2. M. Sagawa, S. Goto, K. Hosomi, T. Sugawara, T. Katsuyama, and Y. Arakawa, "40-Gbit/s Operation of Ultracompact Photodetector-Integrated Dispersion Compensator Based on One-Dimensional Photonic Crystals," Jpn. J. Appl. Phys. 47(8), 6672-6674 (2008). 3. A. Chutinan, N. P. Kherani, and S. Zukotynski, "High-efficiency photonic crystal solar cell architecture," Opt. Express 17(11), 8871-8878 (2009). 4. O. Kilic, M. Digonnet, G. Kino, and O. Solgaard, "External fibre Fabry-Perot acoustic sensor based on a photonic-crystal mirror," Meas. Sci. Technol. 18(10), 3049-3054 (2007). 5. J. D. Joannopoulos, S. G. Johnson, J. N. Winn, and R. D. Meade, Photonic Crystals: Molding the Flow of Light, 2nd ed. (Princeton University Press, 2008). 6. U. Fano, "Effects of Configuration Interaction on Intensities and Phase Shifts," Phys. Rev. 124(6), 1866-1878 (1961). 7. R. Magnusson and S. S. Wang, "New principle for optical filters," Appl. Phys. Lett. 61(9), 1022 (1992). 8. S. Fan and J. D. Joannopoulos, "Analysis of guided resonances in photonic crystal slabs," Phys. Rev. B 65(23), 235112 (2002). 9. D. K. Jacob, S. C. Dunn, and M. G. Moharam, "Flat-top narrow-band spectral response obtained from cascaded resonant grating reflection filters," Appl. Opt. 41(7), 1241-1245 (2002). 10. S. T. Thurman and G. M. Morris, "Controlling the spectral response in guided-mode resonance filter design," Appl. Opt. 42(16), 3225-3233 (2003). 11. C. F. R. Mateus, M. C. Y. Huang, L. Chen, C. J. Chang-Hasnain, and Y. Suzuki, "Broadband mirror (1.12-1.62 μm) using single-layer sub-wavelength grating," IEEE Photon. Technol. Lett. 16(7), 1676-1678 (2004). 12. W. Suh and S. Fan, "All-pass transmission or flattop reflection filters using a single photonic crystal slab," Appl. Phys. Lett. 84(24), 4905 (2004). 13. S. Boutami, B. B. Bakir, H. Hattori, X. Letartre, J.-L. Leclercq, P. Rojo-Rome, M. Garrigues, C. Seassal, and P.
Viktorovitch, "Broadband and compact 2-D photonic crystal reflectors with controllable polarization dependence," IEEE Photon. Technol. Lett. 18(7), 835-837 (2006). 14. Y. Ding and R. Magnusson, "Resonant leaky-mode spectral-band engineering and device applications," Opt. Express 12(23), 5661-5674 (2004). 15. M. L. Wu, Y. C. Lee, C. L. Hsu, Y. C. Liu, and J. Y. Chang, "Experimental and theoretical demonstration of resonant leaky-mode in grating waveguide structure with a flattened passband," Jpn. J. Appl. Phys. 46(8B), 5431-5434 (2007). 16. R. Magnusson and M. Shokooh-Saremi, "Physical basis for wideband resonant reflectors," Opt. Express 16(5), 3456-3462 (2008). 17. W. Zhou, Z. Ma, H. Yang, Z. Qiang, G. Qin, H. Pang, L. Chen, W. Yang, S. Chuwongin, and D. Zhao, "Flexible photonic-crystal Fano filters based on transferred semiconductor nanomembranes," J. Phys. D 42(23), 234007 (2009). 18. L. Coldren and S. Corzine, Diode Lasers and Photonic Integrated Circuits (Wiley, New York, 1995). 19. J. H. Kim, L. Chrostowski, E. Bisaillon, and D. V. Plant, "DBR, sub-wavelength grating, and photonic crystal slab Fabry-Perot cavity design using phase analysis by FDTD," Opt. Express 15(16), 10330-10339 (2007). 20. D. Babic and S. Corzine, "Analytic expressions for the reflection delay, penetration depth, and absorptance of quarter-wave dielectric mirrors," IEEE J. Quantum Electron. 28(2), 514-524 (1992). 21. C. Sauvan, J. Hugonin, and P. Lalanne, "Difference between penetration and damping lengths in photonic crystal mirrors," Appl. Phys. Lett. 95(21), 211101 (2009). 22. M. Born, E. Wolf, and A. Bhatia, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge University Press, 1999). Introduction Ultra-compact dielectric broadband reflectors are essential elements in the design of laser cavities, light-trapping microcavities, cavity QED, nonlinear optics, and quantum computing systems [1-4]. Traditionally, they can be realized by using metal films or stacked dielectric thin films (i.e., distributed Bragg reflectors, DBRs). Metal films can offer a large reflection bandwidth but are limited by intrinsic absorption losses. Stacked dielectric thin films can achieve very low losses, but they typically require many individual layers with stringent refractive-index and thickness tolerances for each layer. Two-dimensional photonic crystal slab (2D PCS) broadband reflectors can be realized for in-plane directions based on the photonic bandgap principle [5]. Recently, more attention has been paid to single-layer patterned broadband reflectors operating in the surface-normal incident direction, based on the guided-mode resonance, or Fano resonance [6-8], principles, for single-layer one-dimensional sub-wavelength grating (1D SWG) structures or 2D PCS structures [8-17].
Under surface-normal incidence, DBR, 1D SWG, and 2D PCS reflectors can all exhibit similar reflection properties, with extremely high reflection and a broad reflection spectral band. However, the reflection mechanisms are different. For 1D SWG and 2D PC mirrors, the incident wave couples to the in-plane guided mode based on phase-matching conditions. The wave then reradiates at one edge with a zero phase difference and at another edge with a π phase difference. Consequently, these constructive and destructive interferences result in high reflection and low transmission, respectively [16]. For a DBR, in contrast, the high reflectivity arises from multiple reflections with constructive interference among the reflected waves. Due to the large index difference, Bragg mirrors possess a broad reflection spectral band [18]. For 1D SWG and 2D PCS mirrors, the broad reflection spectral band most likely originates from the cooperation of several adjacent guided-mode resonances [16]. In addition to the reflector spectral amplitude properties, such as high reflectivity R and a broad reflection band, it is equally important to understand the phase discontinuity and dispersion behavior (reflection phase shift, Φr), the field/energy penetration depths (Lp, Le), and the field/cavity modal characteristics in cavities formed by these types of patterned single-layer dielectric reflectors. However, most attention so far has been paid to the spectral reflection amplitude properties; very little work has been reported on phase/dispersion and mode/energy characteristics. In Ref. [19], the authors reported excellent work on the phase discontinuity Φr and the energy penetration depth Le, which were estimated from the mode spacing in a Fabry-Perot (FP) cavity. In this work, we investigate the phase discontinuity and the dispersion properties of 2D PCS Fano reflectors, for applications in multi-wavelength cavity design. We will also discuss the distinctively different behavior of the phase and energy penetration depths. The phase penetration depth Lp is related to the reflection delay: a large phase penetration depth will lead to a longer photon lifetime and a longer cavity resonance. The energy penetration depth Le is related to the energy decay: a smaller energy penetration depth will lead to more modal confinement [18,20,21]. In addition, the different reflection mechanisms, guided-mode resonance in PC mirrors and constructive interference in DBRs, result in distinctively different field distribution profiles inside the cavity, which is another important factor to be considered in laser cavity design. In what follows, we first introduce the cavity configurations of three types of surface-normal dielectric reflectors. Secondly, we investigate and compare the two penetration depths, Lp and Le, according to the calculated Φr and R values, based on the 3D finite-difference time-domain (FDTD) technique. We then compare the field distributions of the resonant modes in FP cavities formed by these three types of dielectric reflectors. Finally, a conclusion is given. Dielectric mirror configurations and corresponding FP cavities Here, we consider three types of dielectric mirrors and their corresponding FP cavities, as shown in Fig. 1. They all consist of two materials, Si and SiO2. Here Si and SiO2 are assumed to be lossless and dispersion-free, with refractive indices of 3.48 and 1.48, respectively, over the spectral range of interest around 1550 nm.
1(a), the top and bottom mirrors of the FP cavity (denoted "Cavity I") are 1D Si SWG structures with the same lattice parameters: Si layer thickness h = 0.46 µm, grating period Λ = 0.7 µm, and air slit width w = 0.25Λ. The bottom SWG mirror is formed on a silicon-on-insulator (SOI) substrate, with a buried oxide (BOX) layer thickness of 0.83 µm. These two 1D SWG reflectors exhibit high reflection over the 1.1–2.06 µm wavelength band for TM polarization only (H field parallel to the air slits, y direction) [16]. Shown in Fig. 1(b) is the second FP cavity ("Cavity II"), formed with top and bottom 2D PCS Fano reflectors, where Si slab layers with thickness h = 0.34 µm are patterned with 2D square-lattice air hole arrays. The PC lattice constant Λ is 0.98 µm. Again, the bottom reflector is processed on an SOI wafer, with a BOX layer thickness of 1 µm. To give the reflection bands of the top and bottom mirrors a large spectral overlap, their air hole radii are set to r_t = 0.26Λ and r_b = 0.28Λ, respectively. The resulting overlapping reflection range of these two 2D PC mirrors covers the 1.49–1.58 µm wavelength band for both TE and TM polarizations (a broader and flatter band could be obtained by carefully optimizing the structure parameters). For comparison, we also consider a third FP cavity ("Cavity III"), based on two classical high-index-contrast Si/SiO2 DBRs, as shown in Fig. 1(c). The DBRs consist of 4 pairs of stacked Si/SiO2 layers, where the thicknesses of Si and SiO2 are chosen to be 0.11 and 0.27 µm, respectively. Such a DBR with a very large index difference possesses a wide reflection band over 1.22–2.07 µm. Note that all the parameters chosen here are optimized for broadband reflectors with peak reflection around 1550 nm.
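The DBR numbers quoted in this paper can be reproduced with standard thin-film (characteristic-matrix) theory, the same reference calculation used below to validate the FDTD results. The following sketch is illustrative only: it assumes the suspended DBR sits in air on both sides and starts with the Si layer, neither of which is specified in the text, and the helper name stack_reflection is ours. It also evaluates the delay-based and analytic penetration depths introduced in the next section.

```python
# Characteristic-matrix calculation of R and the reflection phase for the
# 4-pair Si/SiO2 DBR (0.11/0.27 um, n = 3.48/1.48) at normal incidence.
import numpy as np

def stack_reflection(wl_um, layers, n_in=1.0, n_out=1.0):
    """Complex reflection coefficient of a layer stack at normal incidence.

    layers: sequence of (refractive index, thickness in um) pairs.
    """
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wl_um          # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    return (n_in * B - C) / (n_in * B + C)

n_si, n_sio2, t_si, t_sio2 = 3.48, 1.48, 0.11, 0.27
dbr = [(n_si, t_si), (n_sio2, t_sio2)] * 4

wl = np.linspace(1.2, 2.1, 2000)                     # wavelength grid (um)
r = np.array([stack_reflection(w, dbr) for w in wl])
R, phi = np.abs(r) ** 2, np.unwrap(np.angle(r))

# Reflection delay tau = dPhi_r/domega and phase penetration depth
# L_p = v_g * tau / 2 (incident medium air, v_g = c); the sign of the
# delay follows the phase convention used for r.
c = 2.99792458e8                                     # m/s
omega = 2.0 * np.pi * c / (wl * 1e-6)
L_p = c * np.gradient(phi, omega) / 2.0

# Analytic DBR energy penetration depth (Babic & Corzine style).
m = 4
rho = (n_si - n_sio2) / (n_si + n_sio2)
m_eff = np.tanh(2 * m * rho) / (2 * rho)             # effective period number
L_e = m_eff * (t_si + t_sio2) / 2.0                  # in um

i = np.argmin(np.abs(wl - 1.55))
print(f"R(1.55 um) = {R[i]:.4f}, Phi_r = {phi[i] / np.pi:.3f} pi")
print(f"L_e(DBR) ~ {L_e * 1e3:.0f} nm")              # ~235 nm, cf. ~220 nm quoted
```

The L_p branch of this script is most meaningful inside the high-reflection band, where the phase varies smoothly with frequency.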
Phase penetration depth and energy penetration depth

In dielectric mirrors, reflection is not an instantaneous process. It includes a reflection time delay (τ) and energy storage in the mirrors. The reflection delay increases the laser cavity round-trip time, or the photon lifetime. The energy storage in the mirrors decreases the modal volume and the confinement factor in small cavities, in which the cavity round-trip time and the cavity volume are of comparable magnitude to the reflection delay and the mirror storage [20,21]. The reflection delay is directly related to the slope of the reflection phase shift Φ_r:

τ = ∂Φ_r/∂ω.

The phase penetration depth, L_p, is defined as the half-distance that light propagates in the incident medium during this delay time,

L_p = v_g τ/2, (1)

where v_g is the group velocity of the incident wave. The energy storage is usually characterized by the energy penetration depth, L_e: the length over which the field intensity decays to 1/e of its maximum from the edge of the cavity into the mirror. However, this definition is not suitable for calculating L_e of the PC mirrors discussed here, because guided modes are excited inside the mirrors. Instead, L_e can be estimated from the mirror transmission (or reflection) as

L_e = h/ln(1/T), (2)

where T is the transmission and h is the mirror thickness [18,21]. For DBRs, L_e can be obtained from [18]

L_e = m_eff (t_1 + t_2)/2, (3)

where m_eff = tanh(2mr)/(2r) is the effective period number seen by the incident light, r = (n_1 − n_2)/(n_1 + n_2), m is the actual period number of the DBR, t_1 and t_2 are the thicknesses of the two layers in one period, and n_1 and n_2 are the refractive indexes of the two DBR materials. In the following we numerically investigate the phase and energy penetration depths according to the above definitions.

Reflection and the phase shift

First, we use the FDTD simulation method to obtain R and Φ_r of these dielectric mirrors. A Gaussian temporal pulse excitation is used to simulate the reflectivity R. In order to calculate Φ_r, a continuous plane wave of a single wavelength (λ) is vertically incident on the dielectric mirrors and Φ_r is extracted from the stable reflected field. Here the calculated Φ_r is set in the range [0, 2π]. In order to validate our numerical simulation method, we compare the simulated R and Φ_r of the top DBR based on FDTD with the values calculated theoretically according to multiple-thin-film matrix theory [22]. The calculated results completely overlap with the theoretical ones. Plotted in Fig. 2(a) are the simulated R and Φ_r values for both the top and bottom DBRs. In the high-reflection (R > 0.95) spectral band, covering 1.2 to 2.1 µm, Φ_r slowly increases from 0.815π to 1.19π. Shown in Fig. 2(b) are the simulated R and Φ_r spectra of the top and bottom 1D SWG reflectors. The high-reflection TM (R > 0.95) spectral band spans 1.1 to 2.06 µm. Although both the DBR and the 1D SWG reflector have similarly broad high-reflection spectral bands, their Φ_r changes are very different: Φ_r of the 1D SWG reflector varies rapidly over the full range [0, 2π], much faster than that of the DBRs. The calculated R and Φ_r spectra for the top and bottom 2D PCS reflectors are plotted in Fig.
2(c), where the overlapping high-reflection spectral band covers 1.49 to 1.58 µm. Owing to the different air fill factors (r_t = 0.26Λ < r_b = 0.28Λ), the high-reflection spectral band of the bottom mirror is narrower than that of the top mirror. Within this high-reflection band, the phase shift of the top mirror Φ_rt varies in the range (0.91π, 1.17π), and the phase shift of the bottom mirror Φ_rb varies in the range (0.7π, 1.14π). For all three types of mirrors, Φ_r varies over the high-reflection spectral band, but at drastically different rates, which can be quantified by comparing the phase penetration depths L_p.

Phase penetration depth and energy penetration depth

The phase penetration depth L_p can then be calculated based on Eq. (1) and the simulated Φ_r shown in Fig. 2; the results are plotted in Fig. 3(a), with the phase penetration depths of the 1D SWG and 2D PCS mirrors much larger than that of the DBR. This not only results in a much longer reflection delay time in the PC mirrors, it also leads to different resonance cavity locations. Such a long phase delay may be due to the guided-mode excitation, even though 2D PCS reflectors are very thin. To verify this point, we record the dynamic process of the reflected field at λ = 1.540 µm at a monitor above the 2D PCS reflector and the DBR, as shown in Fig. 3(b). One finds that, for the PCS reflector based on guided-mode Fano resonance, the reflected field reaches a stable condition only after a long period of about 200 fs, while it takes only about 60 fs for the DBR to reach the stable condition. So it is very clear that these two different reflection mechanisms result in very different L_p values in PC reflectors and DBRs. The energy penetration depth L_e can be obtained based on T = 1 − R and Eqs. (2) and (3). The results are shown in Fig. 4 for the bottom DBR, 1D SWG, and 2D PCS reflectors, respectively. Different from the phase penetration characteristics discussed earlier, both L_e,2DPCS (~100 nm) and L_e,1DSWG (~60 nm) are much lower than L_e,DBR (~220 nm). All the energy penetration depths also show much less wavelength dependence than the phase penetration depths. It is worth noting that much larger energy penetration depths are expected in DBRs with smaller index contrast (e.g., GaAs/AlGaAs or InGaAsP DBRs). The small energy penetration length in PCS mirrors can be attributed to the tight mode confinement, which is also very favorable for achieving better modal confinement inside the cavity. For classical DBRs, the phase penetration depth is typically smaller than or similar to the energy penetration depth, due to the smaller index contrast and larger energy penetration depth [18,22]. However, for the 1D SWG and 2D PCS reflectors discussed here, L_p is much larger than L_e, consistent with in-plane 2D PCS reflectors, as reported in Ref. [18]. This could be due to the large phase discontinuities originating from the modal interaction between the in-plane guided modes and the vertical radiation modes. On the other hand, the energy penetration depth can be very small, with the energy localized within the thin PCS layer.

FP cavities and field distributions

Finally, for the resonant FP cavity modes, we investigate the field distribution properties in the FP cavities shown in Fig. 1. To obtain resonant cavity modes at similar spectral locations, the cavity lengths are chosen as L_c1 = 5.4 µm, L_c2 = 5.2 and 5.4 µm, and L_c3 = 5.4 µm in the three cavities. We chose two resonant modes in each cavity: λ = 1.629 µm and 1.510 µm in Cavity I, λ = 1.504 µm and 1.540 µm in Cavity II, and λ = 2.076 µm and 1.540 µm in Cavity III.
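These cavity lengths and mode wavelengths are connected by the textbook Fabry-Perot resonance condition underlying the phase analysis of Ref. [19]; with n_c the cavity index (air here), L_c the cavity length, and integer mode number q, the round-trip phase must close:

$$\frac{4\pi n_c L_c}{\lambda_q} + \Phi_{r,t}(\lambda_q) + \Phi_{r,b}(\lambda_q) = 2\pi q .$$

The sign convention for Φ_r here must match the one used when extracting the phase from the reflected field. With strongly dispersive Φ_r(λ), as in the SWG and PCS mirrors, the solutions λ_q shift noticeably from the hard-mirror positions 2n_cL_c = qλ.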
Figure 5(a) shows the field intensity of the two resonant modes in DBR Cavity III, λ = 2.076 µm and 1.540 µm, where the dielectric index profile is also plotted as a thin black line. For a classical DBR cavity, the field intensity inside the cavity is always larger than that in the mirrors, and the field gradually decays into the mirrors. However, this is not the case for Cavities I and II, which are based on Fano or guided-mode resonances. In Cavity I with L_c1 = 5.4 µm, for the two resonant modes (λ = 1.629 µm and 1.510 µm) shown in Fig. 5(b), the magnetic field intensity inside the cavity is much smaller than that in the mirrors. This is because guided modes are excited in the mirrors. Outside the cavity, the field intensity decays rapidly, consistent with the very small L_e obtained earlier. Figures 5(c) and 5(d) correspond to Cavity II. It can be seen that the electric field intensity of the two resonant modes (λ = 1.504 µm and 1.540 µm) inside the cavity is much larger than that in the reflector slabs, and the field outside the cavity also decays very fast. The cases shown in Figs. 5(b)-5(d) can occur in both 1D SWG and 2D PCS based FP cavities. However, the ratio of the intensity inside the reflector slabs to the intensity inside the cavity (between the two reflector slabs) is still larger than in Cavity III, mostly due to the nature of the Fano or guided-mode resonance excitation inside the reflector slabs. Nevertheless, a very high modal confinement with high field intensity is expected inside the cavity.

Conclusion

In conclusion, we have numerically investigated the phase and energy penetration depths and the field distributions of 1D SWG and 2D PCS reflectors based on Fano or guided-mode resonances. Compared to DBR reflectors, these new types of single-layer ultra-compact broadband reflectors exhibit larger phase delays and smaller energy penetration depths, which can be tailored via dispersion engineering for strongly wavelength-dependent phase delays and ultra-small energy penetration depths. The work reported here is mostly based on one set of design parameters optimized for 1550 nm band reflectors. Following similar procedures, other design parameters can be used for reflectors with different reflection requirements, as well as different phase delays, energy penetrations, and field distributions. All the results and conclusions can be very helpful for the design of resonant cavities for a wide range of photonic applications.

Fig. 1. Sketches of the different dielectric reflectors and the corresponding Fabry-Perot cavities: (a) cross section (xz plane) of Cavity I, consisting of top and bottom reflectors based on a 1D single-Si-layer sub-wavelength grating (SWG) with the same pattern parameters; (b) overview of Cavity II, consisting of top and bottom mirrors based on 2D PCS patterns with a square air-hole lattice; (c) cross section (xz plane) of Cavity III, consisting of top and bottom mirrors based on 4 pairs of Si/SiO2 (0.11/0.27 µm) DBR stacked layers.

Fig. 2. Calculated reflection R (blue solid and black dashed lines) and phase shift Φ_r (red dash-dot and green dotted lines) spectra of the top and bottom (a) DBR reflectors, (b) 1D SWG reflectors, and (c) 2D PCS reflectors.

Fig. 4. The energy penetration depths for the three types of bottom reflectors.
4,823
2010-06-21T00:00:00.000
[ "Physics" ]
The Effect of Realistic Mathematics Education on Academic Achievement, Motivation and Retention of Fifth Grade Students

In this study, the effect of Realistic Mathematics Education (RME) on fifth grade students' academic achievement, motivation and retention in the "Data Processing" learning domain was investigated. In the research, a quasi-experimental design with a pre-test/post-test control group was used. The research, carried out in two secondary schools in the district of Bayat in Afyonkarahisar province in Turkey in the second term of the 2018-2019 academic year, was conducted with a total of 41 students: 19 in the experimental group and 22 in the control group. While the implementations were organized with the RME approach in the experimental group, the control group received instruction in line with the activities included in the Ministry of National Education secondary school mathematics curriculum. In this study, the "Evaluation Form of Learning Outputs in Data Processing Learning Domain" and the "Mathematical Motivation Scale" were used as data collection tools. The instruments were applied to the experimental and control groups as pre-test, post-test and retention test. As a result of the analysis, there were no significant differences between the achievement, motivation and retention scores of the experimental and control groups.

Introduction

Education is a significant concept that shapes the life of individuals and society. Education enables social advances and improves individuals' quality of life. It forms the basis of the development of advanced countries. The knowledge, skills, attitudes and behaviors required in the new century can only be transferred to individuals through education [1]. Being a universal language, mathematics is a significant area for individuals, society, scientific research and technological developments in the constantly developing world. The fact that we come across mathematics in so many areas makes learning this subject essential. In the mathematics curriculum in Turkey, the aim is to train students as individuals who can produce knowledge, use the information they produce in their daily life, solve problems, think critically, be entrepreneurial, have communication skills, empathize, and contribute to society and culture [2]. In the constructivist approach adopted in MoNE curricula, students take an active role in the learning-teaching process. Turning abstract mathematical expressions into concrete teaching materials develops students' exploratory and independent thinking skills [3]. In a constructivist lesson, the first aim is to draw the attention of students; then a problem is presented and the prior knowledge of students is elicited. To come up with a solution to the problem, the students examine the problem in cooperation, produce hypotheses, develop solutions, share their ideas with their friends, listen to their friends' ideas, and review their own thoughts. Then each student decides what needs to be done to improve their knowledge structures. The teacher guides students and helps them in the thinking process [4]. Thanks to content that students can associate with their lives, the mathematics lesson attracts more attention and students grasp the importance of mathematics to a greater extent. There are international examinations that give us the chance to assess the efficiency of the education system in Turkey and to make comparisons with other countries.
The results of international exams such as TIMSS, which the International Association for the Evaluation of Educational Achievement (IEA) organizes every four years, and PISA are important indicators of the failure in mathematics. Turkey's performance in PISA 2018 increased compared to PISA 2015: with 454 points in mathematics, Turkey rose from 50th place to 42nd place; however, this score is below the OECD average [1]. In the same vein, regarding the distribution of 8th grade mathematics achievement among the countries in TIMSS 2015, Turkey got 458 points, ranked 24th among 39 countries, and scored below the scale mean of 500 points [2]. The data presented here show that Turkish students' achievement in the mathematics course is not satisfactory. More significance should be attached to mathematics teaching to enable improvement in Turkey's success in these types of performance measures. Realistic Mathematics Education (RME) is employed through taking prominent practices in mathematics teaching into consideration. RME was first used by the Freudenthal Institute in Holland as an approach in mathematics education. This theory, which has been in use for about thirty years in Holland, is also adopted in countries such as England, Spain, Germany, Denmark, the U.S.A., Brazil and Japan [5]. According to the Freudenthal Institute, mathematics is a human activity, and it is not discovered but invented [6]. According to Gravemeijer (1994), in RME, instruction begins with real life problems and the need to do mathematics is taken as a basis [5]. Experiential cases should be presented to students to make them feel the need for mathematics and draw them into a meaningful mathematical process. In RME, students should be confident in themselves and construct their own products [7]. In RME, the problem is not first presented with abstract principles, mathematical knowledge or rules. A problem is introduced with the aim of solving it; knowledge is organized and re-arranged and then concretized in order to understand the subject to a greater extent [8]. Freudenthal (1991) named the process which involves starting from real life problems and reaching the mathematical concept "mathematization" [9]. Mathematization is reaching the mathematical concept, that is, formal knowledge, from the concept acquired through daily life. The first step of mathematization is horizontal mathematization: separating schemas, formulating, and finding different approaches to the given problem; that is, a transition from everyday life to the world of symbols. The second step of mathematization is vertical mathematization, which is defined as moving within the world of symbols and reaching concepts and formulas from symbols [5]. The RME approach is based on horizontal and vertical mathematization. It is reported that the RME approach increases students' achievement, and that real life problems attract students' attention and change the classroom atmosphere positively for learning [10]. It is also highlighted by educators that instruction with real life problems has a positive effect on increasing students' motivation and interest towards mathematics [11]. RME is a functional approach to achieving permanent learning through supporting students to make sense of concepts and make abstractions with the help of real life problems. In RME, students are motivated, and they feel that they are successful since learning occurs at their own pace. The motivated student then actively participates in activities.
Students who solve problems successfully start to enjoy what they do. In RME, knowledge is acquired through discovery, and making sense of real life through mathematization increases students' interest in the lesson and thereby their motivation [12]. When the literature is examined, in the study conducted by [25] with secondary school 7th grade students, it was seen that the RME approach was more effective than the traditional approach in increasing student achievement in teaching multiplication with integers. In his study, [23] concluded that in teaching the outcomes of the "Numbers and Operations, Algebra" unit, RME-supported teaching applied to the experimental group increased the academic success of the students. [33] concluded that RME-supported mathematics teaching is more effective than traditional methods in teaching the subject of "Volume Measurement and Liquid Measurement Units" in the 6th grade. [8], [34], [35], and [36] have also found that RME is effective in increasing student achievement. Teaching methods based on the RME approach have mostly been applied to students at the secondary school level, but studies conducted at the 5th grade level of secondary school are very few. In the literature, there is no study related to the "Data Processing" learning domain. For this reason, the use of a teaching method based on the RME approach in the "Data Processing" learning domain in the 5th grade of secondary school constitutes the originality of the research. It is thought that the research will contribute to the literature and will provide an exemplary application of RME for mathematics teachers.

Purpose of the Study

This research study aims to examine the effect of instructional activities designed in line with Realistic Mathematics Education on fifth grade students' academic achievement, motivation and retention in the "Data Processing" learning domain. The study seeks to facilitate attaining the learning outcomes in the "Data Processing" learning domain by associating them with students' daily life activities with the help of instruction based on RME. Since real life scenarios are at the forefront of the learning process, it is also aimed to increase the retention of the knowledge acquired in this process. An increase in students' motivation towards mathematics is expected as well. The problem statement of the current study is as follows: Is there a significant difference in terms of academic achievement, motivation and retention between fifth grade students receiving instruction in line with the RME approach and fifth grade students receiving instruction in line with the MoNE curriculum?

Research Model

This study employs a quasi-experimental design with a pretest-posttest control group, a quantitative research method. In this design, available groups are arbitrarily assigned as control group and experimental group, and the groups are compared through measurements before and after the intervention. The difference between this design and a true experimental design is that the groups are not formed randomly at the beginning. Of the available groups, one is arbitrarily assigned as the experimental group and the other is assigned as the control group [13].

Participants

The participants of the study were selected through the convenience sampling method. The convenience sampling method provides practicality and speed for research studies. In this method, the researcher selects a case which is close to the researcher and easy to access [14].
The schools in which this study was carried out were selected because the researcher was working in these schools, which provided easy access. The population of the study consists of the fifth grade students studying in Afyonkarahisar province of Turkey in the 2018-2019 academic year. According to the organization of the Turkish educational system, fifth grade is the first year of secondary school. Therefore, the students participating in the study were studying in the first year of secondary school and were 11 or 12 years old. The sample of the study consists of the fifth grade students studying in two lower secondary schools in the Bayat district of Afyonkarahisar in the spring term of the 2018-2019 academic year. In the sample, there are 41 students in total: 19 in the experimental group and 22 in the control group.

Equalization of the Groups

In the study, post-test achievement scores were statistically equalized because the significant difference between the pre-test scores of the experimental and control groups would otherwise lead to statistically incorrect results. The achievement scores attained by students in the experimental and control groups in the pre-test and post-test were compared with independent samples t-tests, and the results are provided in Table 1 and Table 2. There is a significant difference between the mean achievement scores of the experimental and control groups in the pre-test, as evident in Table 1. As shown in Table 2, there is also a significant difference between the mean achievement scores of the experimental and control groups in the post-test. Dependent variable (post-test) achievement scores of the experimental and control groups, adjusted according to the control variable (pre-test) by one-way analysis of covariance, are presented in Table 3. As given in Table 3, the difference between the post-test achievement scores of students in the experimental and control groups is 16.25; the difference between the post-test achievement scores corrected according to the pre-test achievement scores is 0.35. Accordingly, it can be argued that the adjusted post-test achievement scores of the experimental and control groups are equivalent to each other [17]. Dependent variable (retention test) achievement scores of the experimental and control groups, adjusted according to the control variable (post-test), are presented in Table 4. As provided in Table 4, the difference between the retention test scores of students in the experimental and control groups is 17.69; the difference between the retention test scores corrected according to the post-test achievement scores is 5.10. Accordingly, it can be argued that the adjusted retention test scores of the experimental and control groups are equivalent to each other. The students participating in the research should be equalized in terms of other variables in order to isolate the independent variable tested in the study. The purpose of controlling the variables is to increase the internal validity of the study and to make sure that the result of the study stems only from the tested independent variable [17]. To this end, the post-test achievement scores and retention test scores of the experimental and control groups were equalized. Since the pre-test motivation scores of the experimental and control groups were already equal to each other, no further analysis was needed to equalize them.
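The covariance adjustment described above can be scripted. The sketch below uses the statsmodels formula API with synthetic scores standing in for the study data (group sizes 19 and 22 as reported; all numbers and column names are hypothetical).

```python
# ANCOVA-style adjustment: post-test scores corrected for the pre-test
# covariate, with adjusted group means evaluated at the grand covariate mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["exp"] * 19 + ["ctrl"] * 22,
    "pre": np.r_[rng.normal(24, 8, 19), rng.normal(7, 5, 22)],
})
df["post"] = 10 + 1.0 * df["pre"] + rng.normal(0, 8, len(df))

model = smf.ols("post ~ pre + C(group)", data=df).fit()

# Adjusted means: predictions for each group at the grand mean of 'pre'.
grid = pd.DataFrame({"pre": [df["pre"].mean()] * 2, "group": ["exp", "ctrl"]})
print(model.predict(grid))                 # covariate-corrected group means
print(model.pvalues["C(group)[T.exp]"])    # group effect after adjustment
```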
Data Collection Tools

In order to collect data, the "Evaluation Form of Learning Outputs in Data Processing Learning Domain" (EFLODP) and the "Mathematical Motivation Scale" (MMS) were used as pre-test and post-test. The EFLODP was used again four weeks later to measure the retention of knowledge.

Evaluation Form of Learning Outputs in Data Processing Learning Domain (EFLODP): The researchers first examined the fifth grade mathematics curriculum and determined the learning outputs of the "Data Processing" learning domain in order to develop the EFLODP to be used as pre-test, post-test and retention test in the study. In line with the learning outputs, a table of specifications was prepared. A form of 27 questions was prepared from learning output comprehension tests of TIMSS, PISA, the State Scholarship Exam and tests published by MoNE. The questions in these tests are open-ended, and the students have to interpret and solve them. The draft form of the instrument was presented to two mathematics teachers and three mathematics specialists to get their opinions on the form. Based on their feedback, the form was revised, and the final form of the EFLODP included 18 questions.

Mathematical Motivation Scale (MMS): The "Mathematical Motivation Scale" was used in the current study to identify the effect of RME on students' mathematics motivation [15]. Permission to use the scale in the present study was obtained from the researchers who had developed it. The MMS is a five-point Likert-type scale with no reverse-coded items. The items include the options "I totally don't agree", "I don't agree", "I am not sure", "I agree", and "I totally agree"; "I totally agree" scored 5 points and "I totally don't agree" scored 1 point. The internal consistency coefficients of the scale ranged from .85 to .94, and the item-total correlation values ranged from .62 to .89.

Collection of Data

While the experimental group received mathematics instruction through lesson plans developed in line with RME, the control group received instruction with the existing mathematics curriculum. There were three learning outputs in the data processing learning domain of the fifth grade mathematics curriculum, and 10 lesson hours were allocated to the units. The implementation took two weeks. The students in the experimental and control groups took the pre-tests one week before the implementation, and the post-tests one week after the implementation. The study thus lasted four weeks in total. Four weeks after the implementation, the students took the retention test.

Data Analysis

The quantitative data obtained through the instruments were analyzed with SPSS 22.00 (The Statistical Package for The Social Sciences) in line with the research problem and sub-problems. The scores of students in the experimental and control groups obtained in the achievement pre-test, achievement post-test and retention test showed normal distributions. Therefore, in the comparison of the two groups' means, the paired samples t-test and the independent samples t-test, both parametric tests, were used. The pre-test and post-test motivation scores of the students in the experimental and control groups did not have normal distributions. Therefore, parametric tests were not used in the comparison of the means of the two groups, and the Mann-Whitney U test, a non-parametric test, was preferred [16].
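The test-selection logic just described is easy to mirror in code. The sketch below uses SciPy with synthetic stand-in scores (the study's actual output is in the tables): a Shapiro-Wilk normality gate decides between the parametric tests and the Mann-Whitney U fallback.

```python
# Parametric vs. non-parametric comparison, gated on Shapiro-Wilk normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
exp_pre = rng.normal(24, 8, 19)
exp_post = exp_pre + rng.normal(16, 10, 19)       # paired with exp_pre
ctrl_post = rng.normal(20, 9, 22)

if all(stats.shapiro(x).pvalue > 0.05 for x in (exp_pre, exp_post, ctrl_post)):
    print(stats.ttest_rel(exp_pre, exp_post))     # within-group change
    print(stats.ttest_ind(exp_post, ctrl_post))   # between-group difference
else:
    print(stats.mannwhitneyu(exp_post, ctrl_post))
```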
A significant difference was identified between the scores of the experimental and control groups in the achievement pre-test. To separate the effect of RME from the effects of other variables, one-way analysis of covariance (ANCOVA) was performed to compare the post-test scores. Similarly, ANCOVA was used to compare the retention scores of the experimental and control groups [17]. The assumptions of ANCOVA are as follows:
- The samples subject to the comparison of means are independent.
- The distribution of scores on the dependent variable is normal for each of the compared groups.
- In a randomized design, the relationship between the dependent variable (Y) and the covariate (X) is linear.
- Within-group regression slopes (regression coefficients) are homogeneous.

Validity and Reliability

Reliability is the degree of error-free measurement of results [18]. Since subjective effects could interfere with the scoring of the open-ended questions in the EFLODP, it was evaluated separately by the researcher and another rater with the help of a holistic scoring key developed by Cansız (2015). Each question in the EFLODP could be scored with 0, 1, 2, 3 or 4 points. For inter-rater reliability, Cohen's Kappa inter-rater agreement analysis was carried out [16]. Cohen's Kappa (K) coefficients for the scores given by the two raters to the 18 questions in the EFLODP pre-test, post-test and retention test are provided in Table 5. Landis and Koch (1977) suggest the interpretation of the Kappa value as follows: "≤0.20 weak; 0.21-0.40 medium; 0.41-0.60 good; 0.61-0.80 very good; 0.81-1.00 perfect" [16]. According to the data obtained from the holistic evaluation key used in the assessment of the EFLODP, the related coefficient is .685 for the pre-test, .551 for the post-test and .657 for the retention test. These values reveal good and very good agreement between the two raters' scores in the pre-test, post-test and retention test.

Learning and Teaching Processes in the Experimental and Control Groups

The activities prepared for the research were applied to the experimental group for two weeks, five hours per week. The students were divided into heterogeneous groups within the framework of the cooperation principle of RME. The fact that the researcher was also the mathematics teacher of the students and knew them well provided an advantage in the formation of heterogeneous groups. Care was taken to ensure that all students actively took part in the group work. The distribution of the applications made in the experimental group during the research process is as follows:
- The students were informed about RME and the application process. The pre-tests (EFLODP and MMS) were applied.
- The class was divided into groups and the rows were arranged in clusters. The Cinema, School Trip, Waste of Bread, School Representative, and Environmental Awareness activities were implemented. Activity sheets were distributed to the groups in order. With these activities, it was intended that students discover what a research question should look like. After the activities, evaluation questions created by the researcher were asked.
- The Election, Birthday, Book Fair, and How Many Letters in Your Name? activities were implemented. Activity sheets were distributed to the groups in order. With these activities, it was intended that the students discover ways to interpret the data obtained from research questions more easily, and explore the column chart and the frequency table using the tally chart they had learned before.
After the activities, evaluation questions created by the researcher were asked.
- The Weather, Broadcast Flow, Let's Compare the Heights 1, Let's Compare the Heights 2, and Sheep Production activities were applied. Activity sheets were distributed to the groups in order. With these activities, it was intended that the students understand the problems given with a column chart or frequency table and identify the situations that cause misinterpretations. After the activities, evaluation questions created by the researcher were asked.
- The Number of Students, Passing Grade and Cycling Trip activities were applied. Activity sheets were distributed to the groups in order. In these activities, it was intended that the students understand the problems given with a column chart or frequency table and reach the solution of the problem. After the activities, evaluation questions created by the researcher were asked.
- The post-tests and retention tests (EFLODP and MMS) were applied.

In the control group, the lessons continued in their normal course; the textbook was used for evaluation and the exercises were solved by the students. The applications made in the control group during the research process are as follows:
- Students were informed about what would be done during the lessons. The pre-tests (EFLODP and MMS) were applied.
- The lessons were taught with the activities in the mathematics textbook, which was prepared according to the current curriculum. The "Let's apply what we learned" section was used for evaluation.
- The post-tests and retention tests (EFLODP and MMS) were applied.

Findings Regarding Achievement

The achievement scores obtained by the students in the experimental and control groups in the pre-test and post-test were compared through paired samples t-tests, and the results are presented in Table 6 and Table 7. As given in Table 6, a statistically significant difference was found between the pre-test mean score (X̄ = 23.75) and post-test mean score (X̄ = 39.80) of the students in the experimental group [t(17) = -5.305, p < .05]. The paired samples t-test reveals whether there is a significant difference between the means of two measurements; however, it does not inform us about the size of this difference, which is why the effect size is calculated separately in this study [16]. In a t-test, the effect size (d) can be calculated as d = t/√n, where the absolute value of the result is taken; the sign of the effect size is not meaningful. In general, a d value is considered small at 0.2, medium at 0.5, large at 0.8 and very large above 1 [17]. The d value obtained here is 1.25, which means that the difference between the pre-test and post-test scores of the students in the experimental group is very large. As given in Table 7, a statistically significant difference was also found between the pre-test mean score (X̄ = 6.94) and post-test mean score (X̄ = 19.67) of the students in the control group [t(19) = -3.909, p < .05]. The effect size was calculated since there was a significant difference: d = |-3.909|/√20 = 0.87. The value obtained in this analysis (d = 0.87) shows that the difference between the pre-test and post-test scores of the students in the control group is large. ANCOVA was performed to compare the achievement post-test scores of the students in the experimental and control groups while controlling for the effect of the achievement pre-test scores. Whether the assumptions of ANCOVA were met was checked before moving on to the analysis.
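The assumption checks walked through next can also be scripted. The sketch below covers the normality, linearity and homogeneity-of-slopes checks, reusing the hypothetical frame of the earlier ANCOVA sketch; a non-significant group-by-covariate interaction supports equal slopes.

```python
# ANCOVA assumption checks: per-group normality, per-group linear fit of
# post on pre, and the Group*Pre-test interaction (slope homogeneity).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["exp"] * 19 + ["ctrl"] * 22,
    "pre": np.r_[rng.normal(24, 8, 19), rng.normal(7, 5, 22)],
})
df["post"] = 10 + 1.0 * df["pre"] + rng.normal(0, 8, len(df))

for g, sub in df.groupby("group"):
    print(g, "normality p:", stats.shapiro(sub["post"]).pvalue)
    print(g, "linearity  r:", stats.linregress(sub["pre"], sub["post"]).rvalue)

full = smf.ols("post ~ pre * C(group)", data=df).fit()
print("slope homogeneity p:", full.pvalues["pre:C(group)[T.exp]"])
```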
First, the groups being compared should be independent. As each student was in only one group in this study, this assumption was met. Second, the scores on the dependent variable should have a normal distribution in each of the compared groups. The results of the normality test for the achievement post-test scores, the dependent variable, are provided in Table 8. Assuming that data with skewness and kurtosis coefficients in the range of -2.0 to +2.0 have a normal distribution [32], both the normality tests and the skewness and kurtosis values of the students' post-test scores suggest a normal distribution. Third, there should be a linear relationship between the post-test scores, the dependent variable, and the pre-test scores, the control variable. The linearity of the relationship can be visually checked with a scatter plot; this should be done separately for each group [17]. The scatter plots are provided in Figure 1 and Figure 2. In the next step, the degree and significance of this relationship can be examined through a simple linear regression test, with the pre-test achievement score as the predictor variable and the post-test achievement score as the predicted variable. The scatter plots and regression analysis results suggest that the third assumption is met. Fourth, the regression coefficients in the groups (regression slopes) need to be homogeneous (equal). To check this assumption, an analysis of the "arbitrary adjusted model" involving the interaction between the pre-test achievement score (the control variable) and Realistic Mathematics Education (the independent variable) was performed. The results of this analysis are presented in Table 10. Table 10 reveals that the significance value in the row with the interaction of the two variables (Group*Pre-test) is .202 > .05; therefore, there is not a significant difference between the regression slopes. This finding suggests that the fourth assumption is met. After the assumptions were met, ANCOVA was performed. The results are presented in Table 11. According to these results, there is not a significant difference between the post-test achievement score means adjusted according to the groups' pre-test achievement scores [F(1, 35) = 30.484, p > .05]. In other words, the implemented RME was not effective on students' achievement.

Findings Regarding Motivation

The motivation scores obtained by the students in the experimental group in the pre-test and post-test were compared with a paired samples t-test, and the results are presented in Table 12. The pre-test motivation score mean (X̄ = 116.94) of the students in the experimental group is slightly higher than their post-test score mean (X̄ = 116.15). There was very little change in the motivation scores of the experimental group before and after the implementation; it can be said that the mean was stable and there was not a significant difference [t(18) = .384, p > .05]. The motivation scores obtained by the students in the control group in the pre-test and post-test were compared with a paired samples t-test, and the results are presented in Table 13. The pre-test motivation score mean (X̄ = 117.86) of the students in the control group is higher than their post-test score mean (X̄ = 111.90).
The motivation scores of the control group decreased after the implementation, which means their motivation changed negatively; however, this difference is not significant [t(21), p > .05]. The post-test motivation scores of students in the experimental and control groups were compared with the Mann-Whitney U test, and the results are presented in Table 14. The gain scores of the students in the experimental and control groups are also examined in Table 14; gain scores were calculated by taking the difference between the motivation post-test and pre-test scores. There was not a significant difference between the gain scores of students in the experimental and control groups (U = 167.50, p > 0.05) [16].

Findings Regarding Retention

The post-test scores and retention test scores of the students in the experimental group were compared with a paired samples t-test, and the results are provided in Table 15. The post-test score mean (X̄ = 39.80) of the students in the experimental group is higher than their retention test score mean (X̄ = 36.50). The scores on the retention test, which the students took four weeks after the implementation, were lower than the post-test scores. This suggests that the knowledge was not retained by the students at the expected level. However, this decrease is not significant [t(17), p > .05]. The post-test scores and retention test scores of the students in the control group were compared with a paired samples t-test, and the results are provided in Table 16. The post-test score mean (X̄ = 23.78) of the students in the control group is higher than their retention test score mean (X̄ = 18.81). The scores on the retention test, which the students took four weeks after the implementation, were lower than the post-test scores. This suggests that the knowledge was not retained by the students at the expected level. However, this decrease is not significant [t(17), p > .05]. ANCOVA was performed to compare the retention test scores of the students in the experimental and control groups while controlling for the effect of the achievement post-test scores. Whether the assumptions of ANCOVA were met was checked before moving on to the analysis. First, the groups being compared should be independent. As each student was in only one group in this study, this assumption was met. Second, the scores on the dependent variable should have a normal distribution in each of the compared groups. The results of the normality test for the retention test scores, the dependent variable, are provided in Table 17. Assuming that data with skewness and kurtosis coefficients in the range of -2.0 to +2.0 have a normal distribution [32], both the normality tests and the skewness and kurtosis values of the students' retention test scores suggest a normal distribution. Third, there should be a linear relationship between the retention test scores, the dependent variable, and the post-test achievement scores, the control variable. The linearity of the relationship can be visually checked with a scatter plot; this should be done separately for each group [17]. The scatter plots are provided in Figure 3 and Figure 4. In the next step, the degree and significance of this relationship can be examined through a simple linear regression test, with the post-test achievement score as the predictor variable and the retention test score as the predicted variable. The scatter plots and regression analysis results suggest that the third assumption is met.
Fourth, the regression coefficients in the groups (regression slopes) need to be homogeneous (equal). To check this assumption, an analysis of the "arbitrary adjusted model" involving the interaction between the post-test achievement score (the control variable) and Realistic Mathematics Education (the independent variable) was performed. The results of this analysis are presented in Table 19. Table 19 reveals that the significance value in the row with the interaction of the two variables (Group*Post-test) is .054 > .05; therefore, there is not a significant difference between the regression slopes. This finding suggests that the fourth assumption is met. After the assumptions were met, ANCOVA was performed. The results are presented in Table 20. According to these results, there is not a significant difference between the retention test score means adjusted according to the groups' post-test achievement scores [F(1, 34) = 78.885, p > .05]. In other words, the implemented RME was not effective on the retention of students' achievement.

Conclusions

According to the results of the analyses, a significant difference was identified between the achievement post-test and pre-test score means of the students in both the experimental and control groups. The effect size was very large in the experimental group and large in the control group. A significant difference was not found between the post-test scores of the experimental and control groups adjusted by their pre-test scores. [19], [20] and [21] report similar results, whereas [22] and [23] report contrary results. There was not a significant difference between the motivation pre-test and post-test score means of the students in the experimental group. There was a decrease in the motivation pre-test and post-test means of the students in the control group; however, the difference was not statistically significant. There was not a significant difference between the gain scores of the two groups either. These results parallel the results reported by [24] and [25]. On the other hand, [26], [27] and [28] found contrary results. It was revealed that the achievement post-test score means of the students in the experimental and control groups were higher than their retention test score means; however, the differences were not statistically significant. Gain scores were calculated as the difference between the retention test scores and the post-test scores; there was not a statistically significant difference between the gain scores of the two groups. The studies by [29] and [30] lend support to this finding. However, retention was positively affected in the studies by [26] and [23]. Based on the results of the current study, it cannot be solidly argued that instruction supported with RME is more effective than instruction based on the existing curriculum in increasing students' academic achievement in the "Data Processing" learning domain. Factors such as the short duration of the implementation, the possibility that the effect is not apparent in this particular learning domain, and the fact that the participating students were in the first year of lower secondary school should be taken into consideration in interpreting the results. Fifth grade students may not have performed as expected since they were still in the process of adapting to having different teachers for different lessons. Also, since MoNE switched to constructivism after 2005, the teaching and activities in the current program are already in line with constructivist logic.
The lack of a significant difference between the scores of the students in the experimental and control groups may partly be an effect of this situation. This study showed that instruction supported with RME did not affect students' motivation in either a positive or a negative way. This may be accounted for by the fact that there are no apparent changes in the motivation of young learners over short periods of time [31]. Despite the fact that fifth grade students do not yet have an established culture of discussion, the students participating in the current study exchanged ideas through cooperating with their peers. Students who showed no interest in the lesson, got bored during instruction and did not want to participate in the activities under the existing curriculum participated actively in the lessons in which RME was employed. Each student could solve the problems at their own pace since there was cooperation within and between the groups. It may be suggested that if students receive instruction supported with RME for longer periods of time, their love for mathematics, and thereby their achievement in mathematics, might increase. In education, the aim is not to provide students with ready-made knowledge but to have them realize that they can learn through their own efforts. When students learn pieces of knowledge through their own efforts, their knowledge will be current and real, which is what RME provides to students.

Suggestions
- Qualitative studies should be carried out to thoroughly examine the effects of RME.
- The studies in the literature dwelling on the RME approach are often related to a single learning domain. RME can be employed with multiple learning domains.
- Studies can be conducted to examine how much RME is represented in the current official mathematics curriculum.
- Factors related to students, teachers, and the learning environment that affect the implementation of RME can be investigated.
- Conceptual studies can be carried out to develop RME with computer-assisted teaching.
7,946.4
2022-03-01T00:00:00.000
[ "Mathematics", "Education" ]
Serial sectioning of grain microstructures under junction control: An old problem in a new guise

In the present work the importance of 3D and 4D microstructure analyses is shown. To that aim, we study polycrystalline grain microstructures obtained by grain growth under grain boundary, triple line and quadruple point control. The microstructures themselves are obtained by mesoscopic computer simulations, which enjoy a far greater control over the kinetic and thermodynamic parameters affecting grain growth than can be realized experimentally. In extensive simulation studies we find by 3D and 4D microstructure analyses that the metrical and topological properties of the microstructures depend strongly on the microstructural feature controlling the growth kinetics. However, the differences between the growth kinetics vanish when we look at classical 2D sections of the 3D ensembles, making a differentiation of the controlling grain feature near impossible.

Introduction

Today there still exist many gaps in our fundamental understanding of polycrystalline microstructures and in predicting their behavioral changes due to recrystallisation and grain growth. Closing these gaps is of utmost importance, since the grain size of a material determines its properties. Common stereological 2D investigations [1], e.g., using the mean linear intercept method, yield information on the average grain size, but only under certain assumptions regarding the morphology of the grains. Hence, only 3D data provide the necessary metrical and topological properties of polycrystals. To that aim, in recent years much effort has been put into the global task of developing new, advanced materials, which has led to new characterization techniques enabling materials scientists and engineers to analyze bulk materials and thin films in 3D and 4D by X-rays, electrons, and neutrons with high resolution in real time [2]. An analysis of grain microstructures by means of focused ion beam (FIB) tomography allows the creation of serial sections in an automated manner. After reconstruction, the shapes and sizes of individual grains in the network can be described (compare, e.g., [3]). The combination of a focused ion beam and a scanning electron microscope for tomographic orientation analysis has been established for about 10 years, but its use in the nanocrystalline grain size range, where the grain boundary junctions can control the microstructural evolution, is not trivial, as it pushes the 3D tomographic electron backscatter diffraction (3D EBSD) method in conjunction with FIB sectioning to its resolution limit concerning the determination of the local orientation [4]. Then again, while recently the spatial resolution in 3D for tomographic EBSD was about 50×50×50 nm³ [5], today resolutions well below 50 nm (see [3]) can be achieved using low-voltage methods. For example, Balach et al. [6] succeeded in generating 10 nm thin slices in FIB-SEM tomography experiments. Experiments on nanocrystalline iron [7] have shown that microstructures with an average grain radius well below 100 nm show anomalous behavior (e.g., a linear average growth law). Hence, such microstructures are important to investigate more closely. Using mesoscopic computer simulations [8] and analytic mean-field theories [9] it has been found that such behavior can be associated with triple junction drag, leading among other things to a very strong increase in the number of small grains.
With an average grain size of less than 100 nm, as in the experiments, this yields a high number of small grains with radii as small as 10 nm. These, however, cannot be portrayed with the resolution necessary to describe their morphology correctly. Hence, for such nanocrystalline grain microstructures we still might have to use stereological 2D investigations. In the present work we study polycrystalline grain microstructures obtained by grain growth under grain boundary, triple line and quadruple point control. In extensive simulation studies we find by 3D and 4D microstructure analyses that the metrical and topological properties of the microstructures depend strongly on the microstructural feature controlling the growth kinetics. Those differences between the growth kinetics become blurred when we look at classical 2D sections of the 3D ensembles, making a differentiation of the controlling grain feature very difficult.

Grain growth under grain boundary junction control

Unlike conventional materials, nanocrystalline metals and alloys have quite different mechanical properties, such as high values of hardness, yield and fracture strength and superplastic behaviour at low temperatures, implying a size effect. They show stable grain sizes even up to relatively high temperatures and linear or even exponential growth kinetics, in clear contradiction to parabolic normal grain growth [7,10]. Such investigations of the stability of nanocrystalline materials during grain growth are, of course, of intense technological interest, because an increase in grain size from nm to μm can result in a loss of important materials properties, making them unusable in applications. Already in 1997, Malow and Koch [11] summarised significant works concerning the stabilization of nanocrystalline grain structures in many materials and the number of factors influencing the grain boundary mobility in nanocrystalline alloys, like grain boundary segregation, solute drag, pore drag, second phase (Zener) drag and chemical ordering. A universal explanation has not been found yet, but the discussion is ongoing. In particular, Gottstein and Shvindlerman [12][13][14] proposed that grain growth can be controlled by the grain boundary junction mobility. The established structures are rather stable, in particular for ultra-fine grained and nanocrystalline materials. Streitenberger and Zöllner [9,15] considered grain growth as a dissipative process driven by the reduction of the Gibbs free interface and junction energy. A general grain evolution equation has been derived, separating into nine types of growth kinetics. The corresponding self-similar grain size distributions were in agreement with first results from modified MC simulations considering size effects in triple and quadruple junction limited grain growth. Details on the simulation method as used in the present work can be found in [8,16].

Evolution of the average grain size: 3D versus 2D measurements

If the microstructural evolution is controlled solely by the mobility of one particular structural feature (grain boundary, triple line or quadruple point), it has been shown [8,9,14] that the average growth law of the 3D structure follows in each case a unique time law. Under grain boundary (gb) control the classical parabolic growth law, ⟨R⟩ ∝ t^(1/2), holds (compare Fig. 1a). Under triple line (tl) control (Fig. 1b) a linear growth law results, while grain growth controlled by the quadruple points (qp) yields an exponential increase with annealing time, as shown in Fig.
1c. In each case it can be seen that there is also an initial period of time that is not described by the respective laws, depending on the initial microstructure. In addition, for each of the three cases the temporal development of the average grain size calculated from 2D sections, ⟨R⟩_2D, is also shown, from which it is evident that the growth laws follow the same general behavior as their 3D counterparts, while showing fluctuations that are due to the limited number of observed grains in the sections. Hence it seems that the microstructural feature controlling grain growth can be deduced from the observation of the growth law in its 3D or 2D version. However, this is not entirely true, as Figure 2 shows, where the three kinetics (gb control: circles; tl control: squares; qp control: triangles) are shown again as ⟨R⟩_2D, only now each together with a linear least-squares fit. It is evident that, due to the limited number of measured data points, a linear relation gives a good representation independent of the microstructural feature controlling the growth kinetics, allowing no conclusions! More frequent or longer observations can solve this problem, but although this is particularly important, it is not always possible to obtain more observation times, since, e.g., for long annealing times the growth kinetics changes (for example, under triple line control from linear to parabolic, see [8]), restricting the observation range. In order nevertheless to allow a reliable deduction of the microstructural feature that is in control of the growth kinetics, we show in the following that both the metrical and the topological properties of the grain network have to be investigated very thoroughly.

Grain size distribution in sections and in 3D

Each of the above growth kinetics of a 3D polycrystal (as presented in Fig. 1) is associated with a distinct scaled grain size distribution f(x) [8,9], where x is defined as the radius of a grain divided by the average radius of the ensemble. This is indeed shown in Figure 3a-c, where it can be seen that grain growth under limited triple line mobility with a linear increase in the average grain size (Fig. 3b), as well as under limited quadruple point mobility with an exponential growth law (Fig. 3c), is characterized by a remarkably high number of small grains compared to normal, parabolic grain growth (Fig. 3a). All three types of grain size distributions are found to be self-similar (i.e., time-independent) and in agreement with theoretical predictions (black lines in Fig. 3a-c; compare [9,15]). Then again, it is well known that sectioning of 3D polycrystals results in grain size distributions deviating from their 3D counterparts. For normal grain growth the scaled size distribution f(y), where y is the scaled grain size in the section, is shifted to smaller grain sizes, while the distribution becomes broader and less peaked (Fig. 3d). Similar changes can also be observed for triple line controlled grain growth (Fig. 3e) and grain growth under quadruple point control (Fig. 3f). However, only the latter is distinctively different from normal grain growth, allowing clear conclusions regarding the associated exponential growth regime. The differences between the distributions in Figs. 3d and 3e are basically negligible, giving us nearly no indication which distribution belongs to parabolic normal grain growth and which to linear triple junction controlled growth.
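A minimal Monte Carlo illustration of why sections blur metrical information, idealizing the grains as spheres (a strong simplification of the simulated networks): a sphere of radius R cut at a random height d yields a circular section of radius sqrt(R² − d²), so the apparent 2D distribution is shifted and broadened relative to the 3D one. The size-dependent probability of a grain being hit by the plane at all is ignored here for brevity, and the lognormal input distribution is illustrative.

```python
# Sphere-section (Wicksell-type) sampling of a lognormal 3D size distribution.
import numpy as np

rng = np.random.default_rng(7)
R = rng.lognormal(mean=0.0, sigma=0.35, size=200_000)  # 3D radii (a.u.)
d = rng.uniform(0.0, R)                                # random cut heights
r_2d = np.sqrt(R**2 - d**2)                            # section radii

x, y = R / R.mean(), r_2d / r_2d.mean()                # scaled sizes
for label, s in (("3D", x), ("2D section", y)):
    print(f"{label:10s} std/mean = {s.std():.3f}, max/mean = {s.max():.2f}")
```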
The only real difference is the maximum grain size: under grain boundary control we can observe grains as big as 2.5 times the average size, whereas under triple junction control grains grow as large as 3.0 times the average size, giving the distribution (Fig. 3e) a slightly larger tail.
Topological investigations of 2D sections
In addition to metrical properties, polycrystalline grain boundary networks are also characterized by topological properties like the number of faces and edges of the grains. Figure 4a shows in particular that the grain size y of the grains in the 2D sections is strongly correlated with the number of edges. While it has been shown previously in [8] that the number of faces as a function of the grain size of the 3D microstructures allows a clear deduction of the microstructural feature that is in control of the growth kinetics, Fig. 4a shows that the 2D sections exhibit virtually no differences for small and average grains. Only for large grains with many edges do the triple- and quadruple-junction-controlled kinetics differ significantly from normal grain growth. Then again, there exist only very few such large grains, as the distributions of the number of edges in Fig. 4b show. In contrast, the latter also illustrate the strong increase in the number of grains with few edges.
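To make the fitting ambiguity described above concrete, the following minimal Python sketch (our own illustration, not code from the study) fits parabolic, linear and exponential growth laws to a handful of synthetic ⟨R⟩(t) data points; with so few observation times, all three candidate laws typically describe the data comparably well, which is exactly the problem identified in Fig. 2. All numerical values are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    # Candidate growth laws for the average grain size <R>(t).
    laws = {
        "parabolic (gb)":   lambda t, a, b: np.sqrt(a * t + b),
        "linear (tl)":      lambda t, a, b: a * t + b,
        "exponential (qp)": lambda t, a, b: b * np.exp(a * t),
    }

    # Few observation times; synthetic grain-boundary-controlled data
    # with 1% measurement noise.
    t = np.linspace(1.0, 2.0, 6)
    R = np.sqrt(0.5 * t + 0.2)
    R_noisy = R * (1.0 + 0.01 * np.random.default_rng(2).standard_normal(t.size))

    for name, law in laws.items():
        p, _ = curve_fit(law, t, R_noisy, p0=(0.5, 0.2), maxfev=10000)
        rss = float(np.sum((law(t, *p) - R_noisy) ** 2))
        print(f"{name:18s} residual sum of squares: {rss:.2e}")

In runs of this kind the residuals come out small and of similar order for all three laws, so the controlling junction type cannot be identified from such sparse 2D growth data alone.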
2,549.4
2015-04-24T00:00:00.000
[ "Materials Science" ]
Subdiffusive fractional Black–Scholes model for pricing currency options under transaction costs A new framework for pricing European currency options is developed in the case where the spot exchange rate follows a subdiffusive fractional Black–Scholes model. An analytic formula for pricing the European currency call option is proposed by a mean self-financing delta-hedging argument in a discrete time setting. The minimal price of a currency option under transaction costs is obtained at the time-step Δt = t^(α−1) Γ(α)^(−1) (2/π)^(1/(2H)) (k/σ)^(1/H), which can be used as the actual price of an option. In addition, we also show that the time-step and long-range dependence have a significant impact on option pricing. Subjects: Science; Mathematics & Statistics; Applied Mathematics; Statistics & Probability
Introduction
The standard European currency option valuation model has been presented by Garman and Kohlhagen (G–K) (Garman & Kohlhagen, 1983). However, some papers have provided evidence of the mispricing of currency options by the G–K model.
PUBLIC INTEREST STATEMENT
Subdiffusion refers to a well-known and established phenomenon in statistical physics. One description of subdiffusion is related to subordination, where the standard diffusion process is time-changed by the so-called inverse subordinator. According to the features of the subdiffusion process and the fractional Brownian motion, we propose a new model for pricing European currency options by using the fractional Brownian motion, a subdiffusive strategy, and scaling time in a discrete time setting, to capture the behavior of financial markets. Motivated by this objective, we illustrate how to price currency options in a discrete time setting for both cases, with and without transaction costs, by applying the subdiffusive fractional Brownian motion model. By considering empirical data, we will demonstrate that the proposed model is more flexible in comparison with the previous models and that it provides a suitable benchmark for pricing currency options. Additionally, the impact of the parameters on our pricing formula is investigated.
One reason that the G–K model may not be entirely satisfactory could be that currencies are different from stocks in important respects and the geometric Brownian motion cannot capture the behavior of currency returns (Ekvall, Jennergren, & Näslund, 1997). Since then, many methodologies for currency option pricing have been proposed by using modifications of the G–K model (Garman & Kohlhagen, 1983; Ho, Stapleton, & Subrahmanyam, 1995). All the research above assumes that the logarithmic returns of the exchange rate are independent identically distributed normal random variables. However, in general, the assumptions of Gaussianity and mutual independence of underlying asset log returns would not hold. Moreover, empirical research has also shown that the distributions of the logarithmic returns in the financial market usually exhibit excess kurtosis, with more probability mass near the origin and in the tails and less in the flanks than would occur for normally distributed data (Dai & Singleton, 2000). That is to say, the features of financial return series are non-normality, non-independence, and nonlinearity. To capture these non-normal behaviors, many researchers have considered other distributions with fat tails, such as the Pareto-stable distribution and the Generalized Hyperbolic Distribution. Moreover, self-similarity and long-range dependence have become important concepts in analyzing financial time series.
There is strong evidence that stock returns have little or no autocorrelation. As fractional Brownian motion (FBM) has two important properties, called self-similarity and long-range dependence, it has the ability to capture the typical tail behavior of stock prices or indexes (Borovkov, Mishura, Novikov, & Zhitlukhin, 2018; Shokrollahi & Sottinen, 2017). The fractional Black–Scholes (FBS) model is an extension of the Black–Scholes (BS) model, which displays the long-range dependence observed in empirical data. This model is based on replacing the classic Brownian motion by the fractional Brownian motion (FBM) in the Black–Scholes model. That is (1.1), where μ and σ are fixed, and B_H(t) is an FBM with Hurst parameter H ∈ [1/2, 1). It has been shown that the FBS model admits arbitrage in a complete and frictionless market (Cheridito, 2003; Sottinen & Valkeila, 2003; Wang, Zhu, Tang, & Yan, 2010; Xiao, Zhang, Zhang, & Wang, 2010). Wang (2010) resolved this contradiction by giving up the arbitrage argument and examining option replication in the presence of proportional transaction costs in a discrete time setting (Mastinšek, 2006). Magdziarz (2009a) applied the subdiffusive mechanism of trapping events to describe properly financial data exhibiting periods of constant values, and introduced the subdiffusive geometric Brownian motion V_α(t) = V(T_α(t)), (1.2) as the model of asset prices exhibiting subdiffusive dynamics, where V_α(t) is a subordinated process (for the notion of subordinated processes please refer to Refs. Weron (1993, 1995), Kumar, Wyłomańska, Połoczański, and Sundar (2017), Piryatinska, Saichev, and Woyczynski (2005)), in which the parent process V(τ) is a geometric Brownian motion and T_α(t) is the inverse α-stable subordinator defined as follows. Here, Q_α(t) is a strictly increasing α-stable subordinator with Laplace transform E(e^(−ηQ_α(τ))) = e^(−τη^α), 0 < α < 1, where E denotes the mathematical expectation. Magdziarz (2009a) demonstrated that the considered model is arbitrage-free but incomplete, and proposed the corresponding subdiffusive BS formula for the fair prices of European options. Subdiffusion is a well-known and established phenomenon in statistical physics. The usual model of subdiffusion in physics is developed in terms of the FFPE (fractional Fokker-Planck equation). This equation was first derived from the continuous-time random walk scheme with heavy-tailed waiting times (Metzler & Klafter, 2000). It provides a useful way for the description of transport dynamics in complex systems (Magdziarz, Weron, & Weron, 2007). Another description of subdiffusion is in terms of subordination, where the standard diffusion process is time-changed by the so-called inverse subordinator (Gu, Liang, & Zhang, 2012; Guo, 2017; Janczura, Orzeł, & Wyłomańska, 2011; Magdziarz, 2009b; Magdziarz et al., 2007; Scalas, Gorenflo, & Mainardi, 2000; Yang, 2017). The objective of this paper is to study the European call currency option by a mean self-financing delta-hedging argument. The main contribution of this paper is to derive an analytical formula for the European call currency option, without using the arbitrage argument, in a discrete time setting, when the exchange rate follows a subdiffusive FBS model (1.4) with S_0 = V(0) > 0. We then apply the result to value the European put currency option. We also provide representative numerical results. Making the change of variable for B_H(t), we obtain (1.5). This formula is similar to the Black–Scholes option pricing formula, but with the volatility being different.
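As a hedged illustration of the subordination construction just described (our own sketch, not code from the paper), the following Python snippet simulates a path of the subdiffusive FBM B_H(T_α(t)): an FBM generated by Cholesky factorisation of its exact covariance, time-changed by the inverse of an α-stable subordinator whose increments are drawn using Kanter's representation of one-sided stable variables. Grid sizes and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def fbm_path(n, H, dt):
        """FBM on a regular grid via Cholesky factorisation of the exact
        covariance R(s,t) = 0.5 * (s^2H + t^2H - |s - t|^2H)."""
        t = dt * np.arange(1, n + 1)
        s, u = np.meshgrid(t, t)
        cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
        return np.concatenate(([0.0], L @ rng.standard_normal(n)))

    def stable_subordinator(n, alpha, dtau):
        """Strictly increasing alpha-stable subordinator Q_alpha(tau) with
        E[exp(-eta * Q_alpha(tau))] = exp(-tau * eta^alpha), built from
        Kanter's representation of one-sided stable increments."""
        V = rng.uniform(0.0, np.pi, n)
        W = rng.exponential(1.0, n)
        S = (np.sin(alpha * V) / np.sin(V)**(1.0 / alpha)) \
            * (np.sin((1.0 - alpha) * V) / W)**((1.0 - alpha) / alpha)
        return np.concatenate(([0.0], np.cumsum(dtau**(1.0 / alpha) * S)))

    alpha, H, dtau, n_tau = 0.9, 0.8, 5e-3, 2000
    Q = stable_subordinator(n_tau, alpha, dtau)

    # Inverse subordinator T_alpha(t) = inf{tau : Q_alpha(tau) > t},
    # evaluated on a t-grid as an index into the tau-discretisation.
    t_grid = np.linspace(0.0, 0.8 * Q[-1], 500)
    idx = np.minimum(np.searchsorted(Q, t_grid, side="right"), n_tau)

    # Time-changed path: jumps of Q become constant "trapping" stretches.
    B = fbm_path(n_tau, H, dtau)
    W_subdiffusive = B[idx]

The flat stretches of W_subdiffusive are the periods of constant values mentioned above, which distinguish the subdiffusive sample paths in Figure 1 from ordinary FBS paths.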
We denote the subordinated process W_(α,H)(t) = B_H(T_α(t)), where the parent process B_H(τ) is an FBM and T_α(t) is assumed to be independent of B_H(τ). The process W_(α,H)(t) is called a subdiffusion process. In particular, when H = 1/2, it is the subdiffusion process presented in Karipova and Magdziarz (2017), Kumar et al. (2017), and Magdziarz (2010). Figure 1 shows typical differences and relationships between the sample paths of the spot exchange rate in the FBS model and the subdiffusive FBS model. The rest of the paper proceeds as follows: In Section 2, we provide an analytic pricing formula for the European currency option in the subdiffusive FBS environment, and some Greeks of our pricing model are also obtained. Section 3 is devoted to analyzing the impact of scaling and long-range dependence on currency option pricing. Moreover, the comparison of our subdiffusive FBS model with traditional models is undertaken in this section. Finally, Section 4 draws the concluding remarks. The proofs of the Theorems are provided in the Appendix.
Pricing model for the European call currency option
In this section, we derive a pricing formula for the European call currency option of the subdiffusive FBS model under the following assumptions: (i) We consider two possible investments: (1) a stock whose price satisfies equation (2.1), where α + αH > 1, and r_d and r_f are the domestic and the foreign interest rates, respectively. (2) A money market account (2.2), where r_d shows the domestic interest rate. (ii) The stock pays no dividends or other distributions, and all securities are perfectly divisible. There are no penalties to short selling. It is possible to borrow any fraction of the price of a security to buy it or to hold it, at the short-term interest rate. These are the same valuation policies as in the BS model. (iii) There are transaction costs that are proportional to the value of the transaction in the underlying stock. Let k denote the round-trip transaction cost per unit dollar of transaction. Suppose U shares of the underlying stock are bought (U > 0) or sold (U < 0) at the price S_t; then the transaction cost is given by (k/2)|U|S_t in either buying or selling. Moreover, trading takes place only at discrete intervals. (iv) The option value is replicated by a replicating portfolio Π with U(t) units of stock and riskless bonds with value F(t). The value of the option must equal the value of the replicating portfolio to reduce (but not to avoid) arbitrage opportunities and be consistent with economic equilibrium. (v) The expected return for a hedged portfolio is equal to that from an option. The portfolio is revised every Δt, and hedging takes place at equidistant time points with rebalancing intervals of (equal) length Δt, where Δt is a finite and fixed, small time-step. Let C = C(t, S_t) be the price of a European currency option at time t with a strike price K that matures at time T. Then, the pricing formula for the currency call option is given by the following theorem. Theorem 2.1. C = C(t, S_t) is the value of the European currency call option on the stock S_t satisfying (1.5), where the trading takes place discretely with rebalancing intervals of length Δt. Then, C satisfies the partial differential equation with boundary condition C(T, S_T) = max{S_T − K, 0}.
The value of the currency call option is C(t, S_t) = S_t e^(−r_f(T−t)) Φ(d_1) − K e^(−r_d(T−t)) Φ(d_2), (2.6) and the value of the put currency option is P(t, S_t) = K e^(−r_d(T−t)) Φ(−d_2) − S_t e^(−r_f(T−t)) Φ(−d_1), (2.7) where d_1 and d_2 are given by (2.8) and (2.9), and Φ(·) is the cumulative normal distribution function. In what follows, the properties of the subdiffusive FBS model are discussed, such as the Greeks, which summarize how option prices change with respect to underlying variables and are critically important to asset pricing and risk management. The model can be used to rebalance a portfolio to achieve the desired exposure to certain risks. More importantly, by knowing the Greeks, particular exposures can be hedged from adverse changes in the market by using appropriate amounts of other related financial instruments. In contrast to option prices, which can be observed in the market, Greeks cannot be observed and must be calculated given a model assumption. The Greeks are typically computed using partial differentiation of the price formula. Letting α ↑ 1, from Equation (2.9), we obtain Remark 2.3. The modified volatility under transaction costs is given by (2.19), which is in line with the findings in Wang (2010).
Empirical studies
The objective of this section is to obtain the minimal price of an option with transaction costs and to show the impact of the time scaling Δt, transaction costs k, and subordinator parameter α on the subdiffusive FBS model. Moreover, in the last part, we compute the currency option prices using our model and make comparisons with the results of the G–K and FBS models. Moreover, the option rehedging time interval for traders to take is Δt = t^(α−1) Γ(α)^(−1) (2/π)^(1/(2H)) (k/σ)^(1/H). The minimal price C_min(t, S_t) can be used as the actual price of an option. In the absence of transaction costs, the formula shows that the fractal scaling Δt has no impact on option pricing if a mean self-financing delta-hedging strategy is applied in a discrete time setting, while the subordinator parameter α has a remarkable impact on option pricing in this case. In particular, Equation (3.4) displays that the fractal scaling Δt and the subordinator parameter α have a significant impact on option pricing. Furthermore, for k ≠ 0, from Equation (2.8), we know that option pricing is scaling dependent in general. Now, we present the values of the currency call option using the subdiffusive FBS model for different parameters. For the sake of simplicity, we will just consider the out-of-the-money case. Indeed, using the same method, one can also discuss the remaining cases: in-the-money and at-the-money. First, the prices of our subdiffusive FBS model are investigated for some Δt, and prices for different exponent parameters. The prices of the call currency option versus its parameters H, Δt, α and k are revealed in Figure 2. The selected parameters are S_t = 1.4, K = 1.5, σ = 0.1, r_d = 0.03, r_f = 0.02, T = 1, t = 0.1, Δt = 0.01, k = 0.01, H = 0.8, α = 0.9. Figure 2 indicates that the option price is an increasing function of k and Δt, while it is a decreasing function of H and α. For a detailed analysis of our model, the prices calculated by the G–K, FBS and subdiffusive FBS models are compared for both out-of-the-money and in-the-money cases. The following parameters are chosen: S_t = 1.2, σ = 0.5, r_d = 0.05, r_f = 0.01, t = 0.1, Δt = 0.01, k = 0.001, and H = 0.8, along with time to maturity T ∈ [0.1, 2], strike price K ∈ [0.8, 1.19] for the in-the-money case and K ∈ [1.21, 1.4] for the out-of-the-money case.
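The structure of the pricing formulas (2.6)-(2.7) can be sketched in code. Since the definitions of d_1 and d_2 in Eqs. (2.8)-(2.9), which encode the modified (subdiffusive) volatility, are not reproduced in this extraction, the sketch below (our own, not the paper's code) abstracts them through a total-variance input v2; with v2 = σ²(T − t) it reduces to the classical G–K formula. The rehedging-interval helper follows our reconstructed reading of the garbled Δt expression above, and both should be treated as assumptions rather than the paper's verbatim formulas.

    from math import exp, gamma, log, pi, sqrt
    from statistics import NormalDist

    Phi = NormalDist().cdf  # cumulative normal distribution function

    def currency_call_put(S, K, t, T, r_d, r_f, v2):
        """European currency call/put in the form of Eqs. (2.6)-(2.7);
        v2 is the total variance over [t, T] (model-dependent input)."""
        d1 = (log(S / K) + (r_d - r_f) * (T - t) + 0.5 * v2) / sqrt(v2)
        d2 = d1 - sqrt(v2)
        call = S * exp(-r_f * (T - t)) * Phi(d1) - K * exp(-r_d * (T - t)) * Phi(d2)
        put = K * exp(-r_d * (T - t)) * Phi(-d2) - S * exp(-r_f * (T - t)) * Phi(-d1)
        return call, put

    def rehedging_interval(t, alpha, H, k, sigma):
        """Reconstructed reading of the rehedging time-step
        Delta_t = t^(alpha-1) Gamma(alpha)^(-1) (2/pi)^(1/(2H)) (k/sigma)^(1/H);
        treat this as an assumption about the intended formula."""
        return t**(alpha - 1) / gamma(alpha) * (2 / pi)**(1 / (2 * H)) \
            * (k / sigma)**(1 / H)

    # Illustrative use with the parameter set quoted above; the naive
    # variance sigma^2 * (T - t) stands in for the subdiffusive one.
    S, K, sigma, r_d, r_f, T, t = 1.4, 1.5, 0.1, 0.03, 0.02, 1.0, 0.1
    print(currency_call_put(S, K, t, T, r_d, r_f, sigma**2 * (T - t)))
    print(rehedging_interval(t, alpha=0.9, H=0.8, k=0.01, sigma=0.1))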
Figures 3 and 4 show the differences between the theoretical values given by the G–K, FBS, and our subdiffusive FBS models for the in-the-money and out-of-the-money cases, respectively. As indicated in these figures, the values computed by our subdiffusive FBS model are better fitted to the G–K values than those of the FBS model for both in-the-money and out-of-the-money cases. Hence, judging from these figures, our subdiffusive FBS model seems reasonable.
Conclusion
Without using the arbitrage argument, in this paper we derive a European currency option pricing model with transaction costs to capture the behavior of the spot exchange rate, where the spot exchange rate follows a subdiffusive FBS model with transaction costs. In the discrete time case, we show that the time scaling Δt and the Hurst exponent H play an important role in option pricing with or without transaction costs, and that option pricing is scaling dependent. In particular, the minimal price of an option under transaction costs is obtained.
Figure 3. Relative difference between the G–K, FBS, and subdiffusive FBS models for the in-the-money case.
3,490.2
2018-01-01T00:00:00.000
[ "Economics", "Mathematics" ]
Planar-Target-Based Structured Light Calibration Method for Flexible Large-Scale 3D Vision Measurement The calibration of a structured light plane is the key technique in structured light 3D vision measurement. In this paper, a novel method of structured light plane calibration is presented. A simple 2D planar target is used, which can be freely moved to different positions in the visible space of a camera, provided that the target and the structured light plane intersect with each other. A corresponding world coordinate system is set for each target position. With data processing, the equations of the intersecting lines obtained at different positions are unified in the camera coordinate system automatically. The least squares method is used to fit the equation of the structured light plane. An indoor experiment under a large-scale measurement situation has been performed. The experimental results show that the proposed method is simple, universal, and of high efficiency.
Introduction
Structured light 3D vision measurement technology can achieve fast and noncontact measurement of 3D morphology characteristics. It is widely used in reverse engineering and online detection. The calibration of the structured light plane is the key technique in structured light 3D vision measurement, and it has been studied by many scholars. Liu et al. (1) proposed a calibration method based on a tooth-shaped target and a one-dimensional table. Zhou and Zhang (2) introduced a calibration method that takes two cameras and a planar target, which is placed along the structured light plane. However, these methods require special equipment, such as a special calibration target or an extra camera, and involve an inefficient and complex procedure that is not suitable for field calibration. Xu and Zhang, (3) Sun et al., (4) and Zhang (5) proposed calibration methods based on cross-ratio invariability. These methods only need a 2D planar target to be freely moved to different positions in the visible space of the camera, which is much more suitable for in-field application than the previous methods. However, the data processing procedure of these methods is still of low efficiency. In this paper, a novel method of structured light plane calibration is proposed. A simple 2D planar target is used. The target can be freely moved to different positions in the visible space of the camera, provided that the target and the structured light plane intersect with each other. A corresponding world coordinate system is set for each target position, as well as the transformation relationships between the world, camera, and image coordinate systems. With data processing, the equations of the intersecting lines obtained at different positions are unified in the camera coordinate system automatically. The least squares method is used to fit the structured light plane's equation.
Structured Light System Model
In the structured light 3D vision measurement system, the perspective imaging process can often be approximated by a pinhole imaging model. (6) A schematic of the pinhole perspective transformation for the camera is shown in Fig. 1. The world coordinate system o_w x_w y_w z_w is set with a vertex of the target as the origin point. The plane o_w x_w y_w coincides with the target plane, and the axis o_w z_w is perpendicular to the target plane. The origin point o_c of the camera coordinate system o_c x_c y_c z_c is the optical center of the camera.
The ideal image coordinate system o_I x_I y_I is set with the intersection point of the axis o_c z_c and the image plane as the origin point o_I. Because in the computer the origin point of an image is usually a vertex of the image, the computer image coordinate system o_0 u_0 v_0 is set with the top-left vertex of the image as the origin point o_0. Take line structured light as an example; the structured light plane projected by the light source intersects the target in the line L. P_w is a point on the line L, which has the coordinates P_w(x_w, y_w, z_w) in the world coordinate system o_w x_w y_w z_w, P_c(x_c, y_c, z_c) in the camera coordinate system o_c x_c y_c z_c, P_I(x_I, y_I) in the ideal image coordinate system o_I x_I y_I, and P_0(x_0, y_0) in the computer image coordinate system o_0 u_0 v_0. The conversion between the world coordinate system and the computer image coordinate system is as follows: where s is the scale factor and the matrix [R T] contains all the external parameters. R and T denote the relationships of rotation and displacement between the world coordinate system and the camera coordinate system, respectively. A stands for the matrix of internal parameters, while (u_0, v_0) is the coordinate of the point o_I in the computer image coordinate system, which is the origin point of the ideal image coordinate system. The term α (α = ƒ/dx) is the normalized focal length of the camera in the x-direction, where ƒ is the focal length of the camera and dx is the physical size of each pixel in the x-direction. Likewise, the term β (β = ƒ/dy) is the normalized focal length of the camera in the y-direction, where ƒ is the focal length of the camera and dy is the physical size of each pixel in the y-direction. The term c is the nonperpendicularity factor between the x-axis and y-axis of the camera, which depends on the structure of the camera's light-sensitive devices. Considering the coordinate systems created above, the coordinate z_w of the target in the world coordinate system is zero. Letting r_i be the i-th column of the rotation matrix R, the transformation between the world coordinate system and the computer image coordinate system can be simplified as follows: The 3×3 matrix H is called the homography matrix from the world coordinate system to the computer image coordinate system.
Calibration of Structured Light Plane
To calibrate the parameters of the structured light plane, it is necessary to solve three problems: the solution of the homography matrix H, the restoration of image distortion, and the extraction of the feature points' coordinates.
Solution of the homography matrix H
Since the homography matrix H has eight degrees of freedom, the matrix H with its scale factor can be solved from eq. (2), as long as at least four non-collinear corresponding points are obtained. Make the definitions as follows: According to the solution of eq. (3) described in eq. (4), the coefficients of the matrix A can be solved from the matrix B. However, the result of eq. (4) is often less than ideal in actual situations because of noise. Reference 7 gives the optimization method, which takes the linear result of eq. (4) as the initial value and nonlinearly optimizes the coefficients of the matrix A by the Levenberg-Marquardt algorithm. Once the internal parameter matrix A has been solved, the external parameter matrix [R T] can be decomposed from the matrix H. Letting h_i be the i-th column of the matrix H, the terms of the external parameter matrix are given by eq. (5).
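As an illustration of the linear step just described, the following minimal Python sketch (our own, not the paper's code) estimates the homography H from world/image correspondences by the direct linear transform; the Levenberg-Marquardt refinement from ref. 7 is omitted, and the correspondence data are assumed to be given.

    import numpy as np

    def estimate_homography(world_xy, image_uv):
        """DLT estimate of H from n >= 4 non-collinear correspondences.
        world_xy, image_uv: (n, 2) arrays of matching target/image points."""
        rows = []
        for (x, y), (u, v) in zip(world_xy, image_uv):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        # H (up to scale) is the right singular vector belonging to the
        # smallest singular value of the stacked coefficient matrix.
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]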
Owing to the effect of noise, the rotation matrix generally does not meet the orthogonality constraint. However, it can be orthogonalized by singular-value decomposition.
Image distortion model
The models above are all approximate, in that the ideal image point is taken to coincide with the actual image point. In the real world, a difference exists between the ideal image point and the actual image point owing to the optical distortion of the camera lens. To improve the measurement accuracy, it is necessary to introduce the distortion model of the camera lens. There are three main kinds of distortion: radial distortion, bias distortion, and thin prism distortion. (8) Radial distortion is mainly caused by defects of the lens. Bias distortion is mainly caused by the inconsistency between the optical center and the geometric center of the optical system. Thin prism distortion is caused by defects of the lens design and installation error. Owing to the very high accuracy of optical system design, processing, and installation, the thin prism distortion is currently very small for standard industrial cameras and can be ignored. Meanwhile, for measurements that do not demand extremely high precision, the tangential distortion caused by bias distortion can also be ignored. Thus, we only need to consider radial distortion in general applications, of which only the first two nonlinear coefficients need to be calculated. Complex distortion models not only fail to improve the measurement accuracy, but may also produce numerical instability. (7) Therefore, the practical distortion correction model is as follows: x_0′ = x_0 + (x_0 − u_0)[k_1(x_I^2 + y_I^2) + k_2(x_I^2 + y_I^2)^2], where (x_0, y_0) is the ideal image point coordinate in the computer image coordinate system and (x_0′, y_0′) is the image point coordinate obtained by actual observation. (x_I^2 + y_I^2) is the square of the polar radius from the image point to the center. The terms k_1 and k_2 are the radial distortion coefficients. The detailed method of solving the radial distortion coefficients is discussed in ref. 7.
Recovery method for image distortion
By image processing, we can obtain the coordinate of a point on the intersection line L in the computer image coordinate system. Furthermore, we can obtain the coordinate P_w(x_w, y_w, 0) of the point in the world coordinate system, with x_w and y_w obtained by eq. (8) and the constraint z_w = 0. For a single target image, almost all the points on the intersection line L can be retained as feature points, so we can obtain hundreds to thousands of feature points from a single image. Therefore, we can greatly improve the calibration efficiency. To obtain the equation of the structured light plane, the target needs to be placed at no fewer than two different locations, which yields the required non-collinear points. For the feature points obtained at each location, we can always unify their coordinates from the world coordinate system to the camera coordinate system by eq. (9). Then the least squares method is used to fit the structured light plane's equation.
Calibration for rotating light plane
Usually, the structured light plane needs to be scanned by a motor stage to measure the object's 3D geometry, as shown in Fig. 2. The parameters of the light plane change at each position.
Therefore, if we use the method described above to calibrate the parameters of the structured light plane, we must calibrate the equation of the corresponding structured light plane for every rotary position of the motor stage. At the same time, we need to ensure that the repeatability of the motor platform's rotation angle is good. An alternative method is to calibrate the rotation axis of the motor stage in addition to the existing calibration method. Equations for several structured light planes can be obtained by calibration. The intersection line of these planes is the rotation axis of the stage, and this intersection line is easily obtained by fitting. With this method we only need to accurately measure the offset between the current and initial angles when the structured light plane rotates. Then, we can obtain the equation of the structured light plane according to the precalibrated rotation axis.
Calibration of structured light plane
The experimental setup is shown in Fig. 2, which uses an STC-CLC152A-type complementary metal oxide semiconductor (CMOS) camera from Sensor Technologies, a 5-100 mm zoom lens from Guilin Optical Instrument Factory, and a 650 nm, 10 W line structured laser. A 16×12 grid board-like planar target is used in the system, as shown in Fig. 3. The spacing between grids is 10×10 mm^2. During calibration, the target can be freely moved to different positions in the visible space of the camera, provided that the target and the structured light plane intersect with each other, as shown in Fig. 4. To facilitate subsequent image processing, the camera shoots images of the target with the structured light source off and on, respectively, as shown in Figs. 3 and 4. After Fig. 3 is subtracted from Fig. 4, we obtain the image of the light line to be processed, as shown in Fig. 5. Using the line extraction algorithm described in ref. 9, we can obtain the sub-pixel coordinates of the line structured light's center. By locking the focal length of the lens, the internal parameters and radial distortion coefficients of the camera can be calibrated by the method described in ref. 7. The calibration result is shown in Table 1. By moving the target to different positions, we obtain 1530 feature points on three different lines in the camera coordinate system. The equation of the structured light plane, eq. (10), is obtained by fitting these feature points: 32x − 2y − 18z + 10000 = 0. (10) Figure 6 shows the relationship between the feature points and the fitted plane, where the dark points are the measured feature points and the plane made up of gray points is the light plane. To verify the accuracy of the calibration results, the simultaneous equations of the target plane and the fitted light plane are formed, from which we can obtain the equation of the intersection line of the target and the light plane. Comparing the measured point coordinates with the coordinates of the points on this intersection line, the root mean square error is 0.025 mm.
Calibration of rotation axis
As mentioned above, to calibrate the rotation axis corresponding to the light planes when the motor platform rotates, we need equations of the structured light plane at at least two different positions. Using the calibration method described above, we can obtain the three structured light planes' equations, including: 0.801467x + 0.135193y − z − 326.112512 = 0, 0.510527x + 0.135689y − z − 36.414167 = 0. (13)
Fig. 3 (left). Target when the structured light source is off. Fig. 4 (middle).
Target when the structured light source is on. Fig. 5 (right). Image of the light line after Fig. 3 is subtracted from Fig. 4.
Table 1. Calibration results of the internal parameters and radial distortion coefficients.
Parameter | Value
α | 1359.9 (pixels)
β | 1357.4 (pixels)
c | 0
u_0 | 533.9 (pixels)
v_0 | 394.4 (pixels)
k_1 | −0.09732
k_2 | 0.41023
Equation (14) stands for the corresponding rotation axis. The fitting residue is 0.012 pixels. Figure 7 shows the positional relationship between the structured light planes and the rotation axis. The intersection line of the three light planes is considered to be the rotation axis. We can see in Fig. 7 that the three light planes intersect with one another almost exactly in a single line, which also indicates the high calibration precision of these structured light planes.
Conclusions
In this paper, a novel method of structured light plane calibration is presented, which only uses a simple 2D target. The target can be freely moved to different positions in the visible space of a camera, provided that the target and the structured light plane intersect with each other. The residue of the calibrated light plane is only 0.025 mm when the observation distance is approximately 500 mm. The calibration results show that the method is simple, universal, and of high efficiency.
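For concreteness, the following hedged Python sketch (our own illustration) shows the two numerical ingredients used above: the least-squares fit of the structured light plane, written in the same form a·x + b·y − z + d = 0 as the calibrated equations, and the singular-value-decomposition re-orthogonalization of a noisy rotation matrix mentioned earlier. The feature points are assumed to be already unified in the camera coordinate system.

    import numpy as np

    def fit_light_plane(points_c):
        """Least-squares plane a*x + b*y - z + d = 0 through (n, 3)
        feature points given in camera coordinates."""
        x, y, z = points_c.T
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, d), *_ = np.linalg.lstsq(A, z, rcond=None)
        return a, b, d

    def orthogonalize(R):
        """Project a noisy 3x3 matrix onto the nearest rotation matrix."""
        U, _, Vt = np.linalg.svd(R)
        R_orth = U @ Vt
        if np.linalg.det(R_orth) < 0:  # guard against a reflection
            R_orth = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        return R_orth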
3,680.6
2013-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Spin-liquid behaviour and the interplay between Pokrovsky-Talapov and Ising criticality in the distorted, triangular-lattice, dipolar Ising antiferromagnet We study the triangular-lattice Ising model with dipolar interactions, inspired by its realisation in artificial arrays of nanomagnets. We show that a classical spin liquid forms at intermediate temperatures, and that its behaviour can be tuned by temperature and/or a small lattice distortion between a string Luttinger liquid and a domain-wall-network state. At low temperature there is a transition into a magnetically ordered phase, which can be first-order or continuous, with a crossover in the critical behaviour between Pokrovsky-Talapov and 2D-Ising universality. When the Pokrovsky-Talapov criticality dominates, the transition is essentially of the Kasteleyn type.
Introduction
Spin liquids, which can be found in both quantum and classical systems, are often defined by the absence of symmetry breaking at low temperature. This raises the question of what does happen at low temperature, and the variety of possible behaviour is comparable to that of the vast array of symmetry-broken phases [1]. One of the oldest and best-understood examples of a classical spin liquid is the Ising model on the triangular lattice with nearest-neighbour interactions. It has been known for many years that this fails to order even at zero temperature [2,3], instead forming a critical state with long-range, algebraic spin correlations [4,5]. The key feature of the spin liquid is that there is robust local ordering, associated with the requirement that every triangle has only two equivalent spins, but there are exponentially many configurations that respect this local order, resulting in long-range disorder [2]. This behaviour is not confined to T = 0, but holds to a good approximation throughout the region 0 ≤ T ≲ J_1, where J_1 is the nearest-neighbour coupling constant. For T > 0 the correlations between spins are exponentially rather than algebraically decaying, but the correlation length remains large within the low-temperature region. The whole region 0 ≤ T ≲ J_1 can therefore be considered as a spin liquid, and weakly-correlated, paramagnetic behaviour only occurs for T > J_1. The reason that the nearest-neighbour, triangular-lattice, Ising antiferromagnet (TLIAF) is so well understood is that it can be mapped onto a 1D model of free spinless fermions [2,4,6]. This almost magical transformation converts a strongly-interacting spin problem into a non-interacting fermion problem, thus making possible the calculation of virtually all quantities of interest directly in the thermodynamic limit. The key to this "magic" is that the constraints imposed by the strong interactions between spins map directly onto the Pauli exclusion principle, and can therefore be dealt with trivially in the fermionic picture. The situation becomes more difficult, and more interesting, when additional interactions couple spins beyond nearest neighbour. Further-neighbour interactions tend to stabilise an ordered phase at temperatures below a characteristic, further-neighbour energy scale, J_fn [7,8,9,10,11], but this leaves open the possibility of spin-liquid behaviour in the temperature window J_fn ≲ T ≲ J_1. The difficulty in analysing further-neighbour models is due to the fact that they cannot be mapped onto free-fermion models, and they therefore lack simple analytical solutions.
Nevertheless, the fermion picture can still provide useful insights, and in particular one can classify different regions of the spin liquid as being weakly or strongly coupled in a fermionic sense. Weak coupling can be expected for J_fn ≲ T < J_1, where the free-fermion model is only weakly perturbed, and therefore one expects to find the 2D classical equivalent of a Luttinger liquid. On the other hand, for T ∼ J_fn the fermionic model is strongly coupled, making simple predictions more difficult. There is an essentially infinite number of ways to include further-neighbour interactions, and rather than trying to study all possible combinations of couplings, we concentrate on dipolar coupling between spins, where the interactions fall off with the cube of the separation. Nevertheless, we suggest that the results we obtain are very likely to be qualitatively correct for any system in which the coupling constants are monotonically decreasing with distance (if this condition is not respected there are other possibilities [10,11]). The Hamiltonian we consider is therefore given by, where σ_i = ±1 denotes an Ising spin at site i, the sum over (i, j) includes all possible pairs of spins with i ≠ j, and r_i is the position of the ith spin. Our choice to concentrate on dipolar interactions is not made at random, but is motivated by experiment. In particular, artificial spin systems consisting of arrays of out-of-plane nanomagnets arranged on a triangular lattice are starting to be fabricated, and these realise the dipolar TLIAF to a good approximation [12,13]. While artificial systems have been studied for a number of years [14,15,16,17,18,19,20,21,22,23], recent advances have made it possible to make the nano-dots small enough that they remain thermally active at experimentally viable temperatures [24,25], motivating our study of the equilibrium properties of H_dip [Eq. 1]. While the primary experimental motivation comes from artificial spin systems, it is worth pointing out that there are many other realisations of TLIAFs with further-neighbour couplings. Examples include crystals of trapped ions [26], the disordered lattice structure of the spin-orbital liquid candidate material Ba_3CuSb_2O_9 [27] and frustrated Coulomb liquids [28]. The only tuneable parameter in H_dip [Eq. 1] is the temperature, and this already leads to subtle behaviour. Nevertheless, we also find it interesting to consider a second tuneable parameter, namely a small lattice distortion associated with squeezing the lattice, and this is parametrised by δ as shown in Fig. 1 (we only consider δ > 0). Such a distortion is both experimentally accessible, since it is possible to build it into the fabrication procedure of artificial spin systems, and convenient, since it can be used to tune the collective transition temperature relative to the single-nano-dot blocking temperature [13]. At least as important is that there is a good theoretical motivation to study the effect of distorting the lattice. For isotropic systems it has been shown that, by carefully choosing the relative strengths of the further-neighbour interactions, it is possible to stabilise an intermediate nematic phase that breaks the 3-fold rotational symmetry of the triangular lattice, but not the Ising symmetry [10,11]. This nematic phase can be characterised by a set of fluctuating strings that wind the system and form a disordered grill-like superstructure, and the density of the strings can be controlled by temperature.
At low temperatures the transition into an ordered phase takes place via a Kasteleyn transition, and shows Pokrovsky-Talapov critical behaviour, while at higher temperatures the nematic transitions into a paramagnet via a less-interesting first-order transition (see the phase diagram in Ref. [11]). The problem with realising such physics in isotropic systems is that one requires a non-monotonically decreasing interaction strength, with, for example, J_5 > J_4, and this is difficult to find in nature. By adding anisotropy of the type parametrised by δ, the symmetry distinction between the nematic and the paramagnet is lost, and therefore there is never a high-temperature phase transition between a paramagnet and a nematic, whatever the form of the further-neighbour interactions. However, the anisotropy drives the appearance of the most interesting features of the nematic, even for monotonically decreasing further-neighbour interactions, in particular the stabilisation at low temperature of a state with a tuneable density of fluctuating strings that shows Pokrovsky-Talapov critical behaviour approaching a Kasteleyn transition into a fluctuationless, low-temperature ordered phase. Since in such a situation the isotropic model is expected to have a direct first-order transition from the disordered to the ordered state, the addition of anisotropy also opens up the possibility that there is an unusual tricritical point, as a first-order transition turns into a Kasteleyn transition. As a foretaste of the results to come, we show in Fig. 1 a simplified phase diagram of H_dip [Eq. 1] as a function of T and δ. One can see that there is a spin-liquid region sandwiched between an ordered stripe phase and a weakly correlated paramagnet. The focus of this article will be on the nature of the spin liquid, as well as on the transition from the spin liquid into the ordered phase, and it can be seen that, as expected, this changes from a first-order to a second-order, essentially Kasteleyn, transition via a tricritical point. The boundary between the spin liquid and the paramagnet is a crossover and not a phase transition, and a naive guess puts this crossover at T ≈ J̄_1 = (J_1A + J_1B + J_1C)/3, where J_1A, J_1B and J_1C refer to nearest-neighbour interactions along A, B and C bonds (see Fig. 1 for the bond labelling). We provide better ways of determining the boundary between the spin-liquid and paramagnetic regions below, but find that they essentially agree with the simple estimate given by J̄_1. Our results for the dipolar TLIAF are presented in the main text of the article, since this is the most experimentally relevant form of the interactions. The extensive appendices discuss related, but simpler, models in which the couplings are short range. This allows important features of the TLIAF to be isolated and studied in more detail than is possible for the dipolar model, since there is both more freedom to separate competing energy scales, and the simpler models are more amenable to analytic calculations and larger-scale Monte Carlo simulations.
Methods
We employ two complementary methods to study the dipolar TLIAF: Monte Carlo simulation and mapping onto a model of strings/fermions.
Monte Carlo simulations
Monte Carlo is the standard way to simulate 2D Ising systems, but the dipolar TLIAF proves difficult to equilibrate. To overcome this problem we use a combination of update methods, including parallel tempering, single-spin-flip updates and worm updates.
Equilibration difficulties are most acute close to the transition temperature, and are related to a vanishingly small density of defect triangles (triangles with three equivalent spins). In consequence, local-update algorithms (e.g. single-spin-flip) have freezing problems. Our solution is to employ a worm algorithm [29,30,31] in which loops are constructed on the dual honeycomb lattice, taking into account the local interactions [32,33,11,34] (see Appendix B for a discussion of dimer configurations on the honeycomb lattice). The sets of Ising spins within these loops are then flipped with high probability, allowing the system to tunnel between very different configurations. For systems with local interactions (e.g. up to 5th neighbour) the loops of the worm algorithm can be constructed such that detailed balance is automatically obeyed, and therefore the algorithm is rejection free [11]. For dipolar interactions the construction of rejection-free updates is prohibitively time consuming, and instead the algorithm uses effective values of the local coupling constants, J_1^worm, J_2^worm and J_3^worm, to guide the loop creation. If these are well chosen, then accepting a flip of all the spins within the loop has a reasonable probability. In practice we found that these parameters have to be carefully tuned so as to target the configurations expected just above the phase transition. For the isotropic, dipolar TLIAF just above the phase transition, an acceptance probability of about 0.035 was found for the worm updates, which dropped to about 0.004 on crossing the transition, before continuing to decrease. By running a dense set of temperatures, parallel tempering steps were accepted with a probability of at least 0.8 across the transition, providing a considerable aid to equilibration. Increasing the transition temperature, for example by adding a lattice distortion, simplifies the simulations by increasing the density of defect triangles in the neighbourhood of the transition. Once this density is high enough, it is possible to use a simple single-spin-flip algorithm in combination with parallel tempering. Simulations are run on hexagonal clusters that preserve all the symmetries of the triangular lattice and have periodic boundary conditions. The linear size of the clusters is L and the total number of triangular-lattice sites is N = 3L^2. For the dipolar TLIAF we typically use cluster sizes of L = 24, L = 36 and L = 48. While in 2D the dipolar energy is convergent, it was found to be useful to use Ewald summation of the interactions to take into account their slow decay [35]. When performing Monte Carlo simulations, a number of physical quantities are sampled. These include the energy, E, and heat capacity, C, as well as the stripe order parameter and its associated susceptibility, where α ∈ {A, B, C}, i labels triangular-lattice sites and τ_i^α is the spin at the ith site for perfect stripe order parallel to the α bond direction. It is also useful to track the triangular average of the winding number, W = (W_1, W_2) (see below for the definition of the winding number, and also Appendix B), given by, which is designed such that W_tri = 1 for W = (L, L), W = (−L, 0) and W = (0, −L), while W_tri = 0 for W = (0, 0). Alternatively, one can track the Monte Carlo average of the density of strings, n_string (see Section B.2 for the definition of strings).
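To make the single-spin-flip component of the update mix concrete, the following minimal Python sketch (our own, heavily simplified) implements Metropolis updates for a triangular-lattice Ising antiferromagnet on a rhombic L × L cell with periodic boundaries. The interactions are truncated to nearest neighbours; the dipolar tail, Ewald summation, worm updates and parallel tempering described above are deliberately omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    # Six neighbours of a triangular lattice in oblique (rhombic) coordinates.
    NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def local_field(s, i, j, J1):
        L = s.shape[0]
        return J1 * sum(s[(i + di) % L, (j + dj) % L] for di, dj in NBRS)

    def metropolis_sweep(s, T, J1=1.0):
        """One sweep for E = +J1 * sum_<ij> s_i s_j (antiferromagnet)."""
        L = s.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # Flipping s_ij changes the energy by dE = -2 * s_ij * h_ij.
            dE = -2.0 * s[i, j] * local_field(s, i, j, J1)
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]

    L = 24
    spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L))
    for _ in range(200):
        metropolis_sweep(spins, T=1.0)

As noted above, such local updates freeze near the transition when the defect-triangle density becomes small, which is precisely why the worm and parallel-tempering machinery is needed in the actual simulations.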
Correlations can be understood by measurement of the spin structure factor, defined in the usual way in real and reciprocal space as, where r = r_i − r_j and q is in the Brillouin zone of the triangular lattice.
String/fermion mapping
In order to gain intuitive insights into the TLIAF that complement the Monte Carlo simulations, it is useful to consider some of the mappings that can be made. One option is to make a mapping onto a height model, which describes the configurations of the TLIAF in terms of a coarse-grained height field, and this is a particularly powerful way to capture the long-wavelength features of the nearest-neighbour TLIAF [36]. For the questions we are interested in here, we find it more intuitive to make a mapping onto strings [37,38], which can then be interpreted as the worldlines of fermions. The mapping onto strings requires the choice of one of the 3 principal lattice directions as being special, and while this is natural for the anisotropic TLIAF, the decision is arbitrary for the isotropic model. The strings live on the dual honeycomb lattice, whose bonds bisect those of the triangular lattice (see Fig. 2 and also Appendix B for the link to dimer mappings). Along the special direction, each honeycomb bond bisecting an antiferromagnetically-aligned, triangular-lattice bond is assigned to be a segment of a string, while in the other two directions honeycomb bonds bisecting ferromagnetically-aligned, triangular-lattice bonds are assigned as string segments. The string-free configuration is thus seen to correspond to an Ising stripe configuration, with stripes of aligned Ising spins parallel to the special direction. Using the above definition, each honeycomb-lattice site can be touched by either 0 or 2 string segments, and this ensures the continuity of the strings. For a system with periodic boundary conditions the strings form closed loops, and no two strings can touch, let alone cross, one another (see Fig. 2). If a reference line is chosen that winds the system, the number of strings crossing it has to be even, and therefore string parity is conserved. If no defect triangles are present, then there is no way for a string to turn back on itself, and strings both wind the system and have a fixed length; in this sense the strings are taut. Defect triangles act as sources or sinks of pairs of strings, allowing them to turn back on themselves and thus either form local loops or long floppy strings that wind the system. In the absence of defect triangles, the string degrees of freedom provide a way of labelling the Ising configurations according to a pair of winding numbers. Two reference lines can be chosen that wrap around a periodic cluster of linear size L, and the number of strings crossing each reference line is related to the associated winding number according to W_i = [L − no. of strings crossing ref. line i] (see Appendix B for the link to dimer representations of the Ising variables). In order to transition between Ising configurations with different winding numbers, it is necessary either to create a pair of defect triangles and transport one of them around the system before they recombine, or to make a non-local change of the Ising configuration. The properties of the strings allow them to be interpreted as the worldlines of spinless fermions, as is common for 2D statistical-mechanics problems [39,40,41,42,43,44].
The strings "travel" in the direction perpendicular to the special lattice direction, and this is interpreted as the imaginary time direction (see Fig. 2). At a minimum, the quantum Hamiltonian has to include a chemical potential measuring the energy cost of creating a string/fermion, a hopping term that allows the strings to move in the direction perpendicular to that of imaginary time, and pair creation and annihilation terms that take into account the effect of defect triangles. For the case of the nearest-neighbour triangular lattice these three terms are sufficient, and there is an exact mapping onto, where the parameters of the 1D quantum model can be expressed in terms of those of the 2D classical model (see Appendix C, Appendix D and Appendix G for details). The simplicity of the fermionic model is due to the fact that the string-string interactions in the nearest-neighbour Ising model are purely entropic (they arise from the non-crossing constraint), and this maps onto the fermionic Pauli exclusion principle. Once further-neighbour Ising interactions are included, it is necessary to take into account energetic string-string interactions, and these map onto fermionic interactions. However, a phenomenological free-fermion model can still be applicable when the string/fermion density is low, and can provide quantitative insights into the critical behaviour of the Ising model. Furthermore, qualitative insights into the Ising system can be gained from considering the form of the fermion-fermion interactions.
Monte Carlo simulation of the dipolar TLIAF
Here we determine the main physical features of the dipolar TLIAF using Monte Carlo simulation. A more detailed discussion of their physical origin is postponed until Section 5. The ground state of the dipolar TLIAF is 6-fold degenerate and consists of alternating stripes of equivalent Ising spins running parallel to A, B or C bonds (see Fig. 5). The 3-fold degeneracy associated with the choice of stripe direction is multiplied by a 2-fold Ising degeneracy associated with a global spin flip, giving the overall 6-fold degeneracy. At low temperature there is a stripe-ordered phase, which is dominated by the ground-state configurations. Local fluctuations are highly suppressed because they involve the creation of pairs of defect triangles, and the associated energy cost is large. In principle it is also possible to create strings that wind the system, but these are forbidden in the thermodynamic limit, as they cost a finite free energy per unit length [10,11].
(Fig. 3 caption: (a) The maximum of χ_W scales approximately with 1/N, as is standard for a 1st-order phase transition, leading to our estimate that in the thermodynamic limit the transition temperature is T_1/J_1 = 0.1845 ± 0.0010. (b) Energy-histogram analysis of an L = 36 cluster for temperatures close to the χ_W-maximum of T_1/J_1 ≈ 0.183. Energies are measured in units of J_1 and energy bins have width 0.0005J_1. A sharp low-energy peak, associated with an almost fluctuationless low-temperature phase, is separated from a Gaussian peak associated with the high-temperature phase by an energy gap of approximately 0.03J_1 per site. The separation of the two peaks is evidence of a first-order transition.)
On further increasing the temperature, there is a transition from the stripe phase into a disordered phase. A previous study determined the transition temperature to be T/J_1 ≈ 0.18, but was not able to determine the nature of the transition or achieve equilibration across the transition [45].
Our simulations, which do achieve equilibration, show that the transition is first order, and this is clear from the histogram analysis of the energy close to the transition temperature shown in Fig. 3. The transition temperature can be determined from the peak in the heat capacity, C, the order-parameter susceptibility χ_stripe [Eq. 2] or the winding-number susceptibility χ_W [Eq. 3]. The positions of the peaks in these different quantities coincide for a given system size, L, and we show results for χ_W in Fig. 3. The position of the peak shows a weak L dependence, and using the standard scaling of a first-order transition temperature with 1/N, we determine a transition temperature of T_1/J_1 = 0.1845 ± 0.0010. The first-order nature of the transition is also clear from simulations of the heat capacity, which are shown in Fig. 4. Integrating C/T from infinity shows that the disordered state just above the phase transition has an entropy per site of S/N ≈ 0.22. While this is less than the Wannier entropy S_wan/N = 0.323... associated with the ground state of the nearest-neighbour TLIAF [2], it is still considerable. The low-temperature stripe phase is essentially fluctuationless and has S/N = 0, showing that there is a significant entropy release at the first-order transition.
(Fig. 4 caption: Error bars are smaller than the point size unless explicitly shown. (a) The heat capacity (red) shows a broad maximum centred on T = 0.8J_1, which corresponds to the freezing out of defect triangles, and a sharp peak at T = 0.185J_1 due to a first-order phase transition. This can be compared to the case of the nearest-neighbour TLIAF (blue), which shows a similar broad maximum centred on T = 1.2J_1, but no low-temperature phase transition. (b) The entropy per site (red) is calculated by integrating the heat capacity from infinity. The entropy passes through S_wan/N at T = 0.5J_1, but does not show an extended plateau, unlike the nearest-neighbour TLIAF (blue). At the phase transition there is an entropy jump of ΔS/N ≈ 0.22. (c) The defect-triangle density, n_def. (Inset) At low temperatures n_def follows an activated behaviour, and the blue line is the best fit to Eq. 29 with A = 0.24 and E_def = 1.60 (see Appendix A).)
The nature of the correlations can be accessed via the spin structure factor, and a representative set of examples is shown in Fig. 5. In the stripe-ordered phase there are Bragg peaks associated with the three different stripe directions, and these occur at q_stripe = (0, 2π/√3) and symmetry-related wavevectors. At temperatures just above the transition, the structure factor develops weight on the whole of the Brillouin zone boundary, but remains peaked at q_stripe, despite the significant entropy change. Small further increases in temperature result in the growth of sharp structure-factor peaks at q_tri = (2π/3, 2π/√3), and the disappearance of the peaks at q_stripe. Further increasing the temperature results in the structure-factor weight becoming more diffuse, and the peaks at q_tri become less sharp. The crossover from a highly-correlated paramagnet with sharp structure-factor peaks to a weakly-correlated paramagnet with a diffuse structure factor is governed by the presence or absence of defect triangles. The temperature evolution of the density of defect triangles, n_def, is shown in Fig. 4, where n_def is defined as the total number of triangular plaquettes with three equivalent spins divided by the total number of plaquettes, 2N.
Just above the transition temperature the density is very low, and at T/J_1 = 0.2 one finds n_def ≈ 10^−4. On the other hand, in the uncorrelated, infinite-temperature limit the defect-triangle density saturates at n_def = 0.25, since triangles can take 8 possible, equally probable configurations, 2 of which have three spins aligned. For simplicity we take the crossover from strong to weak correlation to be at n_def = 0.025 (i.e. 10% of the saturation value), and this occurs at T = 0.75J_1, which matches the broad peak in the heat capacity well (see Fig. 4). It is possible to estimate the typical energy of an isolated defect triangle by making fits to n_def in the temperature range T ≲ J_1. This shows activated behaviour, and a crude estimate of the functional form is derived in Appendix A. The best fit, shown in Fig. 4, corresponds to a defect-triangle energy of E_def = 1.60J_1. This shows that one effect of the further-neighbour interactions is to slightly decrease the typical energy of a defect triangle relative to the nearest-neighbour TLIAF, where E_def = 2J_1.
(Fig. 5 caption: (a) In the stripe-ordered phase there are Bragg peaks at q_stripe = (0, 2π/√3) and symmetry-related wavevectors, associated with the stripe ordering. (b) Above the transition, weight develops around the BZ boundary, but is peaked at q_stripe and q_tri = (2π/3, 2π/√3), and this is associated with a domain-wall-network configuration. (c) Further increasing the temperature results in the peaks at q_tri dominating over those at q_stripe, and this is associated with a switch from attractive to repulsive string-string interactions. (d) At still higher temperatures the peaks at q_tri remain sharp, and the system can be described as a string-Luttinger liquid. (e) Once the temperature becomes comparable with J_1, the peaks at q_tri lose their sharpness, and this is associated with the proliferation of defect triangles and the loss of significant correlation.)
Monte Carlo simulation of the anisotropic, dipolar TLIAF
Next we turn to the distorted triangular lattice, and show that even quite small distortions can lead to significant changes in the physical behaviour compared to the isotropic lattice. The distortion, which is parametrised by δ, leaves the length of the A bonds invariant, while reducing the length of the B and C bonds, and therefore breaks the 6-fold ground-state degeneracy of the isotropic lattice down to a 2-fold degeneracy. Stripes form parallel to the A bonds, and the 2-fold ground-state degeneracy is simply due to an Ising degree of freedom, associated with a global flip of all the spins. As in the isotropic case, the transition from the stripe phase to the disordered phase can be located using the peaks in the heat capacity, C, the order-parameter susceptibility χ_stripe [Eq. 2] or the winding-number susceptibility χ_W [Eq. 3], and the resulting phase diagram is shown in Fig. 6. Histogram analysis of both n_string and E shows that the transition changes from first order at low δ (see Fig. 3) to second order at high δ (see Fig. 6), and the change occurs at δ_tri ≈ 0.02. However, this type of analysis is not a very precise gauge of δ_tri, due both to finite-size effects and to the fact that the first-order nature of the transition becomes weaker approaching δ_tri. In Section 5 we use finite-size scaling analysis to determine how the critical exponents depend on δ, and thus demonstrate that the change from first to second order occurs via a tricritical point located at δ = δ_tri = 0.022 and T = T_tri = 0.343.
The spin-liquid region, in which strong local correlation co-exists with long-range disorder, is found to extend until approximately δ ≈ 0.15, with the associated temperature window decreasing with increasing δ. This is shown in Fig. 6, where we continue to use a defect-triangle density of n_def = 0.025 (10% of the saturation value) to signify the upper extent of the spin liquid.

The structure factor shows signs of a second-order transition for δ > δ_tri and can also be used to characterise the disordered region. For δ_tri ≤ δ ≲ 0.1 satellite peaks appear at the transition either side of q_stripe = (0, 2π/√3(1 − δ)), and gradually shift towards q_tri = (2π/3, 2π/√3(1 − δ)) as the temperature is increased. We will argue below that these follow the string density, and that the structure factor is peaked at q = q_string = (πn_string(T, δ), 2π/√3(1 − δ)). In the spin-liquid region these peaks are sharp, and this is associated with a long spin-spin correlation length. In the weakly-correlated region the structure factor becomes diffuse, signalling the breakdown of strong correlations and a short spin-spin correlation length. For δ ≳ 0.15 the spin-liquid region is totally suppressed and, above the transition, the structure factor has peaks at q_stripe. In this region the critical behaviour shows the characteristics of a usual second-order Ising transition into a symmetry-broken stripe phase, with a structure-factor peak developing in the disordered region at the ordering vector.

Discussion and analysis

In order to gain physical insight into the Monte Carlo simulation results presented in Sections 3 and 4, it is useful to analyse them in terms of the string model introduced in Section B.2. We first discuss the nature of the phase transitions and then move on to the nature of the correlations within the classical spin-liquid region.

The nature of the phase transitions

Depending on the value of the anisotropy parameter, δ, the phase transition has different character, with a first-order transition at δ < δ_tri turning into a second-order transition at δ > δ_tri via a tricritical point at δ = δ_tri (see Fig. 6). The nature of the second-order transition for δ > δ_tri is complicated by the fact that it shows a combination of Pokrovsky-Talapov and Ising criticality, with the details depending on δ and T. Here we show that the criticality is 2D Ising over some potentially narrow temperature window close to the transition, and then crosses over to Pokrovsky-Talapov outside this region (see Appendix D for a discussion of similar behaviour in a simpler setting).

Figure 7: (Top rows) In the spin-liquid region the structure factor has peaks at q_string that coexist close to the phase transition with an additional peak at q_stripe. At higher temperatures the peaks at q_stripe are suppressed, leaving the peaks at q_string more visible. (Bottom row) At large values of δ (here δ = 0.2) the spin-liquid region is absent, and S(q) is peaked at q_stripe, displaying the usual behaviour associated with a second-order transition.

The width of the Ising temperature window is exponentially suppressed as δ decreases, and for small δ (e.g. δ = 0.05) the transition is to all intents and purposes in the Pokrovsky-Talapov universality class (i.e. is of Kasteleyn type). First we consider the extreme case where defect triangles are completely absent and the transition is strictly in the Pokrovsky-Talapov universality class (see also Appendix C). While this is never rigorously true in the dipolar TLIAF, it is a good approximation at low T.
The partition function of the 2D classical model can be mapped onto that of a 1D quantum model with the Hamiltonian, where T_c is the critical temperature of the dipolar TLIAF, and a_0 and b are phenomenological parameters. In the simpler case of the nearest-neighbour TLIAF, the exact 1D quantum Hamiltonian is given in Eq. 5, and matching this to Eq. 6 requires µ + 2t → a_0(T − T_c), t → b and Δ → 0. For the dipolar TLIAF such a microscopic matching of parameters is not possible, but the phenomenological dispersion given in Eq. 6 is valid as long as the probability of creating a defect triangle is very low. Furthermore, truncation of the dispersion beyond the q² term remains a good approximation as long as the string density is low.

For the fermion model, the T that appears in Eq. 6 does not have the meaning of temperature, and is simply a tuneable parameter that controls the transition from an insulating phase with no fermions (T < T_c) to a metallic phase with gapless excitations (T > T_c) at a Fermi wavevector q_f = (−a/b)^(1/2). Once this is mapped back to the dipolar TLIAF, T regains the meaning of temperature, the insulating fermionic phase corresponds to the string vacuum (the stripe phase) and the metallic fermionic phase to the spin-liquid phase, in which there is a finite density of fluctuating strings that wind around the system. The density of strings in the 2D classical model is equal to the density of fermions in the 1D quantum model, and can be calculated as,

In the fluctuating phase the string density can be expressed as n_string ∝ (T − T_c)^β, with the critical exponent β = 1/2. This is characteristic of Pokrovsky-Talapov-type critical behaviour [39,40] associated with a Kasteleyn transition [46]. To make a connection with previous work [40], it is useful to determine the free energy of the 2D classical model, which is just given by the energy of the 1D quantum model, resulting in, where n_string = q_f/π. In Ref. [40] it was shown that the cubic term describes the string-string repulsion.

For the dipolar TLIAF, the above analysis should apply to the second-order phase transition at anisotropy values δ ≳ δ_tri, where the transition temperature is low enough that there are very few defect triangles in the system. In order to test this, we perform simulations of H_dip [Eq. 1] for δ = 0.05 at a range of system sizes. While finite-size effects make it hard to directly measure the exponent β in simulations, it is possible to write down a scaling hypothesis for n_string and use this to determine β. We consider, where g_PT is an unknown scaling function and ζ_∥ is the correlation length in the direction parallel to the domain walls, defined as the lengthscale at which the asymptotic form of the structure factor becomes valid [47]. In the critical region it is expected that ζ_∥ ∝ (T − T_K)^(−ν_∥) with ν_∥ = 1, and this can be compared to the correlation length perpendicular to the domain walls, which is given by ζ_⊥ ∝ (T − T_K)^(−ν_⊥) with ν_⊥ = 1/2 [40,47,48]. Since we typically consider hexagonal-shaped clusters, finite-size effects will be dominated by ζ_∥, since this diverges faster than ζ_⊥. In order to quantitatively test the goodness of the data collapse according to the scaling hypothesis [Eq. 9], we use the measure proposed in [49]. The best collapse was found for β = 0.44 ± 0.07 and ν_∥ = 0.92 ± 0.2 (see Fig. 8), which is consistent with the expected Pokrovsky-Talapov exponents of β = 1/2 and ν_∥ = 1.
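As an indication of how such a "goodness of collapse" analysis can be organised, the sketch below rescales the curves and minimises the spread between system sizes. It is a simplified stand-in for the measure of Ref. [49], not a reproduction of it, and it assumes the generic finite-size form n_string = L^(−β/ν) g((T − T_K) L^(1/ν)), which is our own reading of the scaling hypothesis rather than the verbatim Eq. 9:

```python
import numpy as np
from itertools import product

def collapse_cost(data, Tk, beta, nu):
    """Spread between rescaled curves; data maps L -> (T array, n_string array).

    Assumes n_string = L**(-beta/nu) * g((T - Tk) * L**(1/nu)); the cost is
    small when all system sizes fall on a single master curve g.
    """
    xs = [(T - Tk) * L**(1.0 / nu) for L, (T, n) in data.items()]
    ys = [n * L**(beta / nu) for L, (T, n) in data.items()]
    grid = np.linspace(max(x.min() for x in xs), min(x.max() for x in xs), 200)
    curves = np.array([np.interp(grid, x, y) for x, y in zip(xs, ys)])
    return curves.var(axis=0).mean()

def best_exponents(data, Tk):
    """Brute-force search over (beta, nu); adequate for a coarse estimate."""
    trials = product(np.linspace(0.3, 0.6, 31), np.linspace(0.7, 1.3, 31))
    return min((collapse_cost(data, Tk, b, v), b, v) for b, v in trials)
```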
Thus we conclude that at low values of δ the second-order transition shows critical behaviour associated with a Kasteleyn-type transition, which is driven by the appearance of non-local strings that wind the system.

Figure 8: (a) In the tricritical region the best data collapse is found for m_stripe at δ = 0.022 using the scaling hypothesis given in Eq. 14 with Ising tricritical exponents β = 1/4 and ν = 1. (b) Data collapse at δ = 0.05 using the scaling hypothesis given in Eq. 9 for Pokrovsky-Talapov critical behaviour and for exponents that minimise the "goodness of collapse" measure proposed in [49]. This gives β = 0.44 ± 0.07 and ν = 0.92 ± 0.2, which are consistent with the expected Pokrovsky-Talapov exponents β = 1/2 and ν = 1. (c) Data collapse at δ = 0.2 using the scaling hypothesis given in Eq. 14 and the Ising critical exponents β = 1/8 and ν = 1.

In reality H_dip [Eq. 1] supports a small density of defect triangles at any finite temperature, and even a tiny density of defect triangles drives the transition to be in the Ising universality class (see also Appendix D). However, the temperature window over which Ising criticality applies is exponentially suppressed at small δ. In order to better understand the nature of the suppression and the crossover between Ising and Pokrovsky-Talapov criticality, we consider the phenomenological 1D quantum Hamiltonian, where z_def = e^(−E_def/T) and E_def is a measure of the energy cost of a defect triangle. Diagonalisation via a Bogoliubov transformation results in, where a_q and a†_q are fermionic operators. In terms of fermions, the parameter T controls a transition from a gapped, insulating phase at T < T_c to a gapped, p-wave-superconducting phase at T > T_c via a gapless point at T = T_c. In terms of the 2D classical model this maps onto the transition from the stripe phase at T < T_c to a phase with fluctuating strings at T > T_c.

The Ising/Pokrovsky-Talapov nature of the criticality is encoded in the location of the minimum of ω_q [Eq. 11]. Ising criticality is associated with a minimum at q = 0, and this occurs for T_c < T < T_Is, where T_Is = T_c + 8z_def²/(a_0 b). The 2D Ising nature of the criticality in this temperature window is clear from considering the correlation length, which goes as ξ_Is ∼ |T − T_c|^(−1) [43]. For T > T_Is the dispersion minimum moves away from q = 0 to q_min = (−a/b − 8z_def²/b²)^(1/2) and the system enters the crossover region between Ising and Pokrovsky-Talapov universality. Pure Pokrovsky-Talapov critical behaviour is recovered in the limit T − T_c ≫ T_Is − T_c, where q_min = (−a/b)^(1/2) is recovered, and therefore n_string ∝ (T − T_c)^(1/2). In the case T ≪ E_def, the temperature width of the Ising window is exponentially suppressed due to the z_def² factor, and the system shows Pokrovsky-Talapov characteristics over all accessible temperatures.

In the critical region the string density and its derivative are given by, In the region T_c < T < T_Is it is possible to extract analytic expressions for these quantities in terms of elliptic integrals. However, these expressions are not so enlightening, and we instead show a numerical evaluation in Fig. 9. For z_def ≠ 0, the string density is finite both above and below the transition, and takes the value n_string = 2z_def/(πb) at T = T_c.
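The dispersion itself is not reproduced above, but its structure can be pinned down, up to convention-dependent prefactors, from the quoted results. A form consistent with the stated q_min, with T_Is = T_c + 8z_def²/(a_0 b), and with n_string(T_c) ∝ z_def/b is (a reconstruction, not necessarily the verbatim Eq. 11):

\[
\omega_q^2 = \left( a + b\,q^2 \right)^2 + 16\, z_{\mathrm{def}}^2\, q^2, \qquad a = -a_0 (T - T_c),
\]
\[
\frac{\partial\, \omega_q^2}{\partial (q^2)} = 0 \;\;\Rightarrow\;\; q_{\min}^2 = -\frac{a}{b} - \frac{8 z_{\mathrm{def}}^2}{b^2},
\]

so that q_min² > 0 precisely for T > T_c + 8z_def²/(a_0 b) = T_Is, while for T_c < T < T_Is the minimum sits at q = 0, where the gap ω_0 = |a| ∝ |T − T_c| gives the quoted 2D Ising correlation length ξ_Is ∼ |T − T_c|^(−1).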
Since n_string is finite on both sides of the transition, it is not technically a good order parameter, but it still provides a useful indicator of the transition temperature, since dn_string/dT shows a logarithmic divergence, dn_string/dT ∝ log|T − T_c| (see Fig. 9).

At intermediate values of δ (e.g. δ ≈ 0.1), the dipolar TLIAF should show a crossover between Ising and Pokrovsky-Talapov criticality. As is standard in such situations, an exponent φ can be used to parametrise the crossover [50,51]. This appears in the scaling form of physical quantities, and we consider the defect-triangle density, which is expected to scale as, where α is associated with the Pokrovsky-Talapov critical behaviour and is therefore expected to take the value α = 1/2 [48]. As shown in Fig. 10, scaling collapse of Monte Carlo simulation data in the range 0.06 ≤ δ ≤ 0.1 works well for α = 0.5 and φ = 1.12. This compares well to a similar scaling analysis of the nearest-neighbour model, where n_def can be calculated directly in the thermodynamic limit, and we find α = 0.5 and φ = 1 (see Appendix D).

At large values of δ there is a high density of defect triangles at the transition, and one expects Ising criticality to apply over a wide temperature window. That this is indeed the case can be shown by analysing simulation data at δ = 0.2. The standard scaling hypothesis for the Ising order parameter, m_stripe [Eq. 2], is, and it is expected that data collapse occurs for β = 1/8 and ξ_Is ∼ |T − T_c|^(−1). It can be seen in Fig. 8 that this results in good collapse of the simulation data.

At δ = δ_tri = 0.022 there is a tricritical point, and the critical behaviour is different from that of the second-order transition. Since the transition temperature at the tricritical point is low, one would naively expect that the associated low density of defect triangles would result in Pokrovsky-Talapov tricritical behaviour (see Appendix E for a discussion of Pokrovsky-Talapov tricriticality). Pokrovsky-Talapov tricriticality can be described by the 1D quantum dispersion, where a > 0, c > 0 and b(δ − δ_tri) is an odd function of δ − δ_tri that changes sign when δ = δ_tri. It follows that exactly at the tricritical point (b = 0) one finds, resulting in a critical exponent of β = 1/4. In terms of the free energy of the 2D classical model, the tricritical point occurs when the cubic term disappears, resulting in, Since the cubic term controls the string-string interaction, changing its sign is equivalent to going from a repulsive interaction associated with a second-order transition to an attractive interaction associated with a first-order transition.

For the tricritical point to be effectively in the Pokrovsky-Talapov-tricritical universality class, it is necessary that the Ising temperature window is negligible. The problem with this is that the expression we previously calculated, T_Is = T_c + 8z_def²/(a_0 b), breaks down as b → 0 at the tricritical point. Including the fourth-order term in ω_q [Eq. 15] results in T_Is = T_c + 6z_def^(4/3)/(a_0 c^(1/3)). The exponential suppression of the Ising temperature region with E_def/T is thus less pronounced than at the critical point, and, depending on the value of c, this could in principle lead to a wide temperature window of Ising tricriticality. Monte Carlo simulations allow us to test whether Pokrovsky-Talapov or Ising tricriticality dominates, and come down in favour of a significant Ising-tricritical window. This can be seen in Fig. 8, where m_stripe [Eq. 2] shows convincing data collapse for 2D Ising tricritical exponents.
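The quartic dispersion invoked in this discussion (Eq. 15) is also not reproduced above; a minimal form with the stated properties, written in the same phenomenological variables and again offered as a reconstruction rather than a quotation, is:

\[
\epsilon_q = a + b(\delta - \delta_{\mathrm{tri}})\, q^2 + c\, q^4, \qquad a = -a_0 (T - T_c).
\]

At the tricritical point the q² coefficient vanishes, so the Fermi point is fixed by a + c q_f⁴ = 0, giving q_f = [a_0(T − T_c)/c]^(1/4) and hence n_string ∝ q_f ∝ (T − T_c)^(1/4), i.e. β = 1/4, as stated in the text.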
At higher temperatures the system presumably crosses over to Pokrovsky-Talapov tricriticality, but the Ising temperature window is wide enough that this is difficult to ascertain.

For δ < δ_tri the transition is first order. In this situation the string-string interaction is attractive, and the string density jumps at the transition. Expansion of the free energy in terms of the string density is therefore only possible close to the tricritical point, where the jump in the string density is relatively small. Close to the first-order transition the effective fermion degrees of freedom are long lived, since decay of fermions is associated with the, essentially negligible, presence of defect triangles in the 2D classical model. As a result, even though the fermions are strongly interacting, the interactions just renormalise the free-fermion terms in the Hamiltonian, but don't generate new terms. Thus one can still think in terms of an effective free-fermion dispersion, ω_q. However, the deeper one goes into the first-order region, the larger the jump in the string density and the more (even) powers of q have to be retained in the expansion of ω_q. This is because, in order to generate a jump in n_string, a finite region of q values must have a flat dispersion with ω_q = 0 at the transition, and the larger this region, the more powers of q are required to capture it effectively (strictly, all powers of q are required for a region of ω_q = 0, but close to the tricritical point this effect can still be essentially captured by a finite expansion). The predictive power of the phenomenological theory thus reduces away from the tricritical point due to the rapid increase in the number of coefficients.

Taking all the results of this section together, one can see that, with relatively few parameters, it is possible to understand the full gamut of critical behaviour in the dipolar TLIAF. The important parameters are the reduced temperature, (T − T_c)/T_c, which changes sign at a second-order phase transition, the distortion-dependent parameter b, which changes sign at the tricritical point, and the ratio E_def/T_c, which determines the temperature window of Ising criticality at the transition.

Correlations between spins

Next we turn to the spin correlations, the nature of which can be used to attain a more detailed understanding of the phase diagram. These can be probed via the spin structure factor, S(q) or S(r) [Eq. 4].

The correlations are very simple in the stripe-ordered phase, which has Bragg peaks in S(q) at q = q_stripe = (0, 2π/√3(1 − δ)) and symmetry-related wavevectors (see Fig. 5 and Fig. 7). Since at low temperatures the stripe phase is essentially fluctuationless, virtually all the spectral weight is contained in the Bragg peak, and in real space there is no decay of the spin correlations with separation. For larger values of δ the stripe-ordered phase survives to higher temperatures, and for T ∼ J_1 local fluctuations around the stripe ground state, associated with pair creation of defect triangles, become significant, resulting in some diffuse scattering surrounding the Bragg peak.

Figure 12: Typical string configurations in the domain-wall-network, string-Luttinger-liquid and paramagnetic regions. In the domain-wall-network region strings typically wind the system and attractive string-string interactions cause them to bind together.
In the string-Luttinger-liquid region the strings also tend to wind the system, but repulsive interactions result in a grill-like superstructure, with strings avoiding one another as far as possible. In the paramagnet there are many defect triangles that act as sources and sinks of pairs of strings, resulting in the strings being floppy and forming short closed loops.

More interesting is the disordered phase, which shows three qualitatively different regimes of spin correlations. At high T there is a weakly-correlated paramagnet in which the spin correlations are short ranged; at lower T and for δ < 0.15 there is a strongly-correlated regime with longer-range correlations that we name a string Luttinger liquid; and for T < T_tri and δ < δ_tri there is a different strongly-correlated regime that we call a domain-wall network (see Fig. 11). It is instructive to discuss each of these regimes in more detail, and first we turn to the string Luttinger liquid (see also Appendix E).

In this regime the density of defect triangles is low, and there is a repulsive interaction between the strings. Since the strings repel one another they form a (disordered) grill-like superstructure, where the average spacing between the strings depends on the string density, n_string(T, δ) (see Fig. 12). As a result of this superstructure, the structure factor is peaked at q = q_string(T, δ) = (πn_string(T, δ), 2π/√3(1 − δ)) and related wavevectors (see Fig. 7). However, since the strings are fluctuating, the peaks are not Bragg peaks, and in real space spin correlations decay to zero for large enough separations. In the absence of defect triangles spin correlations in real space decay algebraically, while in the presence of defect triangles the decay is exponential at large enough distances. We make the ansatz that the real-space spin-correlation function takes the asymptotic form, where ξ_⊥ and ξ_∥ are correlation lengths in the directions perpendicular and parallel to the strings (in the isotropic case ξ_⊥ = ξ_∥). The exponential nature of the decay only becomes apparent when the spin separation is comparable to the correlation length ξ_⊥ or ξ_∥. For low defect-triangle densities, this correlation length is typically large (it becomes infinite when the defect-triangle density goes to zero), and for r ≲ ξ the spin correlations are essentially algebraic. The parameter K determines the rate of the algebraic part of the decay, and is nothing but the Luttinger parameter familiar from 1D fermionic systems [52]. That this should appear is unsurprising given the mapping between strings and spinless fermions (see Section B.2), and, since a repulsive interaction between strings maps onto a weakly attractive interaction between fermions, one expects K > 1 (for comparison, non-interacting fermions have K = 1).

In practice, simulations show that the reciprocal-space structure factor in the string-Luttinger-liquid region is dominated by peaks at q = q_string, but there is also spectral weight on the line of q values joining q_string and −q_string (see Fig. 7). In consequence it is better to fit the structure factor in reciprocal space than in real space, and Fourier transforming the asymptotic form given in Eq. 18 in the vicinity of q = q_string and for ξ_⊥ = ξ_∥ = ξ gives [53], where p = q − q_string and p = |p|, and where J_0(x) is the Bessel function of the first kind, Γ(x) is the Euler Gamma function, and the final function appearing is a hypergeometric function.
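For orientation, a correlation function with all the properties listed above (algebraic decay controlled by K, exponential cutoff beyond ξ_⊥ and ξ_∥, and reduction to the known r^(−1/2) decay of the defect-free, free-fermion case when K = 1) is, as an assumed form with our own exponent convention rather than the verbatim Eq. 18:

\[
\langle \sigma_{\mathbf 0}\, \sigma_{\mathbf r} \rangle \;\propto\; \frac{\cos(\mathbf{q}_{\mathrm{string}} \cdot \mathbf{r})}{r^{1/(2K)}}\; \exp\!\left( -\frac{|r_\perp|}{\xi_\perp} - \frac{|r_\parallel|}{\xi_\parallel} \right).
\]

Setting K = 1 and ξ_⊥, ξ_∥ → ∞ recovers the algebraic form of the constrained, isotropic model (Eq. 50 and Appendix C.4), consistent with the structure-factor divergence S(q_string + δq) ∝ |δq|^(−3/2) quoted there.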
The result of fitting the reciprocal-space form to simulations for δ = 0 and appropriate T is shown in Fig. 11, and it can be seen that the Luttinger parameter does indeed take values K > 1, and that the correlation length can be many multiples of the lattice spacing. A more precise determination of K and ξ would require simulations on larger clusters (for a numerical determination of K in a simpler model see Appendix E).

It is clear from the simulations of S(q) shown in Fig. 5 that the string-Luttinger-liquid regime does not survive all the way down to the phase transition when δ < δ_tri. Rather than being peaked at q = q_string, the low-temperature structure factor in the disordered region has spectral weight spread around the BZ boundary, and in particular weight starts to develop at q = q_stripe. As was shown in Ref. [11], this type of structure factor is associated with the formation of sizeable domains of stripe order, with neighbouring domains having stripes along different principal axes (see Fig. 12). The formation of large stripe-ordered domains suggests that within this regime the strings attract one another (see also Appendix E). It makes sense that the crossover from the string-Luttinger-liquid regime (high T, repulsive string-string interactions) to the domain-wall-network regime (low T, attractive string-string interactions) should occur at approximately T = T_tri, and this is consistent with the S(q) measurements (see Fig. 5).

A simple way to test that this is the case is to perform simulations in a highly restricted manifold of Ising configurations containing two strings, each of which winds the system. The average separation of the strings does indeed show a significant drop starting at T ≈ T_tri, indicating a shift from repulsive to attractive interactions (see Fig. 11). We find that the temperature at which this change occurs is essentially independent of δ in the relevant region (0 < δ < δ_tri), and therefore the crossover between the string-Luttinger-liquid and domain-wall-network regimes is approximately flat, as shown in Fig. 11. In terms of fermions, the domain-wall-network state can be thought of as a fluctuating, phase-separated state, with a loose analogy to the clustering of holes in superconductors [54,55].

At high temperatures the system forms a weakly-correlated paramagnetic regime. The correlations can still be described by Eq. 18, but the correlation length is comparable to the lattice spacing, and so the correlations are exponentially decaying at all length scales. While we have defined the crossover from the Luttinger-liquid regime to the weakly-correlated regime in terms of the density of defect triangles reaching 10% of its saturation value (see Fig. 6), this is roughly equivalent to defining a crossover in terms of the correlation length reducing to about 2 lattice spacings.

For δ ≳ 0.15 there is a direct transition from the stripe-ordered phase to a standard paramagnet, and as such the structure factor shows the usual features of a second-order transition, with spectral weight building up at the ordering vector as the transition is approached from above, and a diverging correlation length at the transition that results in the formation of a Bragg peak.

Triangular lattice antiferromagnets with general couplings

Here we take a step back and discuss the general features of TLIAF models with monotonically decreasing further-neighbour interactions.
We have argued that a good way to understand such models is in terms of the string degrees of freedom, which can be thought of either in their 2D classical incarnation or as the worldlines of spinless fermions in 1D. As such, we would like to determine which energy scales present in a given microscopic model dictate the behaviour of the strings, and therefore the form of the phase diagram and the physical observables. In general TLIAF models have many competing couplings, as is clearly true in the dipolar case. Our claim is that these can in most cases be distilled into four important energy scales (it is worth noting that other energy scales can become important if the further-neighbour interactions are not monotonically decreasing [10,11]).

The first and most important energy scale is the isotropic part of the nearest-neighbour interaction; that is, the part of the nearest-neighbour interaction that does not vary with the anisotropy (i.e. J_1A). This approximately sets the energy cost of creating defect triangles, and therefore "interesting", strongly-correlated physics only occurs in the region T < J_1A.

Next are two energy scales that combine the isotropic parts of the further-neighbour couplings. The first of these, J_fn, is a measure of the internal energy of a string and also sets the string-string interaction energy scale. As an example, for the TLIAF with J_1, J_2 and J_3 couplings it is given by J_fn = J_2 − 2J_3 [10]. This shows that even if the further-neighbour couplings are comparable with J_1, their combined effect can still be small due to frustration. The second energy scale is J_c, and this is related to the energy cost associated with a string changing direction; in the case of the J_1-J_2-J_3 model it is given by J_c = J_2. One thing that is important to note is that we always consider J_fn, J_c > 0, and if this is not the case different physics can be expected [11].

The final energy scale we consider is a measure of the anisotropy and is labelled J_an. For the dipolar TLIAF it clearly depends on δ, and a rough estimate is given by the difference in the nearest-neighbour interaction strengths, resulting in,

The energy scales J_1A, J_fn, J_c and J_an have been constructed with the string degrees of freedom in mind, and we now make the link more explicit. We concentrate in particular on J_1A ≫ J_fn, J_an, which is the requirement for the existence of spin-liquid behaviour. A particularly important quantity is the internal free energy per unit length of an isolated string, which depends on J_fn, J_c and J_an, and is approximately given by [10,27],

If string-string interactions are ignored, strings will be present in the system above a temperature T_string, and this is approximately given by,

While a number of approximations have been made in order to arrive at this simple expression, except in the extreme case of J_c ≪ J_fn, it matches well to Monte Carlo simulations of simple models [11].

In reality the strings are not isolated, and the transition temperature and the nature of the correlations in the spin liquid depend on the string-string interactions. These have two main contributions, the first of which is an entropically-driven repulsion associated with the no-crossing constraint, which in the fermion language maps onto the Pauli exclusion principle.
The second is an energetically-driven attraction due to the further-neighbour interactions, which is approximately measured by J_fn, and in the fermion language it is only this second contribution that counts as an interaction.

If the attractive interaction dominates in the vicinity of T = T_string, then an array of strings can lower its energy by binding together, and this binding energy results in a first-order transition with T_1 < T_string. As a result, the string density jumps at the transition from n_string ≈ 0 to a finite value. At temperatures just above the transition the string-string interactions remain attractive and the strings loosely bind together, forming a domain-wall-network state. The domain-wall-network state also relies on a positive J_c, which penalises changes in direction of the strings. The larger the values of J_c and J_fn relative to T, the larger the domain size will be. The dominance of attractive interactions in the string picture corresponds to the strong-coupling regime of the fermionic model.

When T ≫ J_fn the entropically-driven repulsion between strings dominates over the energetically-driven attraction. If T_string ≫ J_fn then the strings repel one another in the critical region, resulting in a second-order transition at T = T_string. As long as J_1A ≫ T_string, this transition is essentially of the Kasteleyn type, since it is driven by the sudden appearance of strings that mostly wind the system. This type of phase transition is quite different from the more usual Ising transition, which is driven by the proliferation of local defects. Above the second-order transition the string density, n_string, increases with increasing temperature, and, while the strings fluctuate, they on average form an equally-spaced, grill-like structure due to their mutual repulsion. In the fermionic language this corresponds to weak coupling, and a 2D classical equivalent of a Luttinger liquid forms. When the attractive and repulsive interactions balance, the phase transition is tricritical, and this occurs when J_an ≈ J_fn. Just above the transition the string (or fermion) dispersions are soft, resulting in large fluctuations in the string/fermion density.

The crossover between the spin liquid and the paramagnet occurs at T ≈ J_1A, at which temperature defect triangles become common. As a result, strings form short closed loops or longer floppy loops, which typically do not wind the system. If T_string ≈ J_1A, then the transition is in the Ising universality class and is driven by the proliferation and growth of local defects, resulting in a direct transition from the ordered phase to the weakly-correlated paramagnet. In the dipolar TLIAF this occurs for δ ≳ 0.15.

For the dipolar TLIAF it is possible to approximately determine the appropriate energy scales as J_fn ≈ 0.02J_1A, J_an ≈ 9δJ_1A/4 and J_c ≈ 0.08J_1A. Here J_c is determined as half the energy cost of an isolated corner, while J_fn is determined so as to be consistent both with T_string [Eq. 23] and with Monte Carlo, worm-update simulations, which are found to work best with approximately this value of J_2^worm − 2J_3^worm (see Section 2.1). Despite the slowly decreasing nature of the dipolar interaction with distance, it can be seen that frustration leads to a value of J_fn that is considerably smaller than J_1A, resulting in a significant window in which the spins are strongly correlated.
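The degree of cancellation can be illustrated with elementary arithmetic on the first few neighbour shells of the undistorted lattice. This is an illustration only: the quoted J_fn ≈ 0.02J_1A comes from the full dipolar sum and the worm-update matching, not from this truncation.

```python
import numpy as np

# Neighbour-shell distances on the undistorted triangular lattice, in units
# of the nearest-neighbour spacing, and the corresponding 1/r^3 couplings.
d = {"J1": 1.0, "J2": np.sqrt(3.0), "J3": 2.0, "J4": np.sqrt(7.0), "J5": 3.0}
J = {name: dist**-3 for name, dist in d.items()}

for name, val in J.items():
    print(f"{name}/J1 = {val:.3f}")   # J2/J1 ~ 0.19, J3/J1 ~ 0.13, ...

# Individually the further-neighbour couplings are sizeable fractions of J1,
# but the frustrated combination relevant for a string, e.g. the J1-J2-J3
# estimate J_fn = J2 - 2*J3 quoted in the text, largely cancels:
print(f"J2 - 2*J3 = {J['J2'] - 2 * J['J3']:+.3f}")   # ~ -0.06
```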
An obvious question raised by this analysis is how to further reduce the values of J_fn and J_c relative to J_1A, since this would increase the size of the spin-liquid region and give a cleaner realisation of the Kasteleyn transition. One possibility would be to find systems with local interactions such that J_1 ≫ J_2, J_3, ..., but we are not currently aware of any such systems. A more realistic option is to change the nature of the long-range interaction such that J_ij ∝ |r_i − r_j|^(−a), where a = 3 corresponds to the dipolar case. The possibility of changing a has been realised experimentally using trapped ions that naturally form a triangular lattice, and a was found to be tuneable in the range 0 < a < 3 [26]. Estimating the relationship between J_fn, J_c and a is complicated, due to the competition between the further-neighbour interactions, but it seems most likely that suppression of J_fn would require the further-neighbour interactions to fall off faster than in the dipolar case, and therefore a > 3.

Another possibility is to add a small transverse magnetic field. This would tend to act in opposition to the further-neighbour interactions, since quantum fluctuations favour nearest-neighbour-flippable configurations of Ising spins, while the stripe configuration is maximally unflippable. A transverse field would therefore be likely to reduce the critical temperature by suppressing J_fn.

Conclusion

We have shown that the dipolar TLIAF exhibits a variety of behaviours, with stripe-ordered, spin-liquid and paramagnetic phases. Furthermore, the nature of the spin-liquid region can be tuned by temperature between a "strongly-coupled" domain-wall network and a "weakly-coupled" string Luttinger liquid, where the strength of the coupling refers to a mapping to a 1D fermionic model. The addition of a small anisotropy allows the nature of the spin liquid to be further tuned, and this in turn changes the critical behaviour from first order to Kasteleyn-like, via a tricritical point with mixed tricritical-Ising and tricritical-Pokrovsky-Talapov characteristics.

We end with the hope that the physics we have described will soon be explored experimentally in artificial spin systems. In such a setting the physics of the isotropic dipolar TLIAF may be even richer, since it is likely that the dynamics will be too local to reliably find the stripe-ordered phase at low temperature, and instead the domain-wall-network state will likely freeze to form a glassy state.

Funding information

We thank the Swiss National Science Foundation and its SINERGIA network "Mott physics beyond the Heisenberg model" for financial support.

A Defect triangles in the dipolar TLIAF

In this appendix we construct a crude model for the density of defect triangles, n_def, in the low-temperature paramagnetic state of the dipolar TLIAF. The aim is to justify the simple functional form of n_def used to fit the Monte Carlo simulations in Fig. 4.

Defect triangles are constrained to occur in pairs, and can be considered to appear on top of microstates of the constrained manifold (configurations without defect triangles). We make the crude assumption that the energy cost of these defect triangles is only weakly dependent on position and has an average value E_def. In this approximation, the total energy due to the defect triangles is given by E_tot = N_def E_def, where N_def is the number of defect triangles and interactions between defect triangles have been ignored.
The number of ways N_def defect triangles can be placed in a system with N_plq triangular plaquettes is simply given by the binomial coefficient, and therefore the associated partition function is, where β = 1/T. The average number of defect triangles is given by,

In the limit T ≫ E_def/log N_plq the defect-triangle density is given by, in agreement with an exact calculation for the nearest-neighbour TLIAF. In the opposite limit of T ≪ E_def/log N_plq, and for T ≪ E_def, one finds,

While the above analysis is clearly highly simplified with respect to the true situation in the dipolar TLIAF, it suggests that at low temperature and on finite-size systems one should expect the density of defect triangles to obey the relationship, with A and E_def fitting parameters. The result of fitting this to Monte Carlo simulations is shown in Fig. 4, and we find E_def = 1.60J_1 in the isotropic, dipolar TLIAF. This can be compared with the nearest-neighbour TLIAF, where the energy cost per defect triangle is 2J_1.

B Mappings and winding numbers

There are a number of possible mappings from Ising configurations of the TLIAF to dimer and string representations. Here we review the mappings used in this article and the links between them. In order to do this it is useful to define two different manifolds of Ising configurations: the unconstrained manifold, which contains all possible configurations, and the constrained manifold, which contains only those configurations that are ground states of the nearest-neighbour TLIAF. The constrained manifold is clearly smaller than the unconstrained one, but is itself extensive [2].

B.1 Mapping to dimer coverings of the dual lattice

One useful mapping is from Ising configurations on the triangular lattice to dimer configurations on the dual honeycomb lattice [46]. We use this when constructing Monte Carlo worm updates [11]. The dual honeycomb lattice is constructed such that its bonds cut exactly one bond of the original triangular lattice (and vice versa), as shown in Fig. 13. If the triangular-lattice bond has two equivalent spins, then the honeycomb-lattice bond is covered by a dimer, while if the spins are inequivalent the honeycomb-lattice bond is left empty. The mapping between spin and dimer configurations is 2 → 1, since the dimer configuration is unaffected by a global flip of all the Ising spins.

Configurations within the constrained manifold (i.e. ground states of the nearest-neighbour TLIAF model) have one ferromagnetic bond per triangle, and therefore the number of dimers is fixed and equal to the number of triangular-lattice sites, N. It follows that sites on the honeycomb lattice respect the usual dimer-model constraint of being covered by exactly one dimer, as shown in Fig. 13(a). In the unconstrained manifold, for each pair of defect triangles there are two additional dimers, and therefore the number of dimers is not fixed. The honeycomb-lattice site at the centre of a defect triangle is covered by three dimers, and therefore does not respect the usual dimer-model constraint (see Fig. 13(b)).

For the unconstrained manifold of Ising configurations an alternative dimer mapping is possible, which is constructed such that the number of dimers is fixed and each site obeys the usual dimer-model constraint of being covered by exactly one dimer [56]. This involves extending the honeycomb lattice such that every original site is replaced by three new sites arranged in a triangle (see Fig. 13).
Dimers are then placed on the original honeycomb-lattice bonds in the same way as before, leaving a unique way of covering the remaining sites of the extended honeycomb lattice with dimers such that every site is covered exactly once.

B.2 Mapping to string configurations on the dual lattice

The main mapping used throughout the article is onto string configurations on the dual honeycomb lattice [37,38]. Here we show how this is related to the dimer mapping described above. This proceeds by comparing a given dimer configuration to a reference configuration in which all the dimers are parallel (see Fig. 14). Any honeycomb-lattice bond on which there is a discrepancy between the actual dimer configuration and the reference configuration is assigned to be part of a string. The chosen reference configuration consists of alternating horizontal stripes of aligned Ising spins, and this corresponds to all vertical bonds of the honeycomb lattice being covered by dimers (see Fig. 14(b)).

This choice of reference configuration results in a number of useful properties of the strings, the most important of which is that strings never touch or cross. For periodic boundary conditions there is the additional constraint that the number of strings crossing an arbitrary reference line that winds the system has to be even, meaning that the string parity is conserved. If the Ising configurations are restricted to be in the constrained manifold, the strings are directed, in the sense that they cannot turn back on themselves, and therefore have to wind the system, as shown in Fig. 14(c). In the unconstrained manifold, defect triangles act as sources and sinks of pairs of strings, resulting in (non-winding) closed loops of strings, as well as strings that turn back on themselves, as shown in Fig. 14(d).

B.3 Winding number sectors

In the presence of periodic boundary conditions, Ising configurations within the constrained manifold can be labelled by a pair of winding numbers. One way to define the winding numbers, W = (W_1, W_2), is to consider a pair of reference lines, as shown in Fig. 15. For each dimer crossing the horizontal part of the reference line the winding number is augmented by +1, and for each dimer crossing the angled part of the reference line it is augmented by −1. In the string picture, the winding number is simply given by, and it follows that the density of strings in the constrained manifold can be written as,

The string vacuum is therefore equivalent to the winding-number sector W = (L, L), and the sector W = (0, 0) corresponds to a density n_string = 2/3. The winding numbers split the constrained manifold into topological sectors, in the sense that it is not possible to move between configurations with different winding numbers by making a series of local spin flips. Instead it is necessary to flip clusters of spins that wind the system. In the unconstrained manifold (defect triangles allowed) W remains a useful quantity, but is no longer strictly a winding number, since the creation of a pair of defect triangles on the reference line is a local move that alters W. Nevertheless, it remains a useful concept when the defect-triangle density, n_def, is low.

In order to isolate and study some of the important features of general TLIAFs, we consider a number of simple models, in which the interactions are local and can be varied at will.
C J_1A-J_1B model with a constrained manifold

The subject of this appendix is the simplest of these models: the TLIAF with anisotropic nearest-neighbour interactions and a constraint forbidding defect triangles. The purpose of studying such a model is to understand the Pokrovsky-Talapov critical behaviour [39,40] and the correlations within the spin-liquid phase in a simple setting. In terms of the anisotropic, dipolar TLIAF studied in the main text, the ideas will be particularly relevant to the phase transition in the region δ_tri < δ ≲ 0.1 and to the correlations in the string-Luttinger-liquid phase for T ≪ J_1A. The solution of this model is already well known, due to the fact that it can be mapped to free fermions; it was studied by Wannier in the case of isotropic interactions [2], and can be transformed to the Kasteleyn model for anisotropic interactions [46].

The Hamiltonian is given by, where ⟨ij⟩_α denotes nearest-neighbour bonds in the α direction (see Fig. 1 for the definition of bond directions). An alternative parametrisation can be achieved by writing, and we consider the case δJ > 0 (equivalently J_1A < J_1B). We also impose the constraint that defect triangles are forbidden, which corresponds to taking the limit J_1A/δJ → ∞.

C.1 Dimer mapping

H_ABB [Eq. 32] can be mapped onto the Kasteleyn model of dimer coverings of the honeycomb lattice, which has an exact solution [46]. The mapping from Ising spins on the triangular lattice to dimers on the dual honeycomb lattice is described in Appendix B.1, and the energy of a dimer configuration is given by, where N_bond = 3N is the total number of bonds (this is the same for the triangular and dual honeycomb lattices) and N^α_dim is the number of dimers covering α-type bonds. Since defect triangles are forbidden, the total number of dimers is fixed as N^A_dim + N^B_dim + N^C_dim = N_bond/3. In the ground state N^A_dim = N_bond/3 and N^B_dim = N^C_dim = 0, and therefore the energy of a given configuration relative to the ground-state energy is,

It follows that the partition function can be written, up to a configuration-independent prefactor, as, where the sum is over all dimer coverings of the honeycomb lattice. It can be seen from Eq. 35 that the weight associated with covering a B or C bond with a dimer is given by z = e^(−2δJ/T).

C.2 Evaluation of the partition function

It has been known how to evaluate partition functions of the type Z_hon [Eq. 36] for many years [6,46]. Here we will briefly sketch the solution, since it will prove a useful basis from which to consider more complicated models. The starting point is to introduce a real, anticommuting Grassmann variable at each site of the honeycomb lattice [6,57,58]. These variables obey the usual rules: a_i a_j = −a_j a_i, ∫da_i = 0 and ∫da_i a_i = 1. Since the honeycomb lattice has a 2-site basis, it is useful to label the Grassmann variables as a and b on the two sublattices, and the partition function is therefore given by, where i labels unit cells and S_2[a, b] is the Kasteleyn action. Here K is a signed adjacency matrix, known as the Kasteleyn matrix [46], which contains the weights z [Eq. 37]. The reason for the appearance of det K rather than the more usual Pfaffian is that the matrix connects sites on different sublattices, but not those on the same sublattice.

To simplify the geometry the honeycomb lattice is distorted into the brick lattice, as shown in Fig. 16. Bonds are assigned a direction in accordance with the Kasteleyn theorem, which states that transition cycles should have an odd number of arrows in each sense [46].
The bond weights are assigned according to Eq. 36, with weight 1 on A bonds and weight z on B and C bonds. It follows that the action can be written as, where the coordinate system is defined by the unit vectors ê_x and ê_y, as shown in Fig. 16. The action is simply diagonalised by taking the Fourier transform, to give,

Finally the partition function can be evaluated as, where ε*_k = ε_{−k} has been used.

C.3 Physical properties

In order to better understand the physical properties of H_ABB [Eq. 32], it is useful to notice that the free energy, which is given by, is typically dominated by the minimal values of |ε_k|. The physical characteristics of the system are therefore determined predominantly by the "low-energy" part of |ε_k|. At low temperature the spectrum |ε_k| is gapped at all k, as shown in Fig. 16, and this corresponds to the stripe-ordered state. The gap closes at k = (π, 0) at the temperature T_K = 2δJ/log 2, and this corresponds to the temperature at which the free energy of strings goes to zero. At T = T_K strings condense into the system, and there is a Kasteleyn transition out of the stripe-ordered phase and into the spin liquid. This is second order due to the non-crossing constraint of the strings, which results in an entropically-driven string-string repulsion. The transition is in the Pokrovsky-Talapov universality class [39,40]. For all T > T_K the spectrum |ε_k| is gapless. The position of the gapless point moves from k = (π, 0) at T = T_K to k = (2π/3, −π/3) at T → ∞, and the position of this point is simply related to the string density, which smoothly increases with increasing temperature. We show below that the gaplessness of the spectrum is associated with algebraic decay of the spin-spin correlations [4,5].

In order to detect the transition between the stripe-ordered phase and the paramagnet, one possibility is to measure the local stripe order parameter m_stripe [Eq. 2]. However, this is somewhat unsatisfactory, since m_stripe = 1 in the ordered phase, and there is a discontinuous jump to m_stripe = 0 at T = T_K (see Fig. 17). Thus m_stripe does not show critical behaviour, and this is due to the fact that the transition is not driven by the proliferation of local defects, but by strings that wind the system.

A more useful physical quantity is the density of strings, n_string, and this does show critical behaviour. However, unlike a conventional order parameter, n_string = 0 in the ordered phase, and only takes a finite value for T > T_K. It can most simply be calculated in terms of dimer densities, according to, where the normalisation is such that 0 ≤ n_string ≤ 1. In the case of the constrained manifold, the total number of dimers is fixed (see Appendix C.1), and this leads to the simplified expression [37], where the second equality follows from the expression for Z_hon [Eq. 36].

When working in the constrained manifold, the string density can be calculated in a simple closed form. Substituting the expression for |ε_k| [Eq. 42] into n_string [Eq. 46], making the change of variables p_x = (k_x + k_y − π)/2 and p_y = (k_x − k_y − π)/2, and taking the thermodynamic limit results in,

n_string = (1/π²) ∫₀^π dp_x ∫₀^π dp_y [4z² cos²p_x − 2z cos p_x cos p_y] / [1 − 4z cos p_x cos p_y + 4z² cos²p_x]
         = (1/π²) ∫₀^π dp_x (u/2) (∂/∂u) ∫₀^π dp_y log[1 − 2u cos p_y + u²],

where u = 2z cos p_x. The integral over p_y is tabulated and given by [2], As a result one finds, and this is plotted in Fig. 17.
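Although the closed form (Eq. 49) is not reproduced above, the tabulated p_y integral, ∫₀^π log(1 − 2u cos p_y + u²) dp_y = 2π log max(1, |u|), reduces the double integral to n_string = (2/π) arccos(1/2z) for z > 1/2, and n_string = 0 otherwise. Together with the weight z = e^(−2δJ/T), this is easily evaluated; a minimal numerical sketch of our reconstruction, with δJ set to 1:

```python
import numpy as np
import matplotlib.pyplot as plt

dJ = 1.0                      # anisotropy delta-J, setting the energy scale
T_K = 2.0 * dJ / np.log(2.0)  # Kasteleyn temperature, where z(T_K) = 1/2

def n_string(T):
    """Closed-form string density n = (2/pi) arccos(1/2z), z = exp(-2 dJ/T)."""
    z = np.exp(-2.0 * dJ / T)
    arg = np.clip(1.0 / (2.0 * z), -1.0, 1.0)  # clip handles z <= 1/2 (n = 0)
    return (2.0 / np.pi) * np.arccos(arg)

T = np.linspace(0.5, 40.0, 600)
plt.plot(T / T_K, n_string(T))
plt.axvline(1.0, ls="--")       # square-root onset at T = T_K
plt.axhline(2.0 / 3.0, ls=":")  # saturation value as T -> infinity
plt.xlabel("T / T_K")
plt.ylabel("n_string")
plt.show()
```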
It can be seen that for T → ∞ (z → 1) the string density saturates at n_string = 2/3, while for T ≈ T_K it shows Pokrovsky-Talapov critical behaviour, with n_string ∝ (T − T_K)^β and β = 1/2.

C.4 Correlations

In order to better understand the correlations it is useful to study the spin-spin structure factor [defined in Eq. 4]. When doing this, care should be taken not to confuse q, which denotes a reciprocal vector in the Brillouin zone of the triangular lattice, with k, which lives in the Brillouin zone of the brick lattice. The structure factor can be calculated in the thermodynamic limit using the Grassmann path-integral approach, following the general method proposed in Ref. [59]. For δJ = 0 this reproduces the results of Refs. [4,5]. A detailed summary of the calculation is given in Appendix F, both for pairs of spins separated by an arbitrary number of A bonds (i.e. in the direction perpendicular to the strings, denoted r_x) and for separations orthogonal to A bonds (i.e. in the direction parallel to the strings, denoted r_y). In both cases the structure factor can be written as the determinant of a Toeplitz matrix whose dimension is proportional to the separation between the spins. Exact expressions can be written for the matrix elements, but we find it necessary to calculate the determinant numerically.

In the case of isotropic interactions the structure factor takes the asymptotic form [4,5], where q = (±2π/3, 2π/√3), as can be seen in Fig. 18. The algebraic decay of correlations shows that the T = 0 nearest-neighbour TLIAF is critical, and is on the verge of forming 3-sublattice order. The combination of long-range disorder and local correlation means that the system forms a classical spin liquid.

For δJ ≠ 0 and T > T_K the structure factor retains the long-distance functional form given in Eq. 50, but the wavevector becomes temperature dependent and is given by q = ±q_string(T) = (±πn_string(T), 2π/√3). This is clearly physically sensible, since the strings separate regions in which the Ising spins have opposite sign, and the oscillation of the correlation function in the direction perpendicular to the strings should therefore have a period given by the average string separation. Some typical examples are shown in Fig. 18. Also shown is S(q), which has pairs of algebraically diverging peaks at q = ±q_string(T). In the vicinity of these peaks S(q_string + δq) ∝ |δq|^(−3/2), in agreement with the algebraic decay of S(r) [Eq. 50]. It can be seen that the critical nature of the correlations is not broken by a non-zero δJ as long as T > T_K, and this is due to the constraint forbidding defect triangles.

Having stated that the structure factor has the functional form given in Eq. 50 in the long-distance limit, it is useful to be more precise about what counts as long distance. This has been considered in the closely related field of adsorption of a gas onto a substrate, where there exist domain walls with similar properties to the strings of the TLIAF [47]. A correlation length can be defined beyond which the long-distance algebraic correlation function given in Eq. 50 applies. In the direction perpendicular to the strings it is intuitively obvious that this is given by the average string-string separation, and therefore ζ_⊥ ∼ 1/n_string. In the direction parallel to the strings it can be argued that ζ_∥ ∼ 1/n_string² [47].
In the case of δJ = 0 (or T ≫ δJ), where n_string = 2/3, the correlation lengths ζ_⊥ and ζ_∥ are not much longer than a single lattice spacing, and the long-distance asymptotic form of the structure factor is recovered for spins separated by only a few lattice spacings. On the other hand, for δJ ≠ 0 and T ≈ T_K the string density is low, and the correlation lengths become very large, especially in the direction parallel to the strings. For T → T_K an expansion of n_string [Eq. 49] shows that the correlation lengths diverge as ζ_⊥ ∼ (T − T_K)^(−ν_⊥) and ζ_∥ ∼ (T − T_K)^(−ν_∥), with ν_⊥ = 1/2 and ν_∥ = 1.

For T < T_K the spin structure factor clearly does not follow the functional form of Eq. 50. Instead it is constant in real space, since the stripe state admits no fluctuations, and has Bragg peaks in reciprocal space. On cooling through T = T_K the pair of algebraically-diverging peaks in S(q) coalesce to form a single Bragg peak at q = q_stripe = (0, 2π/√3) (see Fig. 18).

C.5 Mapping to 1D quantum model

A slightly different perspective on the nearest-neighbour TLIAF is achieved by making a mapping onto a 1D quantum model of spinless fermions. The idea is that the strings can be viewed as the worldlines of spinless fermions, with the spatial direction parallel to the strings interpreted as imaginary time. The non-crossing constraint of the strings corresponds to the Pauli exclusion principle, and periodic boundary conditions in the 2D classical model enforce periodicity in imaginary time in the 1D quantum model. This type of mapping has been frequently used for related 2D classical models with non-crossing domain walls [39,40,41,42,43,44]. We show in Appendix G that H_ABB [Eq. 32] maps exactly onto the 1D quantum model, The mapping demonstrates that the Grassmann variables, a_i and b_i, describe coherent states of fermions/strings (for details see Appendix G).

H_1D [Eq. 51] is simply diagonalised by Fourier transform, giving, and the phase diagram of the fermion model can be matched to that of the nearest-neighbour TLIAF. For µ/2t < −1 there are no fermions in the system, and this is analogous to the stripe phase, in which there are no strings. At µ/2t = −1 there is a phase transition due to the minima of the fermion band touching zero, and for µ/2t > −1 the fermion density, n_f, is given by n_f = 1 − arccos[µ/2t]/π, which can be seen to be exactly equal to n_string [Eq. 49]. According to the mapping given in Eq. 51, µ and t are not independent parameters, and the maximum value of their ratio is given by µ/2t = 1/2. This corresponds to z = 1 (equivalently T → ∞), and at this point n_f = n_string = 2/3, as expected.

Other physical quantities, such as the heat capacity or the spin-spin structure factor, can be calculated within the 1D fermion picture, and in some cases this simplifies the procedure. It should be noted that if the 2D classical model has periodic boundary conditions, then the number of strings in the system is constrained to be even. The 1D quantum model is therefore restricted to the even-parity fermion subsector. If the 2D model is instead defined on a cylinder with the periodic direction parallel to the strings, then this restriction is lifted. One of the utilities of the 2D-classical to 1D-quantum mapping is that, for more complicated 2D models with longer-range interactions, it provides a good starting point for phenomenological theories.
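The 1D Hamiltonian itself (Eq. 51) is not reproduced above; a tight-binding form consistent with every statement in this subsection (band minimum touching zero at µ/2t = −1, empty band below it, and the quoted fermion density) is, as a reconstruction:

\[
H_{\mathrm{1D}} = -t \sum_i \left( c_i^\dagger c_{i+1} + c_{i+1}^\dagger c_i \right) - \mu \sum_i c_i^\dagger c_i, \qquad \epsilon_q = -\mu - 2t \cos q .
\]

One consistent identification of the parameters is µ/2t = 1 − 1/(2z²): this puts the transition (µ/2t = −1) at z = 1/2, the T → ∞ limit (z = 1) at the maximal value µ/2t = 1/2, and makes n_f = 1 − arccos(µ/2t)/π identical to n_string = (2/π) arccos(1/2z), as required.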
D J_1A-J_1B model with an unconstrained manifold

The next model we consider is the TLIAF with anisotropic nearest-neighbour interactions, but now with defect triangles allowed (i.e. in the unconstrained manifold). The point is to better understand the crossover between the spin liquid and the weakly-correlated paramagnet, and the crossover between Ising and Pokrovsky-Talapov criticality, both of which are also features of the dipolar TLIAF. The Hamiltonian H_ABB [Eq. 32] is the same as in Appendix C, except for the important difference that the manifold of Ising configurations is unconstrained, meaning that defect triangles are allowed.

D.1 Dimer mapping

H_ABB [Eq. 32] with an unconstrained manifold can be mapped onto a dimer model on the honeycomb lattice, but there is no longer a hardcore constraint, since vertices at the centre of defect triangles are covered by 3 dimers. Since the Grassmann path-integral approach to determining the partition function requires the dimers to obey a hardcore constraint, it is necessary to instead consider the mapping onto a dimer model on the extended honeycomb lattice (described in Appendix B.1 and Fig. 13). This type of mapping was suggested in a more general context in [56], and makes possible an exact evaluation of the partition function.

The dimers can be categorised as those covering A, B or C bonds of the original triangular lattice (see Fig. 1 for bond labelling) or "extra" dimers, covering the bonds introduced in the act of extending the honeycomb lattice. The total number of dimers is fixed and given by, where N_bond = 3N refers to the number of bonds of the original triangular lattice and N^ext_dim is the number of dimers on "extra" bonds. The energy of a given configuration relative to that of the ground state can be written as, and therefore, where the sum is over all dimer coverings of the extended honeycomb lattice, and the factor z_A^(2N) ensures that Z_exhon is equal to Z_hon [Eq. 36] in the limit where z_A → 0 and z_B → 0 with z finite (i.e. the condition for being in the constrained manifold of Ising configurations).

D.2 Evaluation of the partition function

The evaluation of the partition function proceeds as in Appendix C, with the main difference being that there are 6 rather than 2 lattice sites in the unit cell (see also Ref. [60] for a slightly different way of evaluating the partition function).

Figure 19: (a) The extended honeycomb lattice, with unit vectors defined as in Fig. 16, and these are taken to be unit length. (b) The spectrum |ε_k| [Eq. 60] along the path k = (k, k + π) for J_1A = 1 and J_1B = 1.5. For T < T_c (black) the spectrum is gapped at all k, and this corresponds to the stripe-ordered phase. At T = T_c (red) the gap closes at k = (π, 0) and an Ising transition occurs. For T > T_c (blue) the gap reopens. For T_c < T < T_Is the minimum of |ε_k| is at k = (π, 0). For T > T_Is the minimum migrates away from k = (π, 0). In the limit T → ∞ the spectrum becomes flat.

Each site of the extended honeycomb lattice is assigned a real Grassmann variable, as shown in Fig. 19, and these are labelled a^l_i and b^l_i, where i labels the unit cell and l ∈ {1, 2, 3} labels sites within the unit cell. Bond weights are z on B and C bonds, 1 on A bonds and z_A^(−1) on "extra" bonds, in accordance with Z_exhon [Eq. 55]. The partition function is given by, where,

This can be diagonalised by taking the Fourier transform of the Grassmann variables (see Eq. 40), resulting in,
40), resulting in, After rewriting the action as a matrix equation, taking the Pfaffian of the matrix and absorbing the z 2N A factor, one can show that, It can be seen that this reduces to the | k | of the constrained manifold [Eq. 42] when the limit z A → 0 and z B → 0 is taken such that z remains finite (equivalently J 1A → ∞ and J 1B → ∞ while δJ remains finite). It can also easily be checked that in the T → ∞ limit the entropy per site is S/N = log 2, as expected. D.3 Physical properties The spectrum | k | [Eq. 60], which is shown in Fig. 19, determines the physical properties of H ABB [Eq. 32]. As in the case of the constrained manifold, there is a phase transition between an ordered and a disordered phase. However, we label the transition temperature T c rather than T K , since it is not technically a Kasteleyn transition, as will be explained below. For T < T c the spectrum is gapped at all k, and this corresponds to the stripe-ordered phase. The main difference from the case of the constrained manifold is that local fluctuations involving the creation of pairs of defect triangles are possible, though, depending on the value of T c , they can be highly suppressed. There is a phase transition at T = T c associated with the closing of the gap in | k |, and this occurs at k = (π, 0). It can be seen from Eq. 60 that this requires, and the solution of this equation gives the critical temperature. For T > T c a gap reopens in | k |, and this signifies that correlations are exponential in the paramagnetic state [47]. In order to investigate the nature of the phase transition, it is natural to define a second temperature, T Is , such that in the temperature range T c < T < T Is the minimum of | k | is at k = (π, 0), while for T > T Is the minimum of | k | is at a temperature-dependent incommensurate wavevector. T Is can be determined from the equation, and it can be seen from Eq. 60 that for T c < T < T Is the gap is given by min | k | = |1 − 2z − (z B ) 2 |. After setting T = T c + δT , with δT ≪ T c and δT < T Is − T c , one can show that the gap goes as min | k | ∝ δT . Taking the correlation length to be inversely proportional to the gap, ξ ∝ 1/ min | k |, results in ξ ∝ δT −ν with ν = 1, and this is typical of a 2D Ising transition [43]. Therefore Ising critical exponents are realised in the temperature window T c < T < T Is . However, the caveat to this is that the Ising temperature window can be exponentially small, and this is the case for δJ ≪ J 1A . For T > T Is the minimum of | k | moves away from k = (π, 0) and the critical behaviour crosses over to that of the Pokrovsky-Talapov universality class for δT ≫ T Is − T c . Thus in the situation where δJ ≪ J 1A the transition is technically an Ising transition, but all practical measurements, whether in experiment or simulation, will show the features of a Kasteleyn transition. The values of T c and T Is are shown as a function of δJ/J 1A in Fig. 22, and it can be seen that the Ising temperature window only starts to be significant for δJ/J 1A ≳ 0.3. Further increases in T increase the size of the gap, and in the limit T → ∞ the spectrum, | k |, becomes completely flat, corresponding to an uncorrelated paramagnet where all configurations are equally likely. The density of strings, n string , can be calculated using Eq. 45 and the result is shown in Fig. 20. In the stripe-ordered phase n string is low, but not fixed to zero, as it is possible to create bound pairs of defect triangles, connected by a pair of strings.
Its value increases rapidly at T = T c , since the defect triangles unbind, and therefore strings can wind the system. On further increasing T the density of strings passes through n string = 2/3 (the value realised in the constrained manifold) before saturating at n string = 3/4. Since n string is not zero in the stripe phase, it is not, strictly speaking, an order parameter. However, it remains a useful indicator of where the transition occurs, since the derivative dn string /dT diverges logarithmically, as can be seen in Fig. 20. D.4 Ising to Pokrovsky-Talapov crossover The nearest-neighbour TLIAF provides a good setting in which to study the crossover from Ising to Pokrovsky-Talapov critical behaviour, since physical quantities can be calculated directly in the thermodynamic limit. For T c < T < T Is the system shows Ising critical exponents, while for T − T c T Is − T c it shows Pokrovsky-Talapov criticality. The crossover between these two limiting cases can be understood by studying the density of defect triangles, n def (see Ref. [51] for a similar analysis in terms of monopoles in spin ice), and we postulate a scaling ansatz, where z def = exp[−E def /T ], E def = 2J 1B = 2(J 1A + δJ) and g φ is an unknown function. The exponent α is the usual heat capacity exponent, and is expected to take the value α = 1/2 [48], while φ is the crossover exponent. By calculating n def in the thermodynamic limit and performing scaling according to Eq. 63 we find a convincing data collapse for φ = 1, as shown in Fig. 21. D.5 Phase diagram and correlations The phase diagram for H ABB [Eq. 32] in the unconstrained manifold can be calculated exactly, and is shown in Fig. 22. The nature of the correlations can be explored via the spin structure factor [Eq. 4], and this is shown in the same figure for a representative set of parameters. The phase diagram shows three regions, a stripe-ordered phase, a strongly-correlated spinliquid region and a weakly correlated paramagnet. The stripe-ordered phase is separated from the disordered region by a phase transition at T c [Eq. 61], while we take the crossover between the spin-liquid and paramagnetic regions to occur when the density of defect triangles, n def reaches 10% of its saturation value (i.e. n def = 0.025). As δJ is increased, the transition temperature T c increases faster than the crossover temperature, and therefore the spin-liquid region shrinks. The nature of the correlations in the disordered phase changes significantly with varying T and δJ, and this can be seen from studying the structure factor [Eq. 4]. S(r) can be calculated in the thermodynamic limit via the Grassmann path integral approach (see Appendix F), and some examples are shown in Fig. 22. Also shown is S(q), which for simplicity is calculated using Monte Carlo simulation. We make the ansatz that in the disordered regions S(r) takes the long-distance asymptotic form [5] (see Appendix F), where in the case of δJ = 0 the correlation length perpendicular to the strings, ξ ⊥ , can be different from that parallel to the strings, ξ . This is found to give good fits to the calculated values of S(r) after taking into account the definition of long distance given in Appendix C.4. In the spin-liquid region the correlation length is considerably larger than the lattice spacing, and the system approximately realises the algebraically decaying correlation function studied in Appendix C.4 for the constrained manifold. 
In particular, for δJ = 0 and T ≪ J 1A the correlation length diverges as ξ ⊥ = ξ ∥ ∝ exp[2J 1A /T ] [53]. At the crossover to the paramagnetic region, the correlation length is approximately ξ ⊥ ∼ 5, with ξ ∥ ≥ ξ ⊥ . Since the density of defect triangles is by definition low within the spin-liquid region, most of the strings wind the system, and therefore the relationship q ≈ ±q string (T ) = (±πn string (T ), 2π/ √ 3) holds to a good approximation. In the paramagnetic region the correlation lengths become comparable with the lattice spacing, and the structure factor has a very different form to the algebraic decay found for the constrained manifold. In this region the strings mostly form short closed loops, and therefore the relationship between q and n string breaks down. For δJ/J 1A ≳ 0.5 there is a direct transition from the stripe-ordered phase to the paramagnet. In the Ising critical region close to the transition the correlation function shows the usual 2D Ising scaling and is peaked in reciprocal space at q stripe . In the stripe-ordered phase the asymptotic form of S(r) given in Eq. 64 is no longer relevant, and Bragg peaks form in S(q) at the ordering vector q stripe = (0, 2π/ √ 3). Fluctuations around the ground state are not strictly forbidden, but are rare unless T ∼ J 1A , which is only possible for large anisotropies. H ABB [Eq. 32] in the unconstrained manifold can be exactly mapped onto a 1D quantum model of spinless fermions, as was found to be the case for the constrained manifold in Appendix C.5. The main difference is that in the unconstrained manifold defect triangles act as sources and sinks of pairs of strings. In consequence, pair creation and annihilation terms appear in the 1D quantum model. D.6 Mapping to 1D quantum model Following a similar logic to that of Appendix G, there is an exact mapping of H ABB [Eq. 32] onto a 1D quantum model of spinless fermions, with parameters that depend on the temperature of the classical model according to Eq. 66; the evolution of these parameters is shown in Fig. 23. Physical properties of the classical TLIAF can be calculated directly from the quantum model. For example, the classical quantity n string [Eq. 45] is equal to the fermion density [43].

Figure 23: The parameters of the 1D quantum model depend on those of the classical model according to Eq. 66, and the relationship is shown for J 1B /J 1A = 1.5. As T → 0, t → 0, µ → −1, ∆ → 0, µ/2t → −∞ and ∆/t → 0. The phase transition occurs when µ/2t = −1. In the limit T → ∞, t → 2, µ → 0, ∆ → 2, µ/2t → 0 and ∆/t → 1.

E J 1A -J 1B -J 2 model with a constrained manifold The last simplified model we study consists of a TLIAF with first and second-neighbour interactions and a constraint forbidding defect triangles. The motivation is that this is the simplest form of further-neighbour interactions, and can be used to study a number of features of the more general TLIAF in a simplified setting. In particular we will consider the crossover of the phase transition into the stripe-ordered phase from first to second order via a Pokrovsky-Talapov tricritical point, the string Luttinger liquid (as opposed to the free-fermion spin liquid studied in Appendix C) and its crossover into a domain-wall network state. Since the further-neighbour interactions destroy the mapping onto a free-fermion model, we rely on a combination of Monte Carlo and perturbation theory. The Hamiltonian is given by, where the second-neighbour bonds are labelled ij 2 and we consider the constrained manifold of Ising configurations (i.e. no defect triangles).
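For orientation, a form of H ABB2 consistent with the verbal description above (couplings J 1A on A bonds, J 1B on B and C bonds, and J 2 on second-neighbour bonds, with Ising variables σ i = ±1) is sketched below in LaTeX; the precise normalisation and any constant offsets should be taken from Eq. 69 itself rather than from this reconstruction.

\mathcal{H}_{ABB2} = J_{1A}\sum_{\langle ij\rangle_A}\sigma_i\sigma_j
                   + J_{1B}\sum_{\langle ij\rangle_{B,C}}\sigma_i\sigma_j
                   + J_{2}\sum_{\langle ij\rangle_2}\sigma_i\sigma_j ,
\qquad \sigma_i = \pm 1 .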
E.1 General considerations Before turning to detailed calculations, it is worth considering some of the qualitative features of H ABB2 [Eq. 69], both in terms of the nature of the phase transition and of the correlations in the spin-liquid phase (there is no paramagnetic region due being in the constrained manifold). The second-neighbour interaction, J 2 , and the nearest-neighbour anisotropy, δJ, act in concert with one another, in the sense that they both favour a stripe-ordered ground state. However, they act in opposition in the sense that J 2 favours a first-order phase transition, while δJ favours a second-order transition. This can be seen by comparing the δJ = 0 case to that with δJ J 2 . At δJ = 0 the J 2 interaction selects a 6-fold degenerate, stripe-ordered ground state from the manifold of constrained Ising configurations [10]. The J 1 -J 2 TLIAF has been extensively studied, both analytically and by Monte Carlo simulation, and it is known that there is a first-order phase transition into the stripe phase [7,8,10,9,11]. In the limit of J 1 → ∞ the transition occurs at T 1 = 6.39J 2 [11]. Therefore we expect that, in the region where J 2 δJ, the transition between the paramagnet and stripe-ordered state will be first order. In contrast, the first-neighbour anisotropy, δJ, favours a 2-fold degenerate stripe-ordered ground state, with stripes running parallel to A bonds (see Fig. 1 for the definition of bond directions). For δJ J 2 the J 2 interaction is irrelevant, and to a good approximation the analysis of Appendix C applies, indicating that the transition is second order. One focus here will be to study the crossover between the first and second-order phase transitions, which occurs when J 2 and δJ are comparable in magnitude. When the transition is second order it is driven by the creation of isolated strings that wind the system (in [10] this is discussed in terms of the closely related concept of double domain walls). In order for a second-order transition to occur it is necessary that there is a repulsive interaction between these strings, and this repulsion is entropically driven and associated with the no-crossing constraint [39,40]. We show below that further-neighbour interactions result in an energetically-driven attraction between the strings, and that the second to first order crossover occurs when this balances the entropically-driven repulsion. The free energy of an isolated string can be calculated exactly, and this can be used to find the exact transition temperature in the case of a second-order transition. Relative to the ground-state energy, strings cost an energy per unit length of E string = 2δJ + 4J 2 and corners, at which the string changes direction, have an energy cost E c = 2J 2 [10,11]. It follows that the free energy per unit length of an isolated string relative to the stripe ground state is given by [10], The second order transition temperature, T K , can be calculated from solving the equation f string (T K ) = 0, and in the case of J 2 = 0 it can be seen that this reduces to Eq. 44. The behaviour in the spin-liquid state should be closely related to the nature of the phase transition, since it is also sensitive to whether the interaction between strings is attractive or repulsive. 
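The condition f string (T K ) = 0 is easy to solve numerically. Since the entropic part of f string is not reproduced above, the sketch below assumes the simple form f string (T ) = 2δJ + 4J 2 − T ln(1 + e^{−2J 2 /T}), in which each step of the string either continues straight or pays the corner cost E c = 2J 2 ; this form reduces to 2δJ − T ln 2 when J 2 = 0 and should be regarded as an assumption rather than the paper's exact expression.

import numpy as np
from scipy.optimize import brentq

def f_string(T, dJ, J2):
    # Assumed free energy per unit length of an isolated string (see lead-in):
    # energy cost 2*dJ + 4*J2, entropy from straight-versus-corner steps.
    return 2.0 * dJ + 4.0 * J2 - T * np.log(1.0 + np.exp(-2.0 * J2 / T))

def T_second_order(dJ, J2):
    # Second-order transition temperature from f_string(T) = 0.
    return brentq(lambda T: f_string(T, dJ, J2), 1e-6, 1e3)

print(T_second_order(1.0, 0.0))   # ~2.885 = 2*dJ/ln 2, the J2 = 0 (Kasteleyn) limit
print(T_second_order(0.7, 1.0))   # ~9.1, close to the tricritical temperature quoted in Appendix E.3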
In the introduction it was argued that the associated fermionic model can be weakly or strongly coupled, and it makes intuitive sense that weakly-coupled fermions correspond to repulsive string-string interactions, while strongly-coupled fermions correspond to attractive string-string interactions. In the weak-coupling case it can be expected that the spin liquid realises a 2D classical equivalent of a Luttinger liquid. In the strong coupling case it is less clear what to expect a priori. The crossover between weak and strong coupling is controlled by the ratio J 2 /T , with weak coupling for T J 2 . E.2 Diagrammatic perturbation theory The first approximate method we use to better understand H ABB2 [Eq. 69] in the constrained manifold is that of perturbation theory around the high-temperature limit. This approach cannot hope to compete with Monte Carlo simulations in terms of quantitative measures of, for example, the transition temperature, but does provide useful physical insights that are not apparent in Monte Carlo. While the approach is well motivated in the "weak-coupling" regime, we find that it also gives some clues as to how the system crosses over to the "strongcoupling" regime and to the appearance of a first-order phase transition. and this can be done using a Grassmann path integral approach, following in spirit Ref. [61]. The first step is to map the Ising model, H ABB2 [Eq. 69], onto a dimer model on the dual honeycomb lattice. For the nearest-neighbour interactions the mapping is the same as in Appendix C. The second-neighbour coupling maps onto dimer-dimer interactions, where dimers on the same hexagon interact if they are separated by one unfilled bond (see Fig. 24). It follows that the partition function can be written as, where N 2 is the number of dimer-dimer interactions (see Fig. 24). The mapping of Z hon2 [Eq. 72] onto a Grassmann path integral does not result in a purely quadratic action, and therefore it is not exactly solvable by this method. Instead the mapping results in an action including terms with 2, 4, 6 . . . 2N Grassmann variables, and one can write, where the quadratic term S 2 [a, b] is given in Eq. 39 and contains products of 2 Grassmann variables, the quartic term, S 4 [a, b], contains products of 4 Grassmann variables and similarly for higher order terms, with 2N the number of honeycomb lattice sites. For a particular dimer configuration, one can ask which terms in the expansion of the action are required to correctly assign the weight. The answer depends on the size of the largest cluster of dimers connected by pairwise interactions (see Fig. 24). If the largest cluster contains n dimers, then it is necessary to consider the terms S 2m [a, b] with m ≤ n. Since clusters that include a sizeable fraction of all the dimers are common, many dimer configurations require one to consider terms up to n ∼ N . For an infinite lattice it is necessary to truncate the expansion of the action in order to be able to perform calculations. This can be done systematically by considering z 2 − 1 to be a small parameter, which is valid for T 2J 2 . The reason that this is a useful expansion parameter is due to the fact that S 2n [a, b] has a lowest order contribution proportional to (z 2 − 1) n−1 . Thus for a chosen value of n, it is only necessary to consider terms in the action up to S 2n [a, b]. A simple worked example on a finite-size lattice is given in Appendix H to show how this type of expansion works in detail. 
Here we will consider n = 2, and therefore only retain the S 2 [a, b] and S 4 [a, b] terms in the action, thus working at first order in the small parameter |z 2 − 1|. The quartic term in the action can be determined by observing that for a 2-site unit cell there are 6 terms containing 4 Grassmann variables, and these are shown schematically in Fig. 25. Thus one finds, and taking the Fourier transform using Eq. 40 results in, where, is the interaction vertex symmetrised over the pairs {k 1 , k 3 } and {k 2 , k 4 } and, The truncated action, which is given by the sum of S 2 [a, b] [Eq. 41] and S 4 [a, b] [Eq. 75], has a quartic interaction term, and therefore it is not possible to perform the path integral exactly. Instead a perturbative diagrammatic approach can be used, as is standard in quantum field theory [61]. It is important to note that the expansion order of the perturbation theory is set by the truncation of the action, and only diagrams consistent with this order should be considered. The first step in the construction of a diagrammatic perturbation theory is the calculation of the free Green's function, and this is given by, This can be used to perturbatively construct the interacting Green's function, which is given by, where ˜ k = k + Σ k and Σ k is the self energy. In the case we are considering, the anomalous Green's functions a k 1 a k 2 and b k 1 b k 2 vanish at all orders of perturbation theory, and this is related to the absence of defect triangles. In consequence the effective quadratic action takes the simple form, and it follows that the partition function can be written as, In order to be consistent with the expansion of the action to first order in the small parameter |z 2 − 1|, we consider the Hartree-Fock diagrams, and therefore approximate the self energy as, where, At this level of approximation the partition function is given by,

Figure 26: On the right-hand plot the spin-liquid is split into a string Luttinger liquid for T > T tri = 9.14J 2 and a domain-wall network for T < T tri (separated by the dashed yellow line). (Middle) Structure factor S(q) calculated by Monte Carlo simulation of an L = 72 hexagonal cluster (letters correspond to those on the phase diagram). (Bottom) Cuts through both S(q) and S(r). (a) For δJ = 0 and close to the first-order phase transition, S(q) has significant spectral weight around the perimeter of the triangular-lattice Brillouin zone, as is typical of a domain-wall network. (b) On increasing the temperature, spectral weight rapidly accumulates at q = (±2π/3, 2π/ √ 3), as is typical of a string Luttinger liquid. In the whole string-Luttinger-liquid region the asymptotic form of S(r) follows Eq. 89 with a parameter-dependent Luttinger parameter, K ≥ 1. (c) At temperatures just above the Pokrovsky-Talapov tricritical point there is a near-degeneracy between string sectors, and the structure factor therefore shows extended spectral weight in the q x direction. (d) Further increasing the temperature breaks this quasi-degeneracy and sharp peaks form at q string (T ) = (±πn string , 2π/ √ 3). (e) At temperatures just above the second-order transition the structure factor is sharply peaked at q string (T ). (f) For T ≫ J 2 the behaviour of the nearest-neighbour TLIAF in the constrained manifold is recovered, with K = 1.

The effective action S 2,eff [a, b] [Eq. 80] can be used to study the physical properties of the system, as in Appendix C.
In particular we focus on the crossover between a second and first-order phase transition, which corresponds to the crossover from the weak to the strong coupling regimes (in fermionic language). Information about the nature of the phase transition can be extracted from the spectrum, HF k [Eq. 84], and it can be seen in Fig. 25 that this undergoes a change of structure at δJ/J 2 = 0.56. For δJ/J 2 > 0.56 the spectrum, HF k [Eq. 84], shows the characteristic features of a second-order transition. In the disordered phase it has a gapless point at a temperature-dependent and incommensurate wavevector. As the temperature is lowered towards the critical point the gapless point migrates towards the wavevector k = (π, 0), and the critical temperature can be found from solving the equation HF (π,0) (T ) = 0. Below the transition the spectrum is gapped at all wavevectors, and the minimum is at k = (π, 0). The second-order transition temperature is known exactly from Eq. 70, and Fig. 26 shows a comparison between the exact value and the estimate from first-order perturbation theory. First-order perturbation theory seems to work well even approaching the tricritical point, where T tri ≈ 9J 2 (the tricritical temperature will be determined more accurately by Monte Carlo simulations in the next section). At this temperature the small parameter is 1 − z 2 (T tri ) ≈ 0.2, and so the perturbation expansion is reasonably well controlled. The discrepancy in the critical temperature between zeroth and first-order perturbation theory can be seen from expanding the exact second-order transition temperature as, where it can be seen that for δJ ≈ J 2 the J 2 /δJ term is larger than the leading term. In fact further expansion of the transition temperature reveals that at δJ ≈ J 2 higher order terms are not small, but do cancel one another. However, it is important to remember that J 2 /δJ is not the expansion parameter. Exactly at the critical temperature the spectrum has qualitatively the same behaviour as the J 2 = 0 case (see Appendix C) close to the gapless point. Along the path k = (k, k + π) the spectrum grows as (k − π) 2 . This behaviour is typical of a Pokrovsky-Talapov transition [39,40]. At δJ/J 2 = 0.56 the spectrum shows a change of character. The coefficient in front of the quadratic term goes to zero, and the spectrum grows as (k − π) 4 around the gapless point. We refer to this point as a Pokrovsky-Talapov tricritical point, as the critical exponents are different from the standard ones of the Kasteleyn transition. This change of behaviour is not just an artifact of first-order perturbation theory, since its effects can be observed in Monte Carlo simulation (albeit at δJ/J 2 = 0.7; see Appendix E.3).

Figure 27: The spectrum HF k [Eq. 84] at δJ = 0 and for varying temperature (T /J 2 = 6.3, 6.72 and 8). The path through 2D reciprocal space is parametrised by k = (k, k + π). In the paramagnet (red) there is a single gapless point in the region k > 0 and this occurs at an incommensurate wavevector. At T /J 2 = 6.72 (orange) the gap at k = (π, 0) closes, but there remains a gapless point at an incommensurate wavevector. At lower T (blue) the gapless points approach one another, and the spectrum is very flat in their vicinity. While it is clear that the perturbative approach has broken down at such a small value of δJ/J 2 , the results are suggestive that lines of zeros appear in the spectrum, and this would be consistent with a first-order phase transition.
For δJ/J 2 < 0.56 the perturbative approach breaks down, but can be used to find some clues as to the true situation. In the paramagnet there is a gapless point at an incommensurate wavevector, as shown for the case of δJ = 0 in Fig. 27. As the temperature is reduced this migrates towards k = (π, 0), as is the case for a second-order transition. However, before the gapless point reaches k = (π, 0) the gap at k = (π, 0) closes, resulting in a pair of gapless points. This situation is not physical, and does not obviously correspond to the expected first-order phase transition. However, it can be seen that the spectrum is very flat between the two gapless points. We suggest that in reality gapless lines should develop in this region, and this would correspond to a first-order phase transition. This type of behaviour can never be exactly recovered using a perturbative approach, since a gapless line relies on the correct relationship between all coefficents in the expansion of the free energy. E.3 Phase diagram determined from Monte Carlo simulations As a complement to the perturbation theory approach, we also study H ABB2 [Eq. 69] using Monte Carlo simulation. The simulations are carried out using a worm algorithm very similar to that presented in Ref. [11]. This works in the dimer representation (see Appendix B.1), and creates loops of alternating dimer-filled and empty bonds, which are then flipped, resulting in the reversal of all the Ising spins contained within the loop. The loop creation is carefully controlled such that detailed balance is maintained, and the absence of rejection results in an efficient algorithm. Hexagonal shaped clusters with periodic boundary conditions are used, containing N = 3L 2 Ising spins, where L measures the length of one side. Simulations are performed using system sizes from L = 24 up to L = 192. The phase diagram of H ABB2 [Eq. 69], as determined by Monte Carlo simulation, is shown in Fig. 26. The phase transitions can be located either from measuring the triangular average of the winding number and associated susceptibility, defined in Eq. 3, or by measuring the heat capacity, and the results are consistent. In the region where the phase transition is second order, the critical temperature is found from finite-size scaling analysis. We use the standard relation for a Kasteleyn transition [62], where c is a constant, L is the linear dimension of the system and ν = 1 is the critical exponent of the correlation length in the direction parallel to the double domain walls, below which the algebraic scaling of spin correlations breaks down [40,48,47]. We consider the parallel correlation length, ν , rather than the perpendicular correlation length, ν ⊥ , due to the anisotropy of the system which results in ν ⊥ = 1/2 = ν . Since the clusters used in the simulations are hexagonal in shape, and therefore isotropic, the growth of correlations parallel to the strings dominates the finite size effects. The exact second-order transition temperature is known from solving Eq. 70, and it can be seen in Fig. 26 that the finite-size-scaled Monte Carlo results are in good agreement with this. As an example of such data collapse one can consider H ABB2 [Eq. 69] in the constrained manifold. We set δJ/J 2 = 1.5, since this is far enough from the tricritical point that deviations from the Pokrovsky-Talapov universality class are expected to be negligible. The results are shown in Fig. 
28, and a convincing data collapse is found for β = 0.47 ± 0.04 and ν = 1.05 ± 0.09, which is consistent with the expected β = 1/2 and ν = 1. The line of second order transitions ends at a Pokrovsky-Talapov tricritical point, which is found to be at δJ/J 2 = 0.7 and T = T tri = 9.14J 2 . In order to test for the presence of a Pokrovsky-Talapov tricritical point in Monte Carlo simulations one can use the scaling hypothesis, If the data for different system sizes can be collapsed using β = 1/4, then this provides good evidence of the presence of a Pokrovsky-Talapov tricritical point. We apply this scaling hypothesis to H ABB2 [Eq. 69] in the constrained manifold in Fig. 29, and find that for δJ = 0.7 the data can be convincingly collapsed using β = 0.21 ± 0.04 and ν = 0.91 ± 0.25. The findings from Monte Carlo can be seen to be in reasonable agreement with first order perturbation theory (see Appendix E.2), where a Pokrovsky-Talapov tricritical point was found at δJ/J 2 = 0.56. For δJ/J 2 < 0.7 the transition is first order, and it is typically possible to simulate large-enough systems that the finite-size effects are small. In consequence the transition temperatures plotted in Fig. 26 are taken from the largest simulated systems. For 0.5 < δJ/J 2 < 0.7 this is L = 192, while for δJ/J 2 < 0.5 it is sufficient to consider L = 48. In order to gain physical insight into the crossover between a second and first-order phase transition, which in the spin-liquid region corresponds to the crossover between weak and strong coupling, we perform Monte Carlo simulations in a reduced manifold of states. The number of strings is fixed to be two, and the idea is to study the interaction between a pair of strings. Monte Carlo simulations are performed on a square cluster with periodic boundary conditions, linear dimension L and total number of sites L 2 . In the 2-string manifold, allowed Ising configurations are distinguished by their J 2 energy, but all have the same energy in terms of δJ, since there are a fixed number of dimers occupying B and C bonds. In consequence it is not necessary to vary δJ/J 2 , but only T /J 2 . This shows that the string-string interactions are independent of δJ, and therefore the temperature at which weak coupling crosses over to strong coupling is also δJ independent. For each considered temperature we measure the average separation between the strings, taking into account the periodic boundary conditions, and this is given by, where x 1 and x 2 are the positions of the strings along the x axis at a given height. It can be seen in Fig. 30 that as the temperature is reduced there is a change in ∆x starting at about T = T tri . For T > T tri the strings repel one another, and ∆x/(L/2) ≈ 1/2. This repulsion is entropically driven, and is due to the no-crossing constraint obeyed by the strings, which reduces the available fluctuations of a string if it is in close proximity to another string. This type of pairwise repulsion is crucial for the existence of a second-order phase transition out of the stripe phase, since it limits the number of strings that are condensed into the system when the free energy of an isolated, f string [Eq. 70], goes to zero. For T < T tri the pair of strings start to approach one another, showing that the energeticallydriven attractive interaction starts to dominate over the repulsive interaction. 
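The defining expression for ∆x is not reproduced above; one natural implementation, sketched below, takes the minimum-image separation of the two strings at each height of the lattice and averages over heights (and, in a full simulation, over Monte Carlo samples). The function name and the toy trajectories are illustrative only.

import numpy as np

def average_string_separation(x1, x2, L):
    # x1, x2: arrays of string x-positions, one entry per height of the cluster.
    # L: linear dimension (periodic in the x direction).
    d = np.abs(np.asarray(x1) - np.asarray(x2))
    d = np.minimum(d, L - d)          # minimum-image convention for periodic boundaries
    return d.mean()

L = 48
x1 = np.full(L, 10)                    # two straight strings, a quarter of the system apart
x2 = np.full(L, 22)
print(average_string_separation(x1, x2, L) / (L / 2))   # 0.5, as quoted for T > T_tri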
The strings gain some binding free energy by being, on average, proximate to one another, and this is consistent with the crossover from a second to a first-order phase transition at T = T tri seen in the full Monte-Carlo simulations (see Fig. 26). The lower the temperature the more tightly the strings bind, suggesting that the transition should become more first-order as the temperature is decreased, and this is also consistent with the full simulations. Since the string-string interactions are independent of δJ, the spin liquid region should have attractive string-string interactions in the temperature window T 1 < T < T tri , and we will discuss the implications of this in the next section. E.5 Mapping to 1D quantum model and correlations The nature of the correlations in H ABB2 [Eq. 69] can be used to understand the behaviour of the spin liquid, and can be determined by combining Monte Carlo simulation of the spin structure factor with insights from fermionic mappings. It is useful to first consider at a qualitative level how the mapping to a 1D quantum model of spinless fermions is altered by the further-neighbour interactions (see Appendix C.5 and Appendix G for the nearest-neighbour case). At the level of an isolated string the J 2 interaction both increases the internal energy, and adds an energy penalty to "corners" where the string changes direction [10,11]. In the fermion model this alters the values of µ and t and adds a history dependence to the motion of the fermion, such that the passage from the imaginary timestep τ to τ + ∆τ depends not only on the fermion configuration at τ but also on the configuration at τ − ∆τ . A second effect of the J 2 coupling is to drive an attractive interaction between strings. When strings neighbour one another their J 2 energy is reduced, and therefore the fermionic model also has an attractive interaction of the form V (z 2 )c † i c i c † i+1 c i+1 . In the string picture this attractive interaction is energetically-driven and competes with the entropically-driven repulsive interaction arising from the string non-crossing contraint. In the fermionic language the entropic repulsion maps onto the Pauli exclusion principle, which is a property of free fermions, and therefore the fermionic model is always attractive. One advantage of mapping onto a fermion model is that it is known that fermions with weak attractive interactions form a Luttinger liquid, with Luttinger parameter K > 1 (K = 1 for free fermions) [52]. We therefore make the ansatz that the spin structure factor in the 2D classical model takes the asymptotic form [48,52], where K > 1. This corresponds to a reciprocal space structure factor with algebraically sharp peaks at q = q string . The asymptotic form given in Eq. 89 can be tested against Monte Carlo simulations, and we find that it gives a good fit to the simulations for T T tri , and some examples are shown in Fig. 26. The value of K can be extracted from the fits to the simulations, and the result of doing this for T > T tri and δJ = 0 is shown in Fig. 31. It can be seen that close to T = T tri the Luttinger parameter, K, becomes significantly different from the free fermion case of K = 1, while in the limit T /J 2 → ∞ the free fermion case is recovered, corresponding to 1/ |r| spin correlations (see Eq. 50). As a result of these findings we label the region of the spin liquid with T > T tri as a string Luttinger liquid. At T = T tri the entropic repulsion and energetic attraction between strings becomes comparable (see Fig. 
30) and the strings start to bind together. At this temperature the distribution of spectral weight in the structure factor starts to rearrange itself such that S(q) is no longer dominated by a single q value, and Eq. 89 is inapplicable. Instead the weight is distributed around the perimeter of the triangular-lattice Brillouin zone (see Fig. 26), and this is typical of a domain-wall network (see supplementary material of Ref. [11]). Neighbouring parallel strings form domains in which Ising stripes are parallel to either B or C bonds, while domains with stripes parallel to A bonds correspond to an absence of strings, and an example of this is shown in Fig. 31. We find that the spin-liquid region of H ABB2 [Eq. 69] is best described as a domain-wall network in the region T 1 < T < T tri , as shown in Fig. 26. The domain-wall network state can be thought of as being a fluctuating, phase-separated state, with a loose analogy to the clustering of holes in superconductors [54,55]. The more the energetically-driven attraction between strings dominates over the entropic repulsion, the more tightly bound the strings and the larger the average domain size. In the case of H ABB2 [Eq. 69] and for δJ = 0 a first-order phase transition into the stripe phase occurs while the average domain size is relatively small. The addition of a third-neighbour interaction with 0 < J 3 < J 2 /2 suppresses the transition temperature, and therefore allows the average domain size to become larger since the attractive interaction becomes more important at low temperature [10,11]. Increasing δJ causes domains with stripes parallel to A bonds to grow, which corresponds to decreasing the string density, n string . At the tricritical point the A-domains coalesce and cover the whole system and there is a continous transition into the stripe phase. F The spin-spin correlation function for the nearest-neighbour TLIAF Here we show how to calculate the real-space, spin-spin correlation function, S(r) [Eq. 4], for the nearest-neighbour TLIAF, working in both the constrained manifold (i.e. without defect triangles) and the full, unconstrained manifold. Integral expressions for the correlation function can be derived in the thermodynamic limit, and numerical evaluation results in exact results up to numerical error. In the isotropic case these calculations just show how to derive the long-established results of Ref. [4,5] within the Grassmann variable approach [59]. The point of showing the calculations here is that the Grassmann approach makes it simple to extend the old results to the case of anisotropic interactions, for which it is necessary to separately consider correlations parallel and perpendicular to the string direction. In this Appendix we show the mechanical steps used to calculate the correlation functions, while a physical discussion of the results is given in Appendix C, Appendix D and the main text. We consider separation vectors, r = r j − r i , that are either perpendicular or parallel to the average direction of the strings, and label the corresponding correlation functions as S ⊥ (r) and S (r) (corresponding to theê x andê y direction in Fig. 32). F.1 Spin-spin correlations in the constrained manifold First we consider the nearest-neighbour TLIAF with a constrained manifold. The calculations are slightly simplified by using a unit cell that contains 2 triangular lattice sites and 4 honeycomb/brick lattice sites, as shown in Fig. 
32 (as opposed to the minimal unit cell with 1 triangular and 2 honeycomb/brick sites used in Appendix C). The two spins contained within the ith unit cell are labelled σ 1,i and σ 2,i and the perpendicular and parallel spin-spin correlation functions are, S ⊥ (rê x ) = σ 1,i σ 1,i+êx , S (rê y ) = σ 2,i σ 2,i+êy . (90) The unit cell also contains 4 Grassmann variables, labelled a 1,i , b 1,i , a 2,i and b 2,i . These can be used to determine the partition function as in Appendix C, resulting in, where the action is, Fourier transforming the Grassmann variables results in, and this is diagonalised to give, F.1.1 Correlations perpendicular to the strings In order to calculate the spin-spin correlation function, it is necessary to express products of spins in terms of Grassmann variables. Before considering the general case, it is useful to first consider a pair of spins, σ 1,i and σ 1,i+êx , separated by a single honeycomb/brick lattice bond (see Fig. 32). If this bond is covered by a dimer then the spins are equivalent and σ 1,i σ 1,i+êx = 1, while if it is not dimer-covered σ 1,i σ 1,i+êx = −1. The expectation value is therefore given by, where P dim b 1,i ;a 1,i is the probability of finding a dimer on the bond connecting the Grassmann variables b 1,i and a 1,i . In order to determine P dim b 1,i ;a 1,i one can calculate a reduced partition function in which the sites b 1,i and a 1,i are excluded. Exclusion of these sites effectively fixes a dimer on the bond between them, and therefore P dim b 1,i ;a 1,i is given by the ratio of the reduced partition function to the original partition function, Z hon [Eq. 94]. In order to exclude the two sites, it is simply necessary to place b 1,i and a 1,i inside the partition function integral, using the properties of Grassmann variables (a 2 = 0). In consequence one finds, and it is clear that P dim b 1,i ;a 1,i is just the thermodynamic average of b 1,i a 1,i . In consequence, The thermodynamic average of two Grassmann variables can be calculated using, b 1,i a 1,j = 2 N k b 1,−k a 1,k e ik·(r j −r i ) e iky/2 , b 1,−k a 1,k = e iky/2 In the isotropic case (z = 1) integration yields P dim b 1,i ;a 1,i = 1/3, as expected, and therefore σ 1,i σ 1,i+êx = −1/3. More generally, the correlation between a pair of spins with a separation vector parallel toê x is given by, This can be expanded using Wick's theorem, and rewritten as the deteminant of an r × rdimensional Toeplitz matrix, resulting in, with components, In the thermodynamic limit the sum can be converted into an integral giving, where u = 2z 2 (1 − cos k x ). The integral over k y is given by, It follows that, where, is the Fermi wavevector of the quantum model H 1D [Eq. 51]. It can be seen that (M ⊥ ) mn = (M ⊥ ) nm and thus the Toeplitz matrix is symmetric. It is also worth noting that the matrix elements could have been calculated by making use of the exact mapping onto the 1D quantum model given in Appendix C.5. F.1.2 Correlations parallel to the strings The correlation between spins parallel toê y can be calculated by an analagous method. The difference is that a pair of spins are separated by not one but two dimers (see Fig. 32). As such the correlation function is given by, (2za 2,i+lêy b 1,i+lêy − 1)(2zb 2,i+lêy a 1,i+lêy − 1) . where the z's take into account the weights of the excluded dimers. Wick's theorem allows this to be rewritten as the determinant of a 2r × 2r-dimensional Toeplitz matrix, with components, where m, n ∈ {1 . . . r}. 
The matrix elements can be calculated from, where, F.2 Spin-spin correlations in the unconstrained manifold Calculation of the spin-spin correlation function in the nearest-neighbour TLIAF with an unconstrained manifold follows a very similar pattern to that of the constrained manifold. However it is complicated by having to work with 6 or 12 Grassmann variables in the unit cell, as well as the fact that the extended brick lattice is not bipartite. The two-spin unit cell is shown in Fig. 33 and contains 12 sites of the extended brick lattice, and therefore 12 Grassmann variables, which are labelled a 1 ...a 6 and b 1 ...b 6 . The partition function can be calculated as in the constrained case, and this results in, with, The spin-spin correlation function in the direction perpendicular to the strings (parallel toê x ) is given by, and as in the constrained manifold case the correlation function can be written as the determinant of an r × r-dimensional Toeplitz matrix, M ⊥ , with matrix elements, The correlation function in the direction parallel to the strings (parallel toê y ) is given by, The matrix elements of interest can be determined from, where b 2,−k a 2,k etc. are relatively simple to calculate within the Grassmann variable approach, but result in very length expressions. Finally, we note that when calculating the determinant of the Toeplitz matrices, there is typically a finely-tuned cancellation between different terms. In the calculations presented above, this is unproblematic, since the matrix elements can be computed exactly (at least up to numerical accuracy). However, this limits the utility of the Toeplitz matrix approach for models with further-neighbour interactions where only a perturbative solution of the Grassmann variable spectrum is available. Small, unavoidable errors in the calculation of the matrix elements quickly have a significant effect on the determinant, resulting in unphysical results. For this reason we do not use this method for the J 1A -J 1B -J 2 model considered in Appendix E, but instead rely on finite-size Monte Carlo simulations of the correlation function. G Mapping between the 2D Kasteleyn partition function and a 1D fermionic coherent-state path integral Here we demonstrate the correspondence between the 2D, classical, nearest-neighbour TLIAF, H ABB [Eq. 32], and a 1D quantum model of spinless fermions, H 1D [Eq. 51]. For simplicity, we work in the constrained manifold of Ising configurations, but an analagous calculation can be carried out in the unconstrained manifold. In particular, we show that the Kasteleyn formulation of the partition function naturally maps onto a quantum path integral written in terms of fermionic coherent states. This makes clear that the Grassmann variables introduced in the Kasteleyn method describe coherent states of strings. The standard way to map between a d dimensional classical theory and a d−1 dimensional quantum theory is by equating the transfer matrix and the Hamiltonian according to T ≡ e −δτ H , where δτ is a small step in imaginary time. The partition function can then be viewed either as a matrix product of transfer matrices, or as a quantum path integral. Detailed examples of how to carry out this procedure in the cases of spin-ice and the cubic dimer model are presented in Ref. [63,64]. In the case of the TLIAF, the transfer matrix acts on states of strings, as shown in Fig. 34, and a translation across 4 bonds is necessary before the lattice structure repeats. 
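The relation T ≡ e −δτ H can be made concrete with a short numerical example. The sketch below uses the transfer matrix of a 1D classical Ising chain, a much simpler stand-in for the string transfer matrix discussed here, to show how the partition function is a product of transfer matrices and how an effective quantum Hamiltonian (containing a transverse-field-like term) is extracted from the matrix logarithm; none of the numbers refer to the TLIAF itself.

import numpy as np
from scipy.linalg import logm

J, T_cl, N = 1.0, 2.0, 100                   # classical coupling, temperature, chain length
K = J / T_cl
Tmat = np.array([[np.exp(K), np.exp(-K)],
                 [np.exp(-K), np.exp(K)]])   # transfer matrix of the 1D classical Ising chain

Z = np.trace(np.linalg.matrix_power(Tmat, N))   # partition function of the periodic chain

H_eff = -np.real(logm(Tmat))                 # from T = exp(-dtau*H) with dtau = 1
print(Z)
print(H_eff)                                 # of the form c0*identity + c1*sigma_x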
These string states can be reinterpreted as the fermionic state of a 1D quantum model at a given imaginary time coordinate, and the classical partition function sum is therefore equivalent to the fermionic path integral.

Figure 34: The transfer matrix translates strings (shown in purple) by four honeycomb bonds in the y direction (from one blue dashed line to the next). At each translation the string can follow one of four possible routes, with the result that its x coordinate either remains the same or gets translated one step to the left or right. The string state on one of the blue dashed lines can be specified by the presence (1) or absence (0) of a string at each site on the line. This can be re-interpreted as the presence or absence of a spinless fermion at a particular imaginary timestep in a 1D quantum model.

While it is possible to solve the nearest-neighbour TLIAF using a transfer matrix approach [2,3], the solution is considerably more compact using the Kasteleyn formulation expressed as a multiple integral over Grassmann variables (see Appendix C and Appendix D). Furthermore, the Kasteleyn approach provides a good starting point for perturbative studies of more complicated models (see Appendix E.2). As such it would be useful to know how to link the Kasteleyn action to that of the fermionic path integral. We demonstrate below that the Kasteleyn action naturally maps onto the quantum action when written in terms of fermionic coherent states. To do this we first re-examine the classical partition function using a non-minimal, 4-site unit cell, motivated by the fact that the string states shown in Fig. 34 involve a translation across 4 sites. We then examine the coherent-state fermionic path integral, and show that the action can be brought to the same form as the Kasteleyn action by introducing and summing over extra degrees of freedom that take into account the intermediate sites present in the honeycomb/brick lattice (see Fig. 34). Finally we link the Kasteleyn spectrum to that of the quantum model. It should be noted that the mapping could just as well have been performed in the other direction, by starting from the Kasteleyn action and performing a Gaussian integral over half of the Grassmann variables to arrive at the fermionic coherent-state path integral. While this alternative method is probably slightly more direct, we feel that the method we present makes clearer the physical link between the two. G.1 Kasteleyn action in 4-site basis The procedure for determining the Kasteleyn action in terms of Grassmann variables was developed in [6] and is reviewed in Appendix C. To ease the comparison with the 1D quantum model, H 1D [Eq. 51], we here re-determine the Kasteleyn action of the nearest-neighbour TLIAF in the constrained manifold, using a 4-site unit cell and a different set of bond orientations compared to the main text. The reason for using a 4-site cell is that it is natural to identify a string traversing 4 sites of the honeycomb/brick lattice with a single imaginary timestep in the quantum model (see Fig. 34). The bond orientations are shown in Fig. 35, and the reason they are different from those in the main text is just to simplify the mapping. They are of course chosen in accordance with Kasteleyn's theorem [46] and therefore there is no effect on the physical properties.
G.3 Matching the fermionic and Kasteleyn actions The action, S 1D [η,η], exactly reproduces the partition function of the nearest-neighbour TLIAF, but it does this by averaging over some of the degrees of freedom of the Kasteleyn action. In order to make the mapping explicit, it is necessary to introduce these extra degrees of freedom into the quantum path integral. The microscopic relation between the quantum and classical partition functions requires a correspondence between an imaginary timestep and a translation of the classical system across 4 bonds in the y direction (see Fig. 36). In the classical set-up the string can hop by a maximum of one 1D lattice site per imaginary timestep, and the expansion of the quantum time-translation operator can therefore be truncated to first order without approximation, giving However, it can be seen in Fig. 36 that, if the string configuration is only known every 4 bonds (i.e. on the blue dashed lines in Fig. 36), there is an ambiguity, since a string can take two possible routes that leave its x coordinate invariant. In terms of the original Ising spins, these two possible routes describe different configurations. In order to explicitly describe all where, S 1D [η,η, θ,θ] = l,m −η l,m η i,m −θ l,m θ l,m + z(η l,m+1 θ l,m +η l+1,m+1 θ l,m +θ l,m η l,m +θ l−1,m η l,m ) . (140) The fermionic action is now in a form that can be directly compared with the Kasteleyn action S 2 [a 1 , b 1 , a 2 , b 2 ] [Eq. 119]. It can be seen that these can be brought to the same form simply by identifying, η l,m → a 1,i ,η l,m → b 1,i , θ l,m → a 2,i ,θ l,m → b 2,i , where there is an equivalence between i, which labels the unit cells in the 2D lattice, and (l, m), which labels the sites and timeslices in the 1D quantum problem. This justifies the mapping between the quantum and classical coefficients given in Eq. 51. G.4 Matching the classical and quantum spectrums As well as showing how the Kasteleyn and fermionic coherent state actions can be brought to the same form, it is also useful to show the link between the spectrums. Since the fermions/strings are free, the partition functions can be simply evaluated by Fourier transform, and the spectrums compared. The Kasteleyn spectrum is given by | K p | [Eq. 123], and the partition function, Z hon [Eq. 121] is the product of this spectrum. In conclusion, there is an exact mapping between the Kasteleyn action and that of the fermionic, coherent-state path integral with Hamiltonian H 1D [Eq. 124]. This mapping also makes it clear that the Grassmann variables introduced in the Kasteleyn formulation of the classical partition function describe coherent states of strings, and therefore demonstrates the link between the Kasteleyn and transfer matrix approach to solving the nearest-neighbour TLIAF. H Perturbative expansion of the action: a simple example The Grassmann path integral representation of interacting dimer problems, as used in Appendix E.2, is not unknown [61,66] but has not been widely explored in the literature. As an aid to the interested reader, we here consider Z hon2 [Eq. 72], and show a worked example of how to evaluate this partition function via Grassmann path integration on the simplest, nontrivial lattice: the hexagonal plaquette. This provides useful insights into the construction of a perturbation theory for the infinite lattice, as presented in Appendix E.2. 
We consider the dimer covering of a hexagonal plaquette with a dimer weight of 1 on A bonds, z on B and C bonds and a dimer interaction with weight z 2 between dimers separated by one unfilled bond (see Fig. 37). For a single plaquette there are only two possible dimer coverings, shown in Fig. 37, and each of these has a weight z 2 z 3 2 . The partition function, Z hon2 [Eq. 72], is therefore given by, Z hon2 = 2z 2 z 3 2 = 2z 2 (z 2 − 1) 3 + 3(z 2 − 1) 2 + 3(z 2 − 1) + 1 , where the second equality is an exact rewriting that will prove useful below. While in such a simple case the partition function can be calculated exactly just by inspection, it is instructive to perform the calculation via the Grassmann path integral representation. On a finite lattice the highest order term in the action is S 2N [a, b], where 2N is the number of honeycomb lattice sites, and for the 6-site plaquette the partition function can therefore be rewritten as, where i = {1, 2, 3}. The quadratic term in the action does not take into account the z 2 interaction, and is given by, If the action is truncated at quadratic order, then the usual rules of Grassmann integration can be used to find, Comparison with the exact value of the partition function [Eq. 148] shows that this quadratic approximation becomes exact in the limit z 2 − 1 → 0. The quartic term in the action takes into account pairwise interactions of the dimers in isolation from other pairwise interactions, and is given by, The factor z 2 − 1 is chosen such that if there were a configuration with only 1 dimer-dimer interaction, the -1 would remove the contribution from the purely quadratic action, while the z 2 would replace this with a contribution that takes the interaction into account. In the case of the hexagonal plaquette the allowed dimer configurations contain 3 mutually interacting dimers, and this mutual interaction is not fully taken into account by the quartic term. Direct evaluation results in, reproducing the exact partition function [Eq. 148] to first order in z 2 − 1. Finally, the hexatic term takes into account the fact that the dimers are not interacting in isolation, but are all mutually interacting, and is given by, where in the first line the −1 removes the contribution from the quadratic action, the 3(z 2 −1) removes the contribution trom the quartic action and the z 3 2 replaces these with a contribution that correctly reproduces the weight of three mutually interacting dimers. Direct calculation including quadratic, quartic and hexatic terms correctly reproduces Eq. 148 for the partition function, i da i db i e S 2 [a,b]+S 4 [a,b]+S 6 [a,b] = 2z 2 (z 2 − 1) 3 + 3(z 2 − 1) 2 + 3(z 2 − 1) + 1 = 2z 2 z 3 2 . As the lattice size is increased, it rapidly becomes impossible to determine the partition function by inspection. A full expansion of the partition function in terms of Grassmann variables also becomes complicated due to the increase in the number of terms in the action. However, the advantage of this method is that it provides a way of systematically carrying out perturbation theory around the non-interacting limit |z 2 −1| → 0. For large or infinite lattices direct evaluation of Grassmann actions with quartic and higher order interacting terms is not possible, but approximate diagrammatic methods can be used, and the order of expansion matched to that of the truncation of the action.
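The bookkeeping in this worked example is easy to verify numerically. The short sketch below compares the exact plaquette partition function 2 z^2 z_2^3 with the truncations described above (quadratic only; quadratic plus quartic; and the full expansion including the hexatic term); the particular values of z and z_2 are arbitrary.

import numpy as np

z, z2 = 0.7, 1.3                  # illustrative dimer and interaction weights
eps = z2 - 1.0                    # the small parameter of the expansion

Z_exact     = 2 * z**2 * z2**3
Z_quadratic = 2 * z**2                                         # S2 only: exact as z2 -> 1
Z_quartic   = 2 * z**2 * (1 + 3 * eps)                         # S2 + S4: correct to first order in (z2 - 1)
Z_full      = 2 * z**2 * (eps**3 + 3 * eps**2 + 3 * eps + 1)   # S2 + S4 + S6: exact

print(Z_exact, Z_full)            # identical
print(Z_exact - Z_quartic)        # O((z2 - 1)^2)
print(Z_exact - Z_quadratic)      # O(z2 - 1)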
Genome-wide association study identifies multiple susceptibility loci for craniofacial microsomia

Craniofacial microsomia (CFM, MIM: 164210) encapsulates congenital anomalies of the external and middle ear, maxilla, mandible, facial and trigeminal nerves, and surrounding soft tissues on the affected side 1. The occurrence of CFM is between 1 in 3,000 and 1 in 5,600 living births 2. Popular assumptions for the pathogenesis of CFM include neural crest cell (NCC) disturbance and vascular disruption 3. The NCCs originate from the neural ectoderm, migrate over long distances to participate in the formation of the first and second pharyngeal arches, and give rise to craniofacial structures 4. Mouse models have indicated that dysfunctional genes involved in NCC delamination, proliferation, migration or reciprocal interactions with other cell types in the pharyngeal arches would cause impairments of craniofacial development 5. Through vascular disruption during the morphogenesis of the craniofacial vascular system 6, localized ischaemia has been considered as another risk factor for CFM, although this notion is debatable 7. Many studies have revealed that CFM is caused by inherited and/or environmental factors 3,8,9. Genetic variants are largely believed to contribute to this anomaly. Although various CFM candidate genes have been proposed from mouse models or human syndromes with CFM 3, to date very few genetic variants have been identified and validated in humans. To fill in gaps in our knowledge about CFM and to decipher its genetic basis, we perform the first genome-wide association study (GWAS) along with whole-genome sequencing (WGS) in CFM patients from China. We find eight significant and five implicated loci associated with CFM. Functional analyses on these loci identify multiple CFM candidate genes involved in NCC development.

Results Basic GWAS results. For discovery, we conducted a GWAS in 939 CFM cases and 2,012 healthy controls from China, by testing single-nucleotide polymorphisms (SNPs) that satisfied quality control (792,342), with or without stratification into subgroups by gender and (left- versus right-) side-affected CFM. We then evaluated the significant SNPs with a P value < 1 × 10^-5 from the discovery stage in a validation set of 443 cases and 1,669 controls from China. Logistic regression (LR) analyses on the two combined sample sets identified seven genome-wide significantly associated loci (P < 6.3 × 10^-8, the Bonferroni-corrected significance threshold) with lead SNPs of rs13089920 (LR P = 2.15 × 10^-120, odds ratio (OR) = 5.18), rs10459648 (LR P = 2.86 × 10^-23, OR = 0.63), rs17802111 (LR P = 9.57 × 10^-18, OR = 1.48), rs11263613 (LR P = 7.91 × 10^-17, OR = 1.68), rs3754648 (LR P = 6.33 × 10^-13, OR = 1.39), rs7420812 (LR P = 6.74 × 10^-10, OR = 1.33), and rs10905359 (LR P = 5.11 × 10^-9, OR = 0.76; Fig. 1, Table 1). In addition, five implicated loci with lead SNPs of rs3923380, rs754423, rs4750407, rs9574113 and rs7222240 reached a suggestive genome-wide significance level (LR P < 1 × 10^-5; Table 1, Supplementary Fig. 1). LR analyses on the subgroups of CFM by gender and affected side identified a significantly associated locus with a leading SNP of rs17090300 (LR P = 1.04 × 10^-11, OR = 2.31) in left-side-affected CFM patients (n = 481; Supplementary Fig. 2).
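For readers unfamiliar with how the quoted odds ratios and P values arise, the sketch below fits a logistic regression of case/control status on a single SNP's allele dosage (0, 1 or 2) and reads off the per-allele odds ratio and Wald P value. The data are synthetic and the use of the statsmodels package is an illustrative choice, not the analysis pipeline of the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000
dosage = rng.binomial(2, 0.3, size=n).astype(float)          # synthetic risk-allele counts (0/1/2)
true_logit = -1.0 + 0.4 * dosage                              # assumed per-allele log-odds of 0.4
status = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))  # synthetic case/control labels

fit = sm.Logit(status, sm.add_constant(dosage)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])     # per-allele OR, analogous to the ORs quoted above
p_value = fit.pvalues[1]               # Wald P value for the SNP term
print(f"OR = {odds_ratio:.2f}, P = {p_value:.1e}")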
Significant heterogeneity of the association pattern (Cochran's Q-test P = 2.09 × 10⁻⁶) was found between the left- and right-side-affected subgroups at rs17090300 (Supplementary Table 1). The phenotypic variance explained by the significantly associated and implicated lead SNPs was 6.92% and 1.96%, respectively, with a prevalence rate of 1.4 per 10,000 in China 10 . Furthermore, the joint effect of all 792,342 genotyped SNPs could explain 28.4% of the variance observed in this study. Imputation followed by conditional and joint analyses. To identify additional associated variants, we imputed the untyped variants from the genotyping data and the haplotype information provided by the 1000 Genomes Project (1KG). Among the imputed variants, we identified 68 additional SNPs (LR P < 1 × 10⁻⁵) associated with CFM risk in the 13 associated loci (7 significant, 1 left-side specific, and 5 suggestive loci; Supplementary Data 1). To assess whether the 68 SNPs were independent from our initially identified lead SNPs, we performed conditional analyses on the genotyped and imputed variants. We did not find any other independently associated variants (Supplementary Fig. 3). To identify other new loci associated with CFM, we used multiple regression (by Wu et al. 11 ) to test for the joint effect of the variants from a gene or haplotype block. We were able to replicate some of the identified loci, and did not find additional associated ones (Supplementary Table 2). Thus, no new associated variants or loci were identified in the conditional and joint analyses. Functional annotation and eQTL analysis. Functional noncoding variants within gene regulatory elements may potentially result in a disease phenotype through modulating gene expression levels. To predict the effects of variants on gene expression, we submitted 291 SNPs (including 151 imputed SNPs) with a P value <1 × 10⁻⁴ to SeattleSeq Annotation 138 and HaploReg (v2) for analyses 12 . We found six SNPs located in known transcription factor-binding sites (TFBS), and three of them (GWAS P < 6.3 × 10⁻⁸) were located near or within ROBO1 (rs147642420), KLF12 (rs7986825) or ARID3B (rs7497036) (Supplementary Data 2). Among the HaploReg-annotated variants, 187 SNPs were located in gene expression regulatory motifs, such as enhancers, promoters, open chromatin and protein-binding sites (Supplementary Data 3). For enrichment analyses of cell type-specific enhancers and DNase hypersensitive sites, we conducted queries in HaploReg with the 291 SNPs and their linked SNPs (r² = 1), based on the epigenomic data from ENCODE or Roadmap. As for the ENCODE data, these SNPs were enriched in the enhancers or DNase hypersensitive sites of eight cell lines (Supplementary Data 3), noting that the fold change from observed to expected 'strongest enhancer' was 15.6 (binomial test P < 1 × 10⁻⁶) in H1 embryonic stem cells. As for the Roadmap data, these SNPs were significantly (χ²-test P = 1.3 × 10⁻⁴) enriched in the enhancers of stem cells and stem cell-derived cell lines (Supplementary Fig. 4). To confirm the relations between CFM-associated SNPs and gene expression, we used Genevar to map the expression quantitative trait loci (eQTL) by correlating the SNPs with gene expression levels in lymphoblastoid cell lines from HapMap populations 13 .
We found that several lead SNPs or their linked variants (r² > 0.8, calculated from the Asian populations of 1KG) had nominal associations (P < 0.05) with the expression levels of the nearest genes (Supplementary Data 4), such as ROBO1 (rs4401330, linear regression P = 9.2 × 10⁻³, in CHB), KLF12 (rs7986825, linear regression P = 8.8 × 10⁻³, in GIH), EDNRB (rs5351, linear regression P = 5.1 × 10⁻³, in YRI) and SHROOM3 (rs4859453, linear regression P = 2.0 × 10⁻³, in JPT). We then looked into the regulatory function of these nominal cis-eQTLs and found that 63% of them were located within promoters, enhancers, DNase hypersensitive sites or TFBS (Supplementary Data 5). Enrichment analyses for these regulatory elements showed that embryonic cells, epithelial cells and carcinoma cells were significantly enriched for those 'strongest enhancers' or DNase hypersensitivity sites. Pathway analyses. To identify the CFM candidate genes from the 13 associated loci and their potential connections, we used the Gene Relationships Across Implicated Loci (GRAIL) method 14 to analyse the 46 genes within the 13 associated loci (Supplementary Data 6). Overall, 13 candidate genes were identified by GRAIL as follows: ROBO1, GATA3, EPAS1, PARD3B, GBX2, SHROOM3, FRMD4A, FGF3, KLF12, EDNRB, NID2, SEMA7A and PLCD3. The pairwise relationships for the genes in the associated loci are illustrated in Supplementary Fig. 5. In particular, this figure highlights that genes involved in embryonic development, such as ROBO1, NRP2, GBX2, FGF3, PARD3B, SEMA7A and SHROOM3, are closely connected. Also, ROBO1, NRP2, GBX2, FGF3 and SEMA7A are involved in signalling pathways that regulate the migration of NCCs. To investigate the enrichment of functional annotation, we used the Database for Annotation, Visualization and Integrated Discovery (DAVID) 15 . [Figure legend: Each point represents a SNP plotted with its −log₁₀ P value as a function of genomic position (hg19). Imputation analysis is shown with circles and direct genotyping with squares. In each regional plot, the purple symbol denotes the lead SNP, with its name shown at the top of the plot. The colour coding of the remaining SNPs indicates LD with the lead SNP: red, r² ≥ 0.8; gold, 0.6 ≤ r² < 0.8; green, 0.4 ≤ r² < 0.6; cyan, 0.2 ≤ r² < 0.4; blue, r² < 0.2; grey, r² unknown. Recombination rates were estimated from the ASN population of the 1KG project (Mar 2012). Gene annotations were taken from the UCSC genome browser.] The enriched annotation terms grouped into four clusters: (1) …; (2) … and mesenchymal cell; (3) vasculature development; and (4) regulation of phosphorus metabolic process. We used pairwise kappa similarity between terms from these four clusters to show their network structures (Fig. 2). We found that the four clusters of biological processes correlated with each other. Many terms within the four clusters are relevant to the processes of embryonic development, noting that the differentiation and migration of NCCs and mesenchymal cells play paramount roles in craniofacial morphogenesis. To further explore the CFM candidate genes and their expression patterns, we analysed the 13 associated loci with the Data-Driven Expression-Prioritized Integration for Complex Traits (DEPICT) tool 16 . This analysis showed that 11 significantly prioritized genes (SHROOM3, DCAKD, NID2, PARD3B, ROBO1, ARID3B, KLF12, FGF3, EPAS1, EDNRB, FRMD4A; with a false discovery rate <5%) had functional connections (Supplementary Data 8).
Gene set enrichment analyses identified the enriched categories of 'positive regulation of cell differentiation,' 'abnormal neural tube morphology,' and 'failure of initiation of embryo turning' from those genes. Tests of enrichment of expression in particular tissues and cell types further identified 25 significant categories (t-test P value <0.05), including 3 entries from the cardiovascular system, 6 from the musculoskeletal system, 6 from stem cells and 4 from the connective tissue cells (Supplementary Fig. 6). These significant categories were closely related to each other and critical for embryonic development. Gene expression patterns in embryos and gene-editing mice. To investigate the expression patterns of our candidate genes in embryos, we interrogated the in situ hybridization data from the gene expression database of the Mouse Genome Informatics, the Gallus Expression in situ Hybridization Analysis, and the Xenbase. All the candidate genes were expressed in the above databases, and 10 of them were expressed in the pharyngeal arches, from which the craniofacial structures develop (Supplementary Fig. 7 and Data 9). It is notable that all the candidates were expressed in the CFM-influenced organs during embryogenesis, such as the cranial ganglion, mandible and the sensory organs of the ear and eye. To understand the phenotypic consistency between CFM and mutant mouse models of the candidate genes, we interrogated the phenotypes of gene-editing mice deposited in the database of Mouse Genome Informatics. Mutant mice of nine candidate genes (ROBO1, GATA3, GBX2, FGF3, NRP2, EDNRB, SHROOM3, SEMA7A and ARID3B) were characterized by malformations of the craniofacial system (Supplementary Fig. 7 and Data 10). Many mutant mice even shared similar phenotypes with CFM, such as abnormal craniofacial bone morphology, abnormal ear development and abnormal cranial ganglia (… Fig. 9). p.R291H in PLCD3 may disrupt an H-bond between amino acids 291 and 286 (Supplementary Fig. 10), which may change the energy level (from −281.151 to −14.364) at the 291 site and potentially lead to instability of the local structure. Discussion Gestational exposure to teratogens supports the notion that environmental factors contribute to CFM. However, various susceptibility loci identified from CFM or CFM-related syndromes indicate the critical involvement of genetics in this congenital disease 3,17 . Here we performed the first GWAS on CFM, identifying eight genome-wide significant loci and five implicated loci, which jointly explain 8.9% of the variance in susceptibility to this craniofacial anomaly. Several CFM-related genes or mutations have been proposed previously. Importantly, the candidate genes within the 13 loci, except FGF3 (ref. 18), are newly reported in association with CFM. Our findings not only identify new risk loci for CFM, but also imply the complexity of the genetic aetiology of this malformation. Our results suggest that the candidate genes within the 13 associated loci are strongly correlated with craniofacial development. First, the most prominent finding is that many of our candidate genes, such as ROBO1, GBX2, NRP2, EDNRB and FGF19, are functionally connected to each other and involved in NCC and mesenchymal cell development and vasculogenesis. It is well known that craniofacial structures are derived from the first and second pharyngeal arches, which are composed of mesenchymal cells of cranial neural crest and mesodermal origin 4 .
Second, the cell type-specific enrichment analyses based on the ENCODE or Roadmap projects indicate the enrichment of CFM-associated variants in regulatory elements of embryonic stem cells, which implies their potential roles in embryonic development. Third, in situ hybridization in the embryos of mouse, chicken and frog demonstrate that many of our candidate genes, such as ROBO1, FGF3, EPAS1, KLF12, ARID3B, GBX2, EDNRB and NRP2, are highly expressed in the pharyngeal arches and their derivatives of CFM-related craniofacial substructures, such as jaw, ear and eye. Fourth, mutant mice of the candidate genes frequently exhibit abnormalities at pharyngeal arches and the craniofacial region [19][20][21][22][23][24][25] . For example, Ednrb or Arid3b mutant mice have abnormal pharyngeal arch morphology 24,25 . Mouse embryos deficient in GBX2 display aberrant migration and patterning of NCCs through disrupting the Slit/Robo signalling pathway 21,23 . Mutations in SHROOM3 lead to cranial neural tube defects in mice 22 . Altogether, our findings reinforce the involvement of these genes in the pathogenesis of CFM. NCCs are generated at the dorsal of the neural tube and subsequently undergo processes of delamination, transition, migration, patterning and differentiation into multiple cell types, which contribute to the formation of peripheral nervous system, craniofacial cartilage and bones and pigment cells 4 . Many of candidate genes identified in this study participated in all steps of NCCs development. SHROOM3 plays a critical role in neural tube closure 26 . GBX2 activates the expression of ROBO1 involved in the Slit-Robo signalling that controls the motility and localization of NCCs 27 . NRP2 is involved in the Sema-Nrp signalling, which shapes the NCCs migration streams by marking NC-free regions 28 . SEMA7A may also be involved in the Sema-Nrp signalling due to its widespread expression in cranial NCCs 29 . EDNRB is highly expressed in neural crest-derived head mesenchyme and determines the migration path of NCCs 30 . ARID3B and FGF3 encode integrant for the identity, survival and differentiation of chondrogenic NCCs 31,32 , and the FGF signalling is also important for the homing process of NCCs 33 . In summary, many of the CFM candidate genes participate in the migration and differentiation of NCCs, and subsequently affect the formation of the NCCs-derived craniofacial organs. NCC development disturbance has been well-accepted in the pathogenesis of CFM, while the hypothesis of vascular disruption is also noticeable 3 . Disruption in the development of the blood vascular system in an embryo can result in local ischaemia and birth defects 34 . In this study, EPAS1 is found as a candidate gene for CFM. EPAS1 is highly expressed in pharyngeal arches and vascular endothelial cells to regulate several genes involved in the development of blood vessels 35 . Meanwhile, one of the fates of NCCs is to differentiate into vascular endothelial cells and, later, to build up the vascular wall 4 . Although more studies are needed to reveal the relationship between EPAS1 and NCCs, NCCs disturbance and vascular disruption may act synergistically to result in the facial malformation. Our results significantly improve our understanding of the genetic pathogenesis of CFM. However, further studies are required to strengthen our findings. First, future GWAS and subsequent meta-analysis with world-wide CFM patients are expected to validate those associated loci, as well as to identify new ones. 
Second, deep-sequencing more DNA samples at the associated loci would help to identify the causative variants for CFM with next-generation sequencing technologies. Third, those associated variants mapped to regulatory elements require functional validation in their relevant cell types, such as NCCs and stem cell lines. Taken together, our study finds several new risk loci for CFM and connects the candidate genes to the biological processes of NCC migration and differentiation. The results not only highlight the genetic architecture of CFM, but also provide new clues for other craniofacial anomalies or syndromes. Methods Samples. We collected 1,382 congenital CFM patients from the Plastic Surgery Hospital of Peking Union Medical College as the case cohort for a GWAS study. The cohort was composed of 1,056 males and 326 females with a mean age of 11.9 years (s.d.: 6.5; range 4-48 years). Most of the patients presented with a unilateral anomaly (1,256 individuals, 90.9%), with the right side affected in nearly 61.7% (775 individuals). More details on phenotypes are given in Supplementary Data 13. Among them, 1,308 patients had a record of geographic location, and 71% of them were from northern China (using the boundary suggested by Xu et al. 36 ). The control cohort was composed of 3,681 individuals (2,362 males and 1,319 females) and was collected from several medical examination centres located in both northern and southern China. The percentage of control samples from northern China was 69.7%, not significantly (two-tailed χ²-test P = 0.52) different from that of the case cohort. All participants signed informed consent forms for biological investigations. This project was reviewed and approved by the Ethics Committee of the Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Beijing Institute of Genomics, Chinese Academy of Sciences, in adherence with the Declaration of Helsinki Principles. Genotyping and quality control. All DNA samples were extracted using DNA-extraction kits (Tiangen Biotech). At the discovery stage, 942 cases and 2,020 controls were randomly loaded onto 96-well plates and genotyped with the Human Omni-Zhonghua chips (Illumina) according to the manufacturer's specifications. The genotyping module of GenomeStudio v3.0 (Illumina) was used to call the genotypes of about 0.9 million SNPs. All DNA samples were successfully genotyped at a call rate >99.7% with a genotype call threshold (the boundary for calling genotypes relative to the associated cluster) of 0.15. The genotype concordance rate for three duplicated individuals was 99.99% on average. To obtain high-quality data for the GWAS, we pruned the discovery-stage data set with the following criteria: sample call rate >99%; SNP call rate >95%; and a threshold for Hardy-Weinberg equilibrium of 0.0001 (Fisher's exact test) in the control cohort (Supplementary Fig. 11). In addition, to exclude closely related individuals, we calculated genome-wide identity by descent (IBD) for each pair of samples. We found that one pair of cases and eight pairs of controls had IBD >0.05, and removed one individual from each pair for the subsequent analyses. Due to the limited power of rare variants in an association study, we only kept SNPs with minor allele frequencies >0.01.
We extracted genotype data of the Yoruba in Ibadan (YRI), Utah Residents (CEPH) with Northern and Western European Ancestry (CEU), Japanese in Tokyo (JPT), Han Chinese in Beijing (CHB) and Southern Han Chinese (CHS) populations from the 1KG project and performed a principal component analysis (PCA) on these samples along with our genotyped samples using the smartPCA package 37 . The Asian populations (including CHB, CHS, JPT, and our samples) clustered together, while the Chinese samples were well separated from the Japanese samples (Supplementary Fig. 12). All Chinese samples clustered into two subgroups, consistent with the notion of two different populations of northern and southern Chinese. We found two outliers (based on genome-wide IBS) among our patients, and they were removed from subsequent analyses. In the end, we obtained 939 cases and 2,012 controls with 792,342 SNPs for our GWAS analyses. The total genotyping rate was 99.86%. Genotyping for the lead SNPs in the 13 loci was done in an additional 446 cases and 1,669 controls using the MassARRAY system from Sequenom. Three samples with more than 5% missing genotypes were removed from the data analysis. Fourteen SNPs had less than 5% missing genotypes and showed no deviation from Hardy-Weinberg equilibrium (P > 0.05, Fisher's exact test) in control samples. Genetic power calculation. We used CaTS 38 to estimate the statistical power of the current sample size. Under a multiplicative model, we set the case number at 942, the control number at 2,012, and a disease prevalence rate of <0.001, then estimated the power to reach significance levels of 0.05, 1 × 10⁻⁴ and 5 × 10⁻⁸ at disease allele frequencies (DAF) of 0.1, 0.05 and 0.01, respectively (Supplementary Fig. 13). Although the power was limited under the current sample size, we still had an 80% chance to obtain genome-wide significant SNPs (P = 5 × 10⁻⁸, the Bonferroni-corrected significance threshold) with a genetic relative risk (GRR) = 1.7 and a minor allele frequency = 0.1, or GRR = 2 and DAF = 0.05, or GRR = 4 and DAF = 0.01. Association test. We estimated the associations between SNP genotypes and CFM traits by applying LRs in Plink (v1.9) 39 . To handle the population stratification of the samples, we performed LRs on all SNPs with the first 20 eigenvectors from PCA as covariates. A QQ plot of this test is shown in Supplementary Fig. 14, for which the genomic inflation factor was 1.036 (based on the median χ²). The Manhattan plots were constructed using qqman 40 . Bonferroni adjustment was applied to correct for multiple comparisons, and the threshold for genome-wide significance was set at a P value <6.3 × 10⁻⁸ (= 0.05/792,342 variants). The regional association plots and linkage disequilibrium (LD) plots were generated using LocusZoom 41 . We performed further conditional LRs on the replication samples with the first 20 eigenvectors from PCA as covariates and carried out combined analyses on the discovery and replication data, male versus female subgroups, and left- versus right-side-affected subgroups using METAL 42 with the following parameters: EFFECT, Beta; weights in P value-based analysis, sample size; and heterogeneity, Cochran's Q-test. Genotype imputing. Pre-phasing of haplotypes at each significantly associated locus was performed with the SHAPEIT algorithm 43 . Imputation of the untyped SNPs within a CFM-associated locus was based on the 1KG project phase 1 integrated variant set (b37; December 2013) with IMPUTE2 (ref. 44).
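The per-SNP test described above (an additive logistic regression with ancestry principal components as covariates, a Bonferroni threshold of 0.05/792,342, and a median-based genomic inflation factor) can be sketched in a few lines. The paper's analysis was run in Plink 1.9 and METAL; the Python/statsmodels version below is only an illustrative reimplementation, and the simulated arrays, their shapes and the number of SNPs are assumptions made for the example.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def snp_association(genotypes, phenotype, pcs):
    """Additive logistic regression per SNP with PC covariates.

    genotypes : (n_samples, n_snps) array of 0/1/2 allele counts
    phenotype : (n_samples,) array of 0 (control) / 1 (case)
    pcs       : (n_samples, n_pcs) ancestry principal components
    Returns per-SNP p-values and odds ratios for the SNP term.
    """
    pvals, ors = [], []
    for j in range(genotypes.shape[1]):
        X = sm.add_constant(np.column_stack([genotypes[:, j], pcs]))
        fit = sm.Logit(phenotype, X).fit(disp=0)
        pvals.append(fit.pvalues[1])          # Wald p-value of the SNP term
        ors.append(np.exp(fit.params[1]))     # odds ratio per risk allele
    return np.array(pvals), np.array(ors)

def genomic_inflation(pvals):
    """Median-based genomic inflation factor lambda_GC."""
    chi2 = stats.chi2.isf(pvals, df=1)
    return np.median(chi2) / stats.chi2.ppf(0.5, df=1)

# Toy data: 2,000 samples, 4 SNPs, 20 PCs (the study used the first 20 eigenvectors)
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(2000, 4)).astype(float)
pheno = rng.integers(0, 2, size=2000)
pcs = rng.normal(size=(2000, 20))
p, odds = snp_association(geno, pheno, pcs)
print(p, odds, genomic_inflation(p))
print("genome-wide threshold:", 0.05 / 792_342)   # ~6.3e-8, as used in the paper
```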
In order to remove poorly imputed SNPs, we used a strict cutoff (info score of 0.85) for post-imputation SNP filtering. LRs, controlling for the first 20 eigenvectors from PCA, were performed to test for the associations of imputed variants with CFM. Conditional association analysis. To identify other independently associated SNPs at a significant locus, we performed a conditional analysis on the genotyped and imputed data using Plink. We first conducted association tests on the remaining significant SNPs by adjusting for the most significant one at that locus. We then repeated the test with adjustment for the most significant one plus the remaining variants until no further genome-wide significant SNPs remained. Independently associated SNPs were those with a P value <0.05 after Bonferroni adjustment in the conditional association test. Joint multiple-SNP analysis for the association study. To interrogate the interactions of SNPs within a gene or a defined haplotype block, we performed joint analyses. We defined 18,414 gene sets harbouring 358,890 SNPs and 120,458 block sets harbouring 606,013 SNPs. To test for joint effects with the SKAT package 11 , multiple LR was implemented with the first five eigenvectors of PCA as covariates and with PolyPhen scores 45 as each SNP's weight. The thresholds for adjustment of multiple tests were set at 2.72 × 10⁻⁶ (0.05/18,414 sets) and 4.15 × 10⁻⁷ (0.05/120,458 sets) for the gene-set and haplotype-set-based regressions, respectively. Functional annotation. We annotated the CFM-associated variants (typed and imputed) using SeattleSeq (v138) and HaploReg (v2) 12 . For SeattleSeq, we kept only variants that might have functional effects (Supplementary Data 2). For HaploReg, we only queried variants with a GWAS P value <1 × 10⁻⁴. All the annotations are displayed in Supplementary Data 3. The LD calculation was based on the ASN populations from 1KG (phase 1), and the LD threshold (r²) was set at 1.0. The enrichment analyses of enhancers and DNase hypersensitive sites were performed based on the ENCODE and Roadmap databases with the 1KG ASN pilot data as the background set. eQTL analysis. To interrogate the associated SNPs with regard to gene expression, we performed eQTL analyses on SNPs with a GWAS P value <0.01 using Genevar (v3.3.0), a platform of databases and web services designed for data integration, analysis and the visualization of SNP-gene associations 13 . With a SNP-centric approach, we ran SNP-gene association analyses with genetic variation and gene expression profiling data from lymphoblastoid cell lines of the CEU, CHB, GIH, JPT, LWK, MEX, MKK and YRI individuals from HapMap. We measured the effects with the parameters set to Spearman's rank correlation coefficients, a window size of 200 kb and a P value threshold of 0.01. For the seven HapMap populations, the significant eQTL SNPs associated with gene expression are illustrated in Supplementary Data 4. Estimation of CFM variance explained. We used the GCTA package 46 to estimate the variance in CFM liability that could be explained by either the associated SNPs or all genotyped SNPs. The prevalence of CFM was 1.4 per 10,000, estimated from a 5-year epidemiological study in China 10 . For each associated locus, we used a SNP set composed of SNPs with a P value <0.05 in that locus to estimate the phenotypic variance that could be explained. Candidate gene prediction and pathway analyses.
We used GRAIL 14 to analyse the potential relationships of the genes residing in the 13 associated loci without phenotype information. The query regions comprised the 200-kb flanking regions of a lead SNP (if no gene was found, then the nearest gene to that SNP was picked). The analysis settings were as follows: human genome assembly, HG18; HapMap population, CHB + JPT; functional data source, PubMed Text (August 2014); gene size correction, off; gene list, default gene list; queries and seed regions, equal. To perform gene-annotation enrichment analyses and functional annotation clustering, we analysed the 46 genes from GRAIL using DAVID v6.7 (ref. 15). A modified Fisher's exact test was used to determine the significance of gene-term enrichment. The enrichment score (ES) was used to rank the overall enrichment of the annotation groups. The ES value was defined as the minus log transformation of the average P values of the annotation terms, and the significance threshold was set at 1.3 (non-log scale of 0.05). To trim the annotation clusters, we used the high-classification-stringency parameter set suggested by DAVID. To depict the relationships among gene ontology terms within a significant cluster, we used the R language to illustrate the kappa similarity between the terms. We also used DEPICT to systematically identify the most likely causal genes in a CFM-associated locus with regard to the highly expressed tissues and cell types and the enriched physiological conditions 47 . We first retrieved 13 independent sets of loci using the clump method in Plink (parameters --clump-p1 1e-5 --clump-kb 500 --clump-r2 0.1). We then submitted them to DEPICT and obtained 13 non-overlapping genomic regions (similar to our previously identified 13 loci) with a total of 29 genes. Meanwhile, gene expression levels and physiological system enrichment were also analysed using various databases of gene expression, protein-protein interactions, Mouse Genetics Initiative, Gene Ontology and the pathways of Reactome and KEGG. Gene expression in embryos and gene-editing mice. To investigate the expression patterns of the candidate genes in embryos, we interrogated in situ hybridization data for ROBO1, GATA3, GBX2, FGF3, NRP2, EDNRB, SHROOM3, SEMA7A, EPAS1, KLF12, PLCD3 and ARID3B using the database of Mouse Genome Informatics, the Gallus Expression in situ Hybridization Analysis and the Xenbase. We focused on the NCC-related tissues and CFM-influenced facial substructures. To explore the phenotypes of mutant mice caused by these candidate genes, we interrogated the database of Mouse Genome Informatics and focused on embryonic substructures related to craniofacial development. Malformation of the external ear is a common characteristic of CFM. The development of the external ears is completed at 5 d.p.n. in the mouse. We collected the external ears from the BALB/c lineage at 18 d.p.c. (3 samples, 1 male and 2 females), 0 d.p.n. (3 samples, 2 males and 1 female), 5 d.p.n. (3 samples, 2 males and 1 female) and adult (4 samples, 3 males and 1 female), respectively. Frozen tissues were disrupted and homogenized in RLT Buffer. Total RNA was extracted from the ear tissue samples with the traditional TRIzol method and quantified with a Nanodrop spectrophotometer (Thermo Fisher Scientific). The quality of the RNA was confirmed with agarose electrophoresis. The total RNA was reverse transcribed into complementary DNA (cDNA) in a 20-μl reaction using a FastQuant RT Kit (YQYK-biotech).
For quantitative reverse transcription-PCR amplifications, gene-specific primers for ROBO1, ARID3B, SEMA7A, FGF3, FGF4, EPAS1, KLF12, GBX2, SHROOM3, NRP2, EDNRB and PLCD3 were obtained from Sangon-biotech (Supplementary Table 3). Quantitative real-time PCR was performed with the 7500 Real-Time PCR system (Applied Biosystems). Each 10-μl PCR reaction included 5 μl of SYBR Green Master mix (Applied Biosystems), 30 ng of cDNA, and 10 pmol of each primer. The expression level of GAPDH was measured in parallel as an internal control for normalization. Amplification quality was confirmed by melt curve analysis demonstrating the absence of nonspecific products or primer-dimers. Three replicates were performed for each biological sample at the reverse transcription step, and the same batch of cDNA was used for all subsequent PCR amplifications. The relative expression level was determined using the 2^−ΔCt method 48 . This experiment was reviewed and approved by the Ethics Committee of the Plastic Surgery Hospital, Chinese Academy of Medical Sciences, in adherence with the Declaration of Helsinki Principles. Whole-genome sequencing. We sequenced the whole genomes of 21 CFM patients from our study samples, including 7 left-side-affected, 7 right-side-affected and 7 bilateral individuals. The selected individuals were those who carried risk alleles (with a frequency greater in cases than in controls) of the lead SNPs rs17802111, rs3754648, rs13089920, rs10905359, rs11263613 and rs10459648 for right-side-affected CFM, rs13089920 and rs17090300 for left-side-affected CFM, and rs13089920 for bilateral CFM. Paired-end sequencing with 150-bp read lengths was performed on an Illumina HiSeq X10 instrument and yielded a mean depth of 27×. All reads were mapped to the human reference genome (hg19) using BWA 49 (version 0.7.5a). PCR duplicates were removed using the Picard software program (version 1.92; http://broadinstitute.github.io/picard/). The Samtools 50 (version 0.1.19) and GATK 51 (version 3.1) software packages were used to call variants. Within the 13 associated loci, we annotated variants with SeattleSeq Annotation 138 and removed variants that had been reported in dbSNP 138. We then focused on the missense, frameshift, splicing and conserved (GERP score >2 or phastCons score >0.8) variants, as well as variants in TFBSs. All variants in Supplementary Data 11 passed manual confirmation using the IGV package 52 . Functional analyses on variants. We performed functional analyses on all identified candidate variants with the following steps. First, we evaluated the possible impacts of the mutations on the structures or functions of the corresponding proteins using PolyPhen-2 (ref. 53) and SIFT 54 . Mutations with a PolyPhen score >0.5 or a SIFT score <0.05 were considered deleterious to the function or structure of the protein. Second, SignalP 4.1 was used to predict the signal peptide under the assumption that the protein contained no transmembrane segments 55 . The parameters for analysis with SignalP were as follows: organism group, Eukaryotes; D-cutoff values (optimize the performance and affect sensitivity), default; method, input sequences do not include transmembrane segments. Third, we predicted the secondary structures of both the wild-type and mutant proteins using the online software PSIPRED (v3.3) 56 . Fourth, we used SWISS-MODEL to predict the tertiary structure of each protein and found that the mutations were not within the range of modelled residues, except p.R291H in PLCD3.
We searched the three-dimensional (3D) structure deposited in the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB). We found that only GATA3 had X-ray-derived 3D structure, but p.A20S in GATA3 was not in the fragment of unknown structure. Fifth, based on modelled 3D structure of PLCD3, we used Swiss-PdbViewer 4.1 (ref. 57) to view the effect of p.R291H on the protein PLCD3. We downloaded Q8N3E9-PLCD3 protein from SWISS-MODEL repository and analysed the wild-type and mutant proteins using parameters as follows: minimum energy, residues within six angstroms to the p.R291H, secondary structure as ribbon format, colourful secondary structure by types, computing H-bonds and van der Waals.
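For the ear-tissue qPCR described in the Methods above, relative expression was computed with the 2^−ΔCt normalisation against GAPDH. A minimal sketch of that calculation is given below; the Ct values are invented purely for illustration.

```python
import numpy as np

def relative_expression(ct_target, ct_reference):
    """Relative expression by the 2^(-dCt) method, dCt = Ct(target) - Ct(reference gene)."""
    d_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)
    return 2.0 ** (-d_ct)

# Hypothetical triplicate Ct values for one candidate gene and the GAPDH control
ct_gene  = [27.8, 27.6, 27.9]
ct_gapdh = [18.1, 18.0, 18.2]
expr = relative_expression(ct_gene, ct_gapdh)
print(expr, expr.mean(), expr.std())
```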
7,651
0001-01-01T00:00:00.000
[ "Biology", "Medicine" ]
A novel method of fuzzy fault tree analysis combined with VB program to identify and assess the risk of coal dust explosions Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. Introduction Coal dust, produced in coal mining activities, can lead to coal dust explosions (CDE), posing a serious threat to miners' occupational safety [1,2]. CDE are a major disaster accident in coal mines, often causing heavy casualties and huge economic losses [3,4]. In China, over 85% of underground coal mines face the risk of coal dust explosion and the number of casualties from CDE exceeded 5000 between 1949 and 2015 [5][6][7]. In September 2000, an extremely large CDE, caused by a gas explosion, occurred in Muchonggou coal mine in Guizhou province and led to 162 deaths and a direct economic loss of 12 million yuan [8]. In May 2004, another shocking CDE occurred in Dongfeng coal mine in Heilongjiang province, leading to the deaths of 171 miners [5]. In recorded history, the most serious CDE occurred in 1942 in Benxi coal mine, which caused the deaths of 1549 miners and left 246 injured [7]. The damage caused by the CDE is destructive and the current technique is incapable of avoiding the destruction when a CDE occurs, making an early warning system extremely important. With this concern, an effective method for identifying and assessing the risks of CDE in advance is very necessary. By identifying the causes of CDE and assessing the probability of the various causes and the CDE, we can determine the weak links of the coal mine system and take preventive measures in advance to avoid occurrences of CDE. However, few studies have been done to investigate a scientific assessment of CDE risk and obtaining the exact probability data of each basic event is almost impossible, which limits the application of conventional fault tree analysis (FTA), due to the complexity and fuzziness of coal mine environment. Fuzzy mathematics is an effective method employed to solve problems with fuzzy characteristics [9]. For example, fuzzy theory was employed to analyze pricing and retail service decisions in fuzzy uncertainty environments [10] and an optimistic decisionmaking method was proposed for optimization problem based on fuzzy mathematics [11]. 
This paper employs FTA, a powerful technique used for evaluating system reliability and accidents in other industrial fields [12][13][14][15], to assess CDE risk. In addition, a VB program is employed to overcome the large amount of computation and the artificial mistakes in the quantitative analysis process, which limit the direct application of conventional FTA. Against this background, the authors propose a novel method combining fuzzy set theory and a VB program for identifying and assessing CDE risk, which could play an important role in the prevention of CDE accidents. In this paper, we first give a brief introduction to the method of fault tree analysis combined with fuzzy set theory and construct the fault tree for CDE. Then the qualitative and quantitative analyses of the CDE fault tree are conducted. Subsequently, the VB program is proposed as a tool to simplify the analysis procedure and improve analysis efficiency. Finally, a case study is carried out to verify the effectiveness of the proposed novel method. General idea and construction of CDE fault tree General procedure of the novel method In this paper, fuzzy set theory is introduced to overcome the restrictions of conventional FTA. Fault tree analysis combined with fuzzy set theory has been proven effective in solving such problems [16,17], and some researchers have applied this method to analyze accidents [18,19]. The procedure and the principle of the proposed methodology are presented in Fig 1. Based on the explosion mechanism and influence factors, a fault tree for CDE can be constructed, and then arithmetic operations on intuitionistic trapezoidal fuzzy numbers are used to calculate the failure probability data for basic events, which are expressed as intuitionistic trapezoidal fuzzy numbers through expert elicitation. With the exact probability data of each basic event, the further quantitative analysis of the FTA of CDE can be carried out. In addition, the Visual Basic (VB) program is proposed to overcome the inherent drawback of conventional FTA; it not only analyzes the risk quickly but also reduces artificial mistakes. At last, a case study is implemented to illustrate the effectiveness of this proposed method. Construction of CDE fault tree FTA is a deductive and powerful method for evaluating coal mine system safety and identifying the potential causes that lead to the undesired CDE. In this paper, the fault tree starts with the CDE and works backwards towards three intermediate events that must occur together: 1. "the concentration of coal dust reaching the minimum explosive concentration", 2. "high temperature ignition source", and 3. "the coal dust being explosive". We continue to develop the fault tree until all branches have been terminated by 50 basic events, which are shown in Table 1. Finally, a complete CDE fault tree is constructed, as shown in Fig 2A, 2B and 2C. Qualitative analysis Aiming to identify the minimal cut sets (MCSs) and the minimal path sets (MPSs), which are the minimal combinations of basic events, qualitative analysis is performed. A CDE will occur if the basic events in one MCS fail together, and we can identify the various paths that lead to the CDE occurrence from the MCSs. Once a CDE occurs, it is convenient to identify the causes with a better understanding of the MCSs; a toy illustration of how MCSs are enumerated is given below.
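To make the notion of a minimal cut set concrete, the sketch below enumerates the MCSs of a miniature fault tree with the same top-level structure as the CDE tree (an AND of the three intermediate events), but with only six placeholder basic events instead of the fifty used in this paper; the actual analysis uses the Fussell-Vesely algorithm and Boolean reduction rather than brute force.

```python
from itertools import combinations

# Toy fault tree: CDE = AND(concentration, ignition, explosive dust),
# each intermediate event an OR of two placeholder basic events X1..X6.
def top_event(up):                      # up: set of basic events that have occurred
    concentration = "X1" in up or "X2" in up
    ignition      = "X3" in up or "X4" in up
    explosive     = "X5" in up or "X6" in up
    return concentration and ignition and explosive

basic_events = ["X1", "X2", "X3", "X4", "X5", "X6"]

# All cut sets: subsets of basic events whose joint occurrence triggers the top event.
cut_sets = [set(c) for r in range(1, len(basic_events) + 1)
            for c in combinations(basic_events, r) if top_event(set(c))]

# A cut set is minimal if no proper subset of it is also a cut set.
mcs = [c for c in cut_sets if not any(other < c for other in cut_sets)]
print(sorted(tuple(sorted(c)) for c in mcs))
# -> eight MCSs of size 3, e.g. ('X1', 'X3', 'X5') ... ('X2', 'X4', 'X6')
```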
Besides, the most critical MCS, which leads to the occurrence of a CDE with the greatest possibility, can be identified, and some effective measures can be proposed based on the MCSs. In contrast, the CDE will not occur if the basic events in one MPS never fail together, so the MPSs can help us discover preventive measures. With the help of quantitative analysis, preventing the occurrence of the basic events in the MPS whose occurrence possibility is the highest can be adopted as the main preventive method, which can prevent the occurrence of CDE with high performance. By using a combination of the Fussell-Vesely algorithm and the rules of Boolean algebra [20,21], MCSs and MPSs can be obtained from Eq (1) and Eq (2), respectively. Quantitative analysis Trapezoidal fuzzy numbers to define the possibility of basic events. Conventional FTA requires that the basic events be represented by exact values of failure probabilities. However, exact values of the failure probabilities are difficult to obtain due to physical constraints. To overcome this limitation, fuzzy set theory is employed. The concept of the fuzzy set was introduced by Zadeh [22], and fuzzy set theory is widely used to deal with imprecise and vague information. In this paper, the probability of basic events for the CDE fault tree is described by intuitionistic trapezoidal fuzzy numbers given by a quadruple (a1, a2, a3, a4). Aiming to obtain the corresponding intuitionistic fuzzy numbers, we need to incorporate expert judgment into the FTA study. The expert elicitation method involves the direct estimation of probability by specialists in relevant fields, so the estimated failure probability values are closer to the real values. When judging the probability of basic events, experts usually give judgment language such as "equally", "high", "low", and so on. Therefore, we need to convert the linguistic terms into the corresponding fuzzy numbers through the corresponding membership function. Aggregation stage. In this paper, three experts in relevant fields are invited to judge the possibility of basic events occurring. As they have different backgrounds, the opinions of the experts differ for the same basic events, and it is necessary to aggregate the opinion of each expert to reach a consensus. The aggregation stage can be divided into four steps. Step 1 Weighting factor calculation. Each expert expresses his or her opinions of the basic events based on his or her background: professional experience, educational or technical qualification, and professional position. The rating of these judgments is necessary due to the differences in background factors. The weighting scores of experts are defined as shown in Table 2. In this paper, the synthetically relative deviation distance is used to calculate the weighting factor for each expert. a. The construction of the expert scores matrix. Definition 1: Let u_ij be the score of expert i (i = 1,2,3) on item j (j = 1,2,3), where "item 1" represents the "Professional position", "item 2" represents the "Professional experience (years)", and "item 3" represents the "Educational or technical qualification". Then we can acquire the expert scores matrix.
b. The construction of the relative deviation distance matrix. Definition 2: Let δ_ij be the relative deviation distance of item j (j = 1,2,3) of expert i (i = 1,2,3). Then δ_ij can be obtained by Eq (3), where u_imax = max{u_1i, u_2i, u_3i}, u_imin = min{u_1i, u_2i, u_3i}, and Δ represents the relative deviation distance matrix. c. Assigning the weighting factor of each evaluation factor. It is necessary to consider the relative worth of each evaluation factor because one factor may be more important than another in reality. In this paper, the weighting factor of each evaluation factor is assigned by an expert who is responsible for and extremely familiar with the system, where b_j represents the weighting factor of item j (j = 1,2,3). d. The calculation of the synthetically relative deviation distance. The synthetically relative deviation distance can be calculated from Eq (4), where d_i represents the synthetically relative deviation distance of expert i (i = 1,2,3). e. The calculation of the weighting factor of each expert. The weighting factor of each expert can be obtained using Eq (5), where D(i) represents the weighting factor of expert i (i = 1,2,3). Step 2 Relative agreement calculations. Due to their different backgrounds, the opinions of the experts carry different weighting factors. To ensure that the opinions of all experts are taken into consideration, a relative agreement calculation is necessary during the aggregation weighting calculation process. The relative agreement calculation process is divided into three steps: (1) Similarity measure calculation. This step calculates the similarity measure of two opinions on the same basic event. The opinions are described by a set of fuzzy numbers. In this paper, the similarity is obtained by calculating the arithmetic average minimum similarity degree, as expressed in Eq (6), where φ(i,j) represents the similarity measure of expert i (i = 1,2,3) and expert j (j = 1,2,3) on the same basic event, μ_i(k) represents the k-th component of the trapezoidal fuzzy number of expert i (i = 1,2,3), and μ_j(k) represents the k-th component of the trapezoidal fuzzy number of expert j (j = 1,2,3). (2) Average agreement calculation. The average agreement (AA) can be obtained by Eq (7), where AA(i) represents the average agreement of expert i (i = 1,2,3). (3) Relative agreement calculation. The relative agreement (RA) of each expert is calculated by Eq (8), where RA(i) represents the relative agreement of expert i (i = 1,2,3). Step 3 Aggregation weighting (AW) calculation. To balance the weighting factor and the relative agreement, the aggregation weighting is calculated by Eq (9), where AW(i) represents the aggregation weighting of expert i (i = 1,2,3), and α represents the relaxation factor of this proposed method, which reflects the importance of D(i) relative to RA(i). Step 4 Calculation of the aggregated results of the experts' judgment (EG). Following the above steps, we acquire a set of aggregated fuzzy numbers describing the probability of basic events, and the aggregated results can be calculated using Eq (10), where μ_EG represents the aggregated result of the experts' judgment, and μ_i represents the judgment of expert i (i = 1,2,3). Defuzzification process. In this paper, in order to obtain quantifiable results for the probability of basic events, the center of area defuzzification technique [21,23] is employed to defuzzify the fuzzy numbers.
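A compact numerical sketch of the last two stages is given below: the experts' trapezoidal numbers are combined with their aggregation weights as in Eq (10), the aggregated number is defuzzified by a numerical centre-of-area integral (Eq (11) gives the closed form), and the crisp possibility is converted to a probability. The conversion shown is the Onisawa-type formula commonly used with this approach; it is stated here as an assumption, since Eq (12) itself did not survive extraction, and the opinion values and weights are invented for illustration.

```python
import numpy as np

def aggregate(fuzzy_opinions, weights):
    """Component-wise, aggregation-weight-weighted sum of trapezoidal fuzzy numbers (Eq 10)."""
    fuzzy_opinions = np.asarray(fuzzy_opinions, dtype=float)   # shape (n_experts, 4)
    weights = np.asarray(weights, dtype=float)                 # aggregation weights AW(i)
    return weights @ fuzzy_opinions                            # aggregated (a1, a2, a3, a4)

def trapezoid_membership(x, a1, a2, a3, a4):
    return np.interp(x, [a1, a2, a3, a4], [0.0, 1.0, 1.0, 0.0])

def defuzzify_coa(a1, a2, a3, a4, n=2001):
    """Centre-of-area defuzzification by numerical integration."""
    x = np.linspace(a1, a4, n)
    mu = trapezoid_membership(x, a1, a2, a3, a4)
    return np.trapz(x * mu, x) / np.trapz(mu, x)

def possibility_to_probability(p_star):
    """Assumed Onisawa-type conversion: P = 10^-K with K = ((1 - P*)/P*)^(1/3) * 2.301."""
    if p_star <= 0:
        return 0.0
    k = ((1.0 - p_star) / p_star) ** (1.0 / 3.0) * 2.301
    return 10.0 ** (-k)

# Three hypothetical expert opinions on one basic event and their aggregation weights
opinions = [(0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5), (0.1, 0.3, 0.4, 0.6)]
aw = [0.40, 0.35, 0.25]
agg = aggregate(opinions, aw)
p_star = defuzzify_coa(*agg)
print(agg, p_star, possibility_to_probability(p_star))
```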
Defuzzification of a trapezoidal fuzzy number μ = (a1, a2, a3, a4) can be realized from Eq (11), where P* represents the crisp possibility value of a basic event. Converting crisp possibility values into probability values. The probability values (P) can be obtained from the possibility values by Eq (12) [24]. Quantitative calculation. By the application of fuzzy set theory, the occurrence probability of basic events can be evaluated and the quantitative analysis can be carried out. In this paper, the occurrence probability of the CDE, the occurrence probabilities of the MCSs and MPSs, and the importance of each basic event are calculated. The occurrence probability of CDE calculation (CDEOP). We can easily calculate the CDEOP once the occurrence probability of each basic event is acquired. The occurrence probability tells us the CDE situation of the coal mine, and we can then decide whether to take measures to avoid a CDE immediately. The occurrence probability of MCS and MPS calculation. The occurrence probabilities of the MCSs and MPSs are very important for deciding on targeted measures. By calculating the occurrence probabilities of the MCSs (MCSOP), we can identify the most crucial MCSs for the undesired CDE and take effective measures to avoid the CDE. By calculating the occurrence probabilities of the MPSs (MPSOP), we can take the corresponding measures to avoid the CDE. The importance of basic event calculation (IOBE). The IOBE calculation is employed to evaluate the contribution of each basic event to the occurrence of the CDE. Using this measure, we determine the relatively important basic events and take corresponding measures to avoid the occurrence of these basic events, which will effectively avoid the CDE. The IOBE can be acquired by calculating the occurrence probability of the CDE while the occurrence probability of the given basic event is set to 0, as shown in Eq (13), where PI_j represents the value of the importance of basic event j. Obviously, the smaller PI_j is, the more important basic event j is. Rapid assessment by the VB program The CDE fault tree includes fifty basic events, from which 1,120 MCSs are obtained, so we need to employ a computer program to simplify the analysis process. VB is a simple and convenient tool for small software programs. In addition, it is convenient to download and install. This paper employs a VB program to simplify the analysis process. The program not only performs the calculation of the CDEOP, MCSOP, MPSOP, and IOBE, but also ranks the MCSOPs. Due to the actual requirements, the program ranks the top 10 MCSs consisting of 3 basic events, 5 basic events, and 6 basic events, respectively. In the program, MCSOP3 represents the occurrence probability of MCSs consisting of 3 basic events, MCSOP5 represents the occurrence probability of MCSs consisting of 5 basic events, and MCSOP6 represents the occurrence probability of MCSs consisting of 6 basic events. The form of the VB program is shown in Fig 4. Case study To illustrate the proposed method, the Zhuxianzhuang Coal Mine, which has a risk of CDE, is taken as an example. Background The Zhuxianzhuang Coal Mine is located in the Huaibei mining area, eastern China. It is a modern large-scale mine with a 2.45-million-tonne production capacity. There are hundreds of miners working underground during a working shift, and there are three villages, a primary school, and a secondary school around this coal mine. If a CDE occurs, there will be catastrophic results.
Therefore, it is necessary to analyze the potential danger of a CDE for this coal mine. CDE risk identification and assessment Step 1. Experts' profiles. As mentioned above, three experts from different backgrounds are invited to evaluate the occurrence probability of basic events, and the weights of the experts are not equal (Table 2). In order to calculate the weighting factor for each expert, a profile of each expert is necessary. The experts' profiles and corresponding factor scores are shown in Table 3. Step 2. Experts' judgments. The experts' judgments on the basic events are shown in Table 4. Step 3. Assigning the values of α, b1, b2, and b3. The exact values of α, b1, b2, and b3 differ for different coal mine systems, and it is necessary to assign values for α, b1, b2, and b3 before the analysis is performed. Step 4. Data input. Before the VB program analysis, we need to input the relevant data. Fig 5 shows the form of the VB program after data input. Step 5. The results of the VB program analysis. The results of the VB program analysis are shown in Fig 6. Based on the IOBE values, we find that the most important basic event is X2 and the least important basic event is X7. The top 6 basic events are X2, X6, X10, X23, X25, and X31. From the occurrence possibility values of the MCSs, we know that the MCSs of (X1, X2, X31) and (X1, X6, X31) are more critical. Based on the values of the MPSOPs, MPS1 and MPS2 are the better choices for avoiding the occurrence of a CDE. Suggestions From the results of the VB program analysis, the following suggestions are provided: 1. It is better to take measures to avoid the occurrence of basic events X2, X6, X10, X23, X25 and X31. The IOBE values of the above basic events are relatively small, which indicates that these basic events are the dominant factors resulting in a CDE. Besides, the MCSs of (X1, X2, X31) and (X1, X6, X31) are the main paths leading to a CDE. As long as one of the basic events in an MCS does not occur, that MCS will not contribute to the occurrence of a CDE. We cannot change the explosiveness of coal dust, so the occurrence of X1 cannot be prevented. Hence, preventing the occurrence of these basic events can greatly reduce the occurrence possibility of a CDE. Blasting workers should obey the blasting rules and operate correctly. Technicians should take measures to control the moisture in the air. In addition, some necessary measures should be taken to avoid friction fire and prevent gas explosions. Managers should educate the workers about safety and the significance of correctly operating the equipment. They also need to clean up the settled coal dust in underground roadways in time. 2. When taking prevention measures, the MPSs MPS1 and MPS2 are the better choices in our opinion. Because the occurrence probabilities of MPS1 and MPS2 are larger than those of the others, the prevention measures from MPS1 and MPS2 are more effective and more easily realized. As long as all the basic events in the MPS do not occur, the CDE will not occur. Comparison with safety checklist analysis The conventional method to evaluate CDE risk is safety checklist analysis, in which experts judge the condition of the potential risks listed in the checklist. This evaluation method is simple and easy to master. However, this method is only suitable for simple qualitative analysis and cannot be employed to identify the paths leading to the occurrence of a CDE.
Without quantitative analysis, this method fails to show the risk level of the system in intuitive numbers and provides only a rough understanding of the safety status of the system. Besides, precautionary measures and other important measures cannot be put forward, and the employed measures are not aimed at the outstanding problems or potential risks. What is more, the safety checklist analysis fails to aggregate the judgements of experts to reach a consensus, which makes the judgements more subjective. By comparison, the novel method proposed in this paper can overcome the above problems of the safety checklist and is more scientific, objective and efficient. Conclusions This paper proposed a novel method of fuzzy fault tree analysis combined with a VB program to identify and assess the risks of CDE. The conclusions can be drawn as follows: a. The CDE fault tree constructed in this paper can reflect the CDE process comprehensively, taking into consideration the influence of gas concentration and fugitive coal dust, which are easily overlooked. Additionally, the CDE fault tree can help us identify the potential causes and determine the various paths that easily lead to a CDE. b. With the application of fuzzy set theory and expert elicitation to conventional FTA, the exact value of the IOBE can be obtained, and the quantitative analysis of the CDE fault tree can be reasonably carried out. The synthetically relative deviation distance technique makes this quantitative analysis more effective. c. The VB program technique remarkably simplifies the analysis process and can work out the calculation results in a few minutes. Due to the complexity of the coal mine system, the fuzzy fault tree analysis of CDE is very difficult and time-consuming. The VB program technique makes it possible to quickly solve the complex fault tree for CDE. d. This novel method was successfully applied to assess the CDE risk in the Zhuxianzhuang Coal Mine, and some valuable comments and suggestions were provided to the decision makers. The results have verified the effectiveness of this method. Therefore, it is believed that this method can be used for CDE accident prevention and to protect the safety of workers.
5,273.8
2017-08-09T00:00:00.000
[ "Computer Science" ]
Imaging Characteristics of the Disturbance Flow Field Surrounding a Hypersonic Target: A disturbance flow field arises naturally around a hypersonic target flying in near space. In situations where traditional infrared and radar systems lose effectiveness, space-based optical detection of this surrounding flow can serve as an alternative method for detecting high-speed targets. This paper presents a remote sensing imaging analysis of the disturbance flow field surrounding a hypersonic target at different flight altitudes and Mach numbers. Utilizing Fourier Optics and Background-Oriented Schlieren, in conjunction with the fourth-order Runge-Kutta ray tracing algorithm, the imaging blurring and imaging deviation of three typical backgrounds under the influence of the disturbance flow field are obtained. Additionally, the study analyzes the influence of the flight conditions and the parameters of the imaging system on the imaging characteristics, and provides optical design recommendations. The results indicate that the presence of the disturbance flow field leads to varying degrees of visually apparent blurring effects and indiscernible deviation effects on the background images. Furthermore, the profiles of the disturbance flow field are extracted, in agreement with current experimental research. This study verifies the feasibility of space-based optical detection of hypersonic targets through disturbance flow field remote sensing imaging and contributes to the advancement of imaging research in this field. Introduction Hypersonic targets in near space are characterized by high flight speed, excellent maneuverability, and great stealth; the importance of their efficient detection has been emphasized, and numerous studies on infrared [1,2] and radar [3,4] technologies for space-based and ground-based detection have been conducted. When an aircraft flies in near space at hypersonic speed, the surrounding air is violently compressed, generating a high-temperature, high-pressure, and high-density flow field [5]. The surface layers of the aircraft are ablated into gas under the effect of the high-temperature and high-pressure flow field and ionized together with the surrounding air, forming a plasma sheath around the aircraft. This plasma sheath refracts and absorbs electromagnetic waves, creating challenges for radar detection [6,7]. Furthermore, the low flight trajectory of the target in near space and the curvature of the Earth also impose difficulties on radar detection. In some cases, infrared detection proves ineffective due to the aircraft's use of gliding without powered propulsion, as well as advancements in infrared stealth materials. However, throughout the flight of the aircraft, the disturbance flow field inevitably arises, offering an opportunity for measurement and detection. The non-uniform refractive index distribution and variation of the flow field affect the transmission of target light, resulting in blurring and deviation of the background image [8,9]. Current mechanism analyses and experimental explorations focus on the impact of the flow field near the optical dome of hypersonic guided missiles on the transmission of target light [10][11][12], which does not capture optical transmission through the entire disturbance flow field. For studying the optical transmission through the entire disturbance flow field, an effective approach is to visualize it.
In April 2011, October 2014, February 2015, and December 2018, NASA [13] conducted a series of flight tests against the desert flora below 10 km and visualized the disturbance flow fields of single and dual transonic aircraft through Background-Oriented Schlieren. The Background-Oriented Schlieren (BOS) technique is a flow visualization method based on the deflection of light after passing through the flow field; it measures the variation of the flow field density by computer image processing and has the advantage of a simple optical system, ensuring easy and fast measurement. Currently, the BOS technique is mainly applied to the parameter measurement of flow fields [14,15], flow visualization [16], and the measurement of optical system transfer functions [17]. NASA has indeed visualized the flow field, but its flight tests were conducted on transonic aircraft at altitudes below 10 km and at Mach numbers of approximately 1.0, where the atmospheric composition and conditions are quite different from those of hypersonic aircraft in near space [18]. Additionally, NASA has not disclosed specific details about these experiments, and it is also difficult to perform BOS experiments in near space. Therefore, it is essential to investigate the imaging characteristics of the disturbance flow field surrounding hypersonic targets. This paper presents a comprehensive investigation into the imaging characteristics of the disturbance flow field. The disturbance flow field of a hypersonic target under diverse flight conditions was simulated using the computational fluid dynamics software ANSYS Fluent. Ray tracing was performed using the fourth-order Runge-Kutta algorithm, enabling the calculation of essential optical parameters such as refractive index gradients, phase differences, and deflection angles. Moreover, the imaging blurring and imaging deviation were analyzed against three representative backgrounds (desert, ocean, and city) using Fourier Optics and Background-Oriented Schlieren techniques. Subsequently, the correlation between imaging blurring, imaging deviation, and the orbit height, pixel size, and focal length of the space-based imaging detection system was established. The results demonstrate that the presence of the disturbance flow field leads to varying degrees of visually perceptible blurring effects and indiscernible deviation effects in the background image under different flight conditions. The profiles of the disturbance flow field were successfully extracted from the deviated image, demonstrating the feasibility of detecting hypersonic targets through the disturbance flow field. This approach holds promise as an alternative scheme in situations where infrared and radar detection methods are not available. The remainder of this paper is organized as follows. Section 2 describes the research method for the imaging characteristics of the disturbance flow field surrounding a hypersonic target, based on Fourier Optics and Background-Oriented Schlieren. Section 3 presents the results of the imaging blurring and imaging deviation caused by the disturbance flow field and analyzes the influence of the relevant parameters. Section 4 presents the conclusions drawn from the study. Flow Density Calculation The disturbance flow field of a 2D conical target was simulated using the computational fluid dynamics software ANSYS Fluent [10]. The target's geometric model was generated in AutoCAD and then imported into ANSYS ICEM CFD for meshing.
The majority of the grid quality values were found to be greater than 0.8, which meets the CFD calculation standard. Figure 1 displays the geometric model of the target used in the simulations. The CFD calculations of the disturbance flow field were performed under different flight conditions, as detailed in Table 1. The RANS solver was employed to calculate the mean density distribution ρ(x, y, z) of the disturbance flow field, as depicted in Figure 2a. Flow Refractive Index Computation and Ray Tracing Algorithm The refractive index of the flow can be derived from the flow density via the Gladstone-Dale relationship [19], n = 1 + ρ K_GD (1), where n is the refractive index, ρ is the density in kg/m³, and K_GD is a coefficient (in m³/kg) that can be expressed as [8] K_GD = 2.23 × 10⁻⁴ (1 + 7.52 × 10³/λ²) (2), where λ is the light wavelength in nm; K_GD varies weakly with the wavelength. The refractive index field calculated by Equation (2) is shown in Figure 2b, where λ is set to 532 nm. The propagation path of light in any medium with a nonuniform refractive index distribution is governed by the ray equation [8], d/ds (n dr/ds) = ∇n (3), where s is the arc length of the ray propagation path, r is the position vector of the ray, and ∇n is the refractive index gradient. Introducing the ray vector T = n(dr/ds), Equation (3) can be rewritten as a system of first-order differential equations (4). Equation (4) can only be solved analytically when the refractive index satisfies a particular distribution; for a disturbance flow field with random changes in the refractive index, an approximate solution is typically obtained numerically. By introducing a new parameter dt = ds/n, Equation (4) becomes dr/dt = T, dT/dt = n∇n (5), where the components of T are given by the direction cosines cos α, cos β, cos γ, with α, β, and γ the angles between the light ray and the x-, y-, and z-axes, respectively. Defining D(r) = n∇n, the fourth-order Runge-Kutta method [20] can be applied to solve Equation (5). The approximate solution is expressed by Equation (6), where K_j and L_j (j = 1, 2, 3, 4) are obtained from Equations (7)-(10), with h the step length. If the initial position vector r₀ = (x₀, y₀, z₀), the initial ray vector T₀ = n[cos α₀, cos β₀, cos γ₀], and the step length h are specified, the propagation path of the light can be obtained by iterating Equations (6)-(10). Considering the axisymmetry of the model, the two-dimensional refractive index field can be transformed into a three-dimensional refractive index field for ray tracing. To facilitate the ray tracing procedure, the gradient of the refractive index must be calculated at each point in the disturbance flow field; the gradient ∇n is computed using the Barron gradient operator [21]. As the mesh nodes cannot contain every point in the disturbance flow field, the refractive index (or its gradient) is interpolated using a distance-weighted average over the 8 nearest nodes [22]. Phase Difference and Imaging Blurring Derivation As light propagates through the disturbance flow field, the optical distance through which it passes can be expressed by the optical path length [23], OPL = Σᵢ nᵢ h, where nᵢ is the refractive index at rᵢ and h is the step length of ray tracing. The optical path difference of the light, used to evaluate the phase difference, can be expressed as [23] OPD = OPL − OPL₀, where OPL₀ is the optical path length that the light propagates in free space.
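As an illustration of the ray-tracing step above, the following sketch integrates dr/dt = T and dT/dt = D(r) = n∇n with the classical fourth-order Runge-Kutta stages of Equations (6)-(10). The interpolation callables `n_interp` and `grad_n_interp` are hypothetical stand-ins for the distance-weighted interpolation of the CFD refractive index field; this is a minimal sketch, not the paper's implementation.

```python
# Minimal RK4 ray tracer for dr/dt = T, dT/dt = D(r) = n * grad(n).
import numpy as np

def trace_ray(r0, T0, h, n_steps, n_interp, grad_n_interp):
    """Propagate one ray through a refractive index field with step h = dt."""
    r, T = np.array(r0, dtype=float), np.array(T0, dtype=float)
    path = [r.copy()]
    D = lambda p: n_interp(p) * grad_n_interp(p)   # right-hand side for T
    for _ in range(n_steps):
        K1 = h * T;            L1 = h * D(r)
        K2 = h * (T + L1 / 2); L2 = h * D(r + K1 / 2)
        K3 = h * (T + L2 / 2); L3 = h * D(r + K2 / 2)
        K4 = h * (T + L3);     L4 = h * D(r + K3)
        r = r + (K1 + 2 * K2 + 2 * K3 + K4) / 6
        T = T + (L1 + 2 * L2 + 2 * L3 + L4) / 6
        path.append(r.copy())
    return np.array(path), T

# Sanity check: in a uniform medium (grad n = 0) the ray stays straight.
n_uniform = lambda p: 1.0
grad_zero = lambda p: np.zeros(3)
alpha0, beta0, gamma0 = np.pi / 2, np.pi / 2, 0.0      # ray along +z
T0 = 1.0 * np.array([np.cos(alpha0), np.cos(beta0), np.cos(gamma0)])
path, _ = trace_ray([0, 0, 0], T0, h=0.01, n_steps=100,
                    n_interp=n_uniform, grad_n_interp=grad_zero)
```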
Thus, the phase difference can be expressed as [23] ϕ(x, y) = (2π/λ) OPD (16), where λ is the wavelength of light. The pupil function is determined by calculating the phase difference generated by all rays on the image plane and can be expressed as [24] P(x, y) = A(x, y) e^{iϕ(x, y)}, where A(x, y) is the amplitude distribution inside a pupil of diameter D. According to the Huygens principle, the far-field approximation of the amplitude distribution U(x′, y′) of the light within the pupil on the image plane is obtained, where f is the focal length of the optical system; evidently, U(x′, y′) is the Fourier transform of P(x, y). The point spread function can be expressed as [24] PSF(x′, y′) = |U(x′, y′)|². Applying the Fourier transform to the point spread function, the optical transfer function is obtained [24]: OTF(f_x, f_y) = ∬ PSF(x′, y′) e^{−i2π(f_x x′ + f_y y′)} dx′ dy′ (20). The amplitude distribution of the image plane can be expressed as [24] I(x₀, y₀) = O(x₀, y₀) * PSF (21), where O(x₀, y₀) is the amplitude distribution of the object plane and * denotes convolution. The peak signal-to-noise ratio is used to evaluate the quality of the blurred image relative to the original image and can be expressed as [25] PSNR = 10 log₁₀(L²/MSE) (22), where L is the maximum valid value for a pixel and MSE is the mean squared error of the image, MSE = (1/MN) Σᵢ Σⱼ [f̂(i, j) − f(i, j)]² (23), where f̂(i, j) and f(i, j) represent the blurred image and the original image, respectively, and M and N represent the length and width of the image. Background-Oriented Schlieren and Imaging Deviation Computation Background-Oriented Schlieren (BOS) uses the deflection of light to identify changes in the refractive index (or density) of a flow field. When light passes through the disturbance flow field, it is deflected from its original path, causing a shift in the position of the light incident on the CCD camera. This shift results in a disparity between the background images captured by the CCD camera in the presence of a disturbance flow field (the experimental image) and in its absence (the reference image) [26]. In this study, Background-Oriented Schlieren is employed for the imaging deviation analysis. Figure 3 illustrates the schematic of Background-Oriented Schlieren (BOS) [26]. In the figure, Z_O represents the distance from the background to the center of the disturbance flow field, W denotes the length of the disturbance flow field, Z_B corresponds to the distance between the background and the camera lens, and Z_I signifies the distance from the lens to the imaging plane, which is approximately equal to the focal length f. The dotted line in the figure represents the path of light in the absence of a disturbance flow field, while the solid line depicts the actual path of light propagation through the disturbance flow field. The y-direction deflection angle between these two rays is denoted by ε_y. The parameter Δy represents the displacement in the y direction between the corresponding points on the reference image and the experimental image, and Δy′ denotes the virtual shift of Δy relative to the background. Similarly, ε_x represents the deflection angle in the x direction, Δx is the displacement in the x direction, and Δx′ is the virtual displacement of Δx.
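The image-quality metric of Equations (22) and (23) is straightforward to reproduce; the snippet below is a minimal sketch with illustrative array names.

```python
# PSNR between an original and a blurred grayscale image, Eqs. (22)-(23).
import numpy as np

def psnr(original, blurred, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images."""
    original = np.asarray(original, dtype=float)
    blurred = np.asarray(blurred, dtype=float)
    mse = np.mean((blurred - original) ** 2)   # Equation (23)
    if mse == 0:
        return np.inf                          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)    # Equation (22)
```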
After ray tracing, the x-direction and y-direction deflection angles can be expressed as in [26] (Equations (24) and (25)). Based on the geometry in the diagram, the relationships between Δx and Δx′ and between Δy and Δy′ follow [26]; for a minimal deflection angle, tan ε ≈ ε, so the displacement can be expressed approximately as in [26] (Equations (26) and (27)). The corresponding pixel shift on the CCD plane is obtained by dividing the displacement on the image plane by the pixel size a of the CCD (Equation (28)). The relationship between the gray values of the deviated image and the original image satisfies [27] I_o(x + Δu, y + Δv) = I_d(x, y). Using the given image as the original image I_o and the displacement field (Δu, Δv) as the displacement label, the deviated image I_d can be generated by interpolation. The imaging characteristics of blurring and deviation were computed for an ideal optical system with the parameters listed in Table 2. The Results of Imaging Blurring The initial position vector r₀ is fixed at the bottom of the disturbance flow field, and the incident rays are directed along the positive z axis. The phase difference after ray tracing is obtained from Equation (16). Figure 4 shows the distribution of the phase difference under different flight conditions (refer to Table 1 for flight conditions (a)-(f)). The phase difference exhibits a decreasing trend with increasing flight altitude due to the thinning of the atmosphere at higher altitudes, which reduces the refractive index and the optical path difference of the light. Similarly, as the Mach number progressively rises, the atmosphere experiences enhanced compression, leading to an increase in the phase difference. The phase difference is influenced by the density and refractive index distribution within the disturbance flow field: the greater the fluctuation of the refractive index along the propagation path, the larger the optical path difference of the light. Remarkably larger values of the phase difference are symmetrically distributed downstream of the disturbance flow field, where the field extends extensively, inducing larger optical path differences. The negative phase difference arises because the atmospheric density in the wake of the target is lower than in free space, which is consistent with the principles of fluid dynamics. Figures 5-7 show the results of imaging blurring, computed using Equation (21), against three typical backgrounds: desert, ocean, and city. In comparison to the corresponding original images, the resultant images in Figures 5-7 exhibit varying degrees of blurriness, and as the flight altitude increases, the background noise gradually diminishes. The stark contrast between the resultant images and their original counterparts significantly aids the identification of space-based remote sensing images. As presented in Table 3, the peak signal-to-noise ratio (PSNR) is computed for each image to quantify the degree of imaging blurring more precisely. The results reveal that across the three typical backgrounds, the PSNR improves with increasing flight altitude and Mach number. With reference to Figure 4, at a Mach number of 12 the phase difference attains its maximum value of 252.0 rad, corresponding to a wave aberration of 40.1λ. However, the disturbance flow field area is the smallest within the detection range, leading to results that run counter to the phase difference trend.
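For orientation, the following sketch chains the standard BOS relations: a small-angle displacement at the background plane, magnification onto the image plane (with Z_I ≈ f), and division by the pixel size. The geometry follows Figure 3, but the exact forms of Equations (24)-(28) are not reproduced here, so the relations and numbers below should be read as assumptions.

```python
# Hedged sketch of the BOS pixel-shift geometry (Figure 3).
import numpy as np

def pixel_shift(eps, Z_O, Z_B, f, a):
    """eps: deflection angle [rad]; Z_O: background-to-flow distance [m];
    Z_B: background-to-lens distance [m]; f: focal length [m]; a: pixel [m]."""
    dy = Z_O * np.tan(eps)      # apparent displacement at the background
    dy_image = dy * f / Z_B     # scaled by the imaging magnification
    return dy_image / a         # shift expressed in CCD pixels

# Example: 20 km flight altitude, 400 km orbit, f = 1.0 m, 5 um pixels.
print(pixel_shift(eps=1e-6, Z_O=20e3, Z_B=400e3, f=1.0, a=5e-6))  # sub-pixel
```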
This discrepancy can be attributed to fluid dynamics: the angle of the shock wave decreases as the Mach number increases, resulting in the smallest flow field area at 12 Ma within the detection range. The Results of Imaging Deviation According to Table 2 and Figure 3, the distance from the background to the center of the disturbance flow field, Z_O, corresponds to the flight altitude and is set to 20 km, 30 km, and 50 km, respectively. The length of the disturbance flow field W is 6 m. The distance from the background to the camera lens Z_B is the orbit height and is set to 400 km. The pixel size of the CCD camera is 5 µm and the focal length of the camera lens is 1.0 m. Figures 8 and 9 show the x-direction and y-direction deflection angles under different flight conditions, calculated by Equations (24)-(28); positive values represent deflection towards the positive x or y direction, while negative values represent the opposite direction. As the flight altitude increases, the deflection angle decreases, and as the Mach number increases, the deflection angle increases, following the same pattern as the phase difference. In addition, the y-direction deflection angle is an order of magnitude greater than the x-direction one, indicating that the former has a more pronounced influence on the imaging deviation. The deflection angle is also affected by the density (refractive index) distribution of the disturbance flow field: the greater the fluctuation in the refractive index along the propagation path, the larger the deflection angle of the light. The larger values of the deflection angle are likewise distributed symmetrically downstream of the disturbance flow field. For a given Mach number or flight altitude, the disturbance flow field attains its highest density at 20 km or 12 Ma, resulting in the greatest imaging deviation. Figure 8. The x-direction deflection angle distribution; refer to Table 1 for flight conditions (a)-(f). Figure 9. The y-direction deflection angle distribution; refer to Table 1 for flight conditions (a)-(f). The pixel shift is calculated by dividing the displacement by the CCD pixel size. The displacement results indicate that the majority of the pixel shifts are at the sub-pixel level. Figures 10-12 illustrate the imaging deviation results, based on the pixel shift, for the desert, ocean, and city backgrounds, respectively. For comparison with the natural backgrounds, Figure 13 demonstrates the imaging deviation with a speckle pattern as the background. Although no significant visual difference is observed between the deviated image and the original image, the profile of the disturbance flow field can be extracted through background subtraction. The profiles of the disturbance flow field, shown in the last row of Figures 10-13, are consistent with existing experimental studies. These profiles can reveal the presence of a hypersonic target in flight, serving as an alternative method for hypersonic target detection. The imaging deviation is evidently influenced by the background image: a more speckled background retains more information about the extracted disturbance flow field. In addition, according to the Gladstone-Dale relationship, the refractive index is insensitive to wavelength variation, allowing the infrared band to be used for nighttime imaging.
Table 4 presents the peak signal-to-noise ratio (PSNR) of the imaging deviation, which varies with flight altitude and Mach number. In contrast to the imaging blurring, the PSNR of the imaging deviation demonstrates an increasing trend with flight altitude, decreases slightly with Mach number, and is several orders of magnitude larger. As stated in Equation (28) and mentioned previously, the imaging deviation is influenced by several factors, including the deflection angles ε, the focal length f, the orbit height Z_B, the flight altitude Z_O and the pixel size a. However, the influences of the deflection angles and the flight altitude on the imaging deviation are mutually constrained: as the flight altitude increases, the atmospheric density becomes sparser, leading to a significant reduction in the deflection angles, and the magnitude of the decrease in the deflection angles far surpasses that of the increase in flight altitude. Additionally, the flight altitude is often uncontrollable; thus, our primary focus is on exploring the impact of the other parameters on the imaging deviation. It is observed that a lower orbit height leads to a larger pixel shift; however, it also reduces the detection range. Therefore, selecting a low orbit or a sun-synchronous orbit for detection is deemed appropriate. Figure 14 shows the effect of pixel size and focal length on the y-direction pixel shift under different flight conditions. When the focal length is specified, the pixel shift in the y direction increases as the pixel size decreases, whereas when the pixel size is specified, the pixel shift in the y direction increases as the focal length increases. Meanwhile, the influence of flight altitude and Mach number on the pixel shift remains consistent with the previously discussed trends. To achieve an image that preserves more deviation characteristics, the camera requires a large focal length and a small pixel size. However, it is crucial to acknowledge that neither the focal length nor the pixel size can be adjusted indefinitely: increasing the focal length leads to a larger camera system volume, which may pose challenges for the satellite and escalate research and development costs, while decreasing the pixel size reduces the luminous flux, so a trade-off exists between these two parameters. Hence, a comprehensive consideration of the system requirements and constraints is essential when determining the optimal focal length and pixel size for the camera. Summary In this study, we have thoroughly examined the remote sensing imaging of the disturbance flow field surrounding a hypersonic target under five different flight conditions. Utilizing Fourier Optics and Background-Oriented Schlieren techniques, we have presented the imaging blurring and deviation against three typical backgrounds. The imaging quality has been evaluated using the peak signal-to-noise ratio (PSNR). We have quantitatively analyzed the influences of flight conditions and imaging system parameters on the imaging characteristics. The key findings of our investigation are as follows: The disturbance flow field significantly impacts the background images, resulting in varied degrees of blurring and deviation. The profiles of the disturbance flow field extracted from the deviated images are in accordance with current experimental studies. As the flight altitude increases, the phase difference and deflection angle decrease, leading to reduced blurring of the images and a higher PSNR, owing to the lower atmospheric density at higher altitudes.
Conversely, as the Mach number increases, the phase difference and deflection angle increase due to the greater compression of the atmosphere surrounding the hypersonic target. However, the degree of image blurring decreases due to the smaller flow field area within the detection range. The majority of pixel shifts in the imaging deviation are at the sub-pixel level, making them imperceptible to the naked eye. Nevertheless, the profile of the flow field can be extracted from the deviated images. The imaging deviation is closely related to the focal length, pixel size, and orbit height of the space-based imaging system. Our quantitative analysis highlights the importance of a space-based imaging system with a large focal length and a small pixel size, while a low orbit or sun-synchronous orbit is preferred. The imaging results confirm the feasibility of detecting hypersonic targets through disturbance flow field imaging, providing an alternative detection approach when traditional infrared and radar systems lose effectiveness. Moreover, the remote sensing detection of the flow field profile around hypersonic targets not only contributes to the field of high-speed target detection but also enables an in-depth analysis and optimization of the performance and flight characteristics of hypersonic targets. This research also offers theoretical support for experimental studies in this area.
5,640.6
2023-07-31T00:00:00.000
[ "Physics" ]
Strain induced polarization chaos in a solitary VCSEL A physical curiosity at the beginning, optical chaos is now attracting increasing interest in various technological areas such as detection and ranging or secure communications, to name but a few. However, the complexity of optical chaos generators still significantly hinders their development. In this context, the generation of chaotic polarization fluctuations in a single laser diode has proven to be a significant step forward, despite being observed solely for quantum-dot vertical-cavity surface-emitting lasers (VCSELs). Here, we demonstrate experimentally that a similar polarization dynamics can be consistently obtained in quantum-well VCSELs. Indeed, by introducing anisotropic strain in the laser cavity, we successfully triggered the desired chaotic dynamics. The simplicity of the proposed approach, based on low-cost and easily available components including off-the-shelf VCSELs, paves the way to the widespread use of solitary VCSELs for chaos-based applications. A. VCSEL holder construction details In this section, we describe the custom VCSEL holder used to apply anisotropic strain on packaged VCSELs. This holder is similar to the one used in [1]; it has been designed for VCSELs in TO46 packages and might need to be adjusted for other packaging. The holder comprises a main metal plate and a lid; typical dimensions are displayed in Figure 1. Two screws are used to fix the metal lid onto the VCSEL. Then, by placing a small metal rod behind the VCSEL, we can induce anisotropic strain and tune the level of applied strain by fastening or loosening the screws that fix the metal lid. Finally, a thermistor is placed inside the metal plate and a Peltier element is glued on the back of the plate to control the temperature of the system. B. Polarization chaos statistics: residence time estimate In Figure 2, we give a rough estimate of the average residence time (also called dwell time) for the polarization chaos dynamics obtained in a stressed QW VCSEL. The dataset is relatively small, and we therefore limit our analysis to the apparent trend, keeping in mind that the accuracy of the result is not sufficient for a detailed analysis as done in [2]. The global trend for the average dwell time is quite clear, as shown in panel (a), where a good fit is obtained with a linear approximation. This outcome confirms the exponential decrease of the residence time as the current is increased. The results shown in (b) are obtained by considering the lower and upper levels separately. We can clearly observe a strong gap between the two sets of data, as a difference of about two orders of magnitude is recorded between the residence times of the two levels. Strong fluctuations are also observed but, again, the trend is clear and suggests an asymmetrical behavior as analyzed in [2]. Finally, the last plot shows the number of jumps in the recorded time series versus the injection current. For low current values, the number of jumps is small, but for current levels above 7.5 mA, hundreds of jumps per point have been recorded. 1-Estimation of the Largest Lyapunov Exponent Similarly to what has been done in [3], we used the so-called Wolf's algorithm [4] to estimate the largest Lyapunov exponent (LLE) from experimental time series.
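As a rough illustration of the dwell-time trend test described above, the sketch below fits a line to log(residence time) versus current; an exponential decrease then appears as a straight line. The data values are made up for illustration and are not the measured dataset.

```python
# Hypothetical dwell-time data; an exponential trend is linear in log scale.
import numpy as np

current = np.array([6.5, 7.0, 7.5, 8.0, 8.5])          # mA (illustrative)
dwell = np.array([8e-4, 3e-4, 9e-5, 4e-5, 1.4e-5])     # s  (illustrative)

slope, intercept = np.polyfit(current, np.log(dwell), 1)
print(f"dwell ~ exp({slope:.2f} * I): halves every {np.log(2) / -slope:.2f} mA")
```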
The LLE characterizes how fast two nearby trajectories diverge in the system phase space; from a theoretical point of view, chaotic systems exhibit a finite positive (non-zero) LLE, a purely stochastic process has an infinite LLE, and a stable process has a negative LLE. In the figure below, we give the evolution of the estimated LLE with increasing injection current. Although very low LLEs are obtained at low current values, we observe a clear increase at higher injection currents. As discussed in previous work [5], the complexity of the dynamics mostly arises from the jumps between the two scrolls of the chaotic attractor. Thus, we observe a clear correlation between the estimated LLE value and the average residence time, as discussed in the next section of the supplementary information. Overall, the use of Wolf's algorithm clearly yields a finite non-zero value of the largest Lyapunov exponent, coherent with the chaotic interpretation of the dynamics. 2-Hidden Markov Processes Another statistics-based approach to discriminate deterministic mode-hopping from stochastic mode-hopping consists in modelling the hopping dynamics as a Hidden Markov Process (HMP). Whereas a deterministic dynamics can be modelled without hidden processes, the modelling of a stochastic process does require the inclusion of hidden processes [3,6]. In practice, we use the Baum-Welch algorithm to estimate the corresponding 2×2 transition and emission matrices of the model, identified as A and B respectively. In our case, we focus on the anti-diagonal terms of matrix B: values close to 0 indicate that no hidden processes appear, while non-zero terms indicate otherwise. When using this approach on the recorded time series, the anti-diagonal terms of matrix B appear to be close to 0: typically well below 10⁻⁴ and always below 10⁻². These results therefore indicate that no hidden processes are required to accurately model the mode-hopping dynamics as a two-level Markov process, unlike what would be expected for a noise-induced dynamics. 3-Grassberger Procaccia Algorithm The Grassberger-Procaccia (GP) algorithm is typically used to estimate the so-called K2 or Kolmogorov entropy from time-series data; if the value of the K2-entropy converges, the algorithm also provides an estimate of the correlation dimension of the chaos investigated [7,8]. We use the same approach and notations as described in [3], including in particular the re-embedding procedure introduced in [9], before processing the experimental data with the GP algorithm. As can be seen in Figure 4, we obtain a result that is very similar to [3]: the K2-entropy converges along with the correlation dimension D2. Based on these results, we obtain a K2-entropy of about K2 = 5.2 × 10⁻³ ns⁻¹ with a corresponding correlation dimension D2 ≈ 2.04. As already briefly discussed in the main text, the correlation dimension is close to the one reported in [3] and the Kolmogorov entropy is strictly positive, which confirms the chaotic nature of the dynamics. However, the value of the K2-entropy is three orders of magnitude smaller than the one reported previously; this can be easily explained by the different time scales of the dynamics. In [3] the average dwell time is of the order of the nanosecond, while in this report we only reach the microsecond scale; we therefore use a different sampling rate for the two cases.
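For reference, a generic correlation-sum implementation in the spirit of the GP algorithm is sketched below; local slopes of log C(r) versus log r approximate the correlation dimension D2. The embedding parameters, the random subsampling of pairs and the placeholder signal are assumptions, and the paper's specific re-embedding procedure [9] is not reproduced.

```python
# Generic Grassberger-Procaccia correlation sum on a delay embedding.
import numpy as np

def correlation_sum(x, m, tau, radii, max_pairs=200_000):
    """Embed x in m dimensions with delay tau and count close pairs."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    rng = np.random.default_rng(0)
    i = rng.integers(0, n, max_pairs)       # random pair subsampling
    j = rng.integers(0, n, max_pairs)
    keep = i != j
    d = np.linalg.norm(emb[i[keep]] - emb[j[keep]], axis=1)
    return np.array([np.mean(d < r) for r in radii])

x = np.sin(np.linspace(0, 200, 5000))        # placeholder signal
radii = np.logspace(-2, 0, 12)
C = correlation_sum(x, m=4, tau=8, radii=radii)
D2 = np.gradient(np.log(C + 1e-12), np.log(radii))  # local slopes ~ D2
```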
The delay constant τ used in the computation of the K2-entropy is therefore three orders of magnitude larger [3,7], hence leading to a significantly smaller value for the slower dynamics. D. Polarization and frequency-resolved optical spectra In this section, we provide additional details on the emergence of higher-order modes and their polarization. In Figure 5, we show the frequency-resolved LI curves for four different projections, at 0, 90, 45 and -45°, respectively. We use the same convention as described in the text. As already mentioned, the second switching, which appears around 6.2 mA, is clearly a switching of the fundamental mode: while we see a large exchange between the polarizations at 0 and 90° for the fundamental mode, the second-order mode sees only very little change. In addition, we observe that the projections at 45 and -45° for the fundamental mode are almost identical, which confirms that the fundamental mode is almost linearly polarized until the second switching, i.e., below 6.2 mA. Nevertheless, looking closer around the switching point, we see a short transition through a slightly elliptical polarization, as shown in the inset of panel (b), in which we observe a small increase of the polarization at 90° just before the switching point. E. List of features confirming the chaotic dynamics In summary, the dynamics observed from off-the-shelf VCSELs subjected to mechanical strain shows the following features: 1. The dynamics follows a polarization switching event of type II; we observed a double PS event, which is typically a type I switching followed by a type II. 2. An abrupt frequency shift and a second PS event appear simultaneously. This frequency shift is similar to the bistability limit cycle observed in chaotic QD-VCSELs [10]. 3. The dynamics appears as random-like hopping between two polarization modes, and the average residence time decreases exponentially with increasing current. 4. Using Wolf's algorithm, we obtain a positive largest Lyapunov exponent for the dynamics. 5. Modelling the dynamics as a Markov process confirms that no hidden processes are required to obtain an accurate model. 6. The Grassberger-Procaccia algorithm converges to a non-zero value of the K2-entropy (5.2 × 10⁻³ ns⁻¹), with a corresponding correlation dimension D2 equal to 2.01. 7. Frequency-resolved measurements confirm that the reported results are mostly due to the evolution of the fundamental mode despite the emergence of a second-order mode. Using frequency-resolved measurements, we observe a short transition through elliptically polarized states consistent with the route to polarization chaos. All these features are in excellent agreement with theoretical models and previous observations of polarization chaos in quantum-dot VCSELs, hence allowing us to conclude that the observed dynamics is indeed polarization chaos.
2,171.4
2017-10-25T00:00:00.000
[ "Engineering", "Physics" ]
Secured Perimeter with Electromagnetic Detection and Tracking with Drone Embedded and Static Cameras Perimeter detection systems detect intruders penetrating protected areas, but modern solutions require the combination of smart detectors, information networks and controlling software to reduce false alarms and extend the detection range. The current solutions available to secure a perimeter (infrared and motion sensors, fiber optics, cameras, radar, among others) have several problems, such as sensitivity to weather conditions or high false alarm rates that force the need for human supervision. The system presented in this paper overcomes these problems by combining a perimeter security system based on CEMF (control of electromagnetic fields) sensing technology with a set of video cameras that remain powered off except when an event has been detected, and with an autonomous drone that is informed of where the event was initially detected and then flies, guided by computer vision, to follow the intruder for as long as they remain within the perimeter. This paper gives a detailed view of how all three components cooperate in harmony to protect a perimeter effectively, without having to worry about false alarms, blinding due to weather conditions, clearance areas, or privacy issues. The system also provides extra information about where the intruder is or has been, at all times, no matter whether they have mixed with more people during the attack. Introduction In recent years, intrusion detection sensing systems have expanded significantly to meet the growing demand for improved security. Increasing investments in infrastructure development have widened the scope for the growth of the perimeter security market. Between 2014 and 2025, around 78 trillion USD is projected to be spent on infrastructure development worldwide [1]. Reducing the vulnerabilities of critical infrastructure and increasing its resilience is one of the primary objectives of the European Union (EU). The European Programme for Critical Infrastructure Protection (EPCIP) sets the overall framework for activities to improve the protection of critical infrastructures in Europe, across all EU States and in all relevant sectors of economic activity [2]. With increasing security threats in Europe and worldwide, organizations managing critical infrastructures such as hydroelectric, solar, nuclear, or thermal power plants and oil refineries or wastewater treatment plants have a growing need for enhanced perimeter protection systems with reduced false alarm rates and preventive detection capabilities. Modern integrated surveillance and security management systems require the implementation of smart detectors, information networks and controlling software. While only "real" alarms, representing threats, should be detected by the sensors in an ideal case, the reality is that the devices generate many unnecessary warnings. These events can be classified as false alarms or as nuisance alarms. False alarms are produced by events that should not trigger an alarm. Nuisance alarms, on the other hand, are generated by a legitimate cause, but one that does not represent a real threat. In electronic security systems for critical facilities and infrastructure protection, the sensors most frequently used by intrusion detection subsystems are passive and active infrared, accelerometer, microwave, ultrasonic and optical fiber (FBG) sensors, as well as perimeter protection based on buried sensors [3][4][5][6][7][8].
Infrared and motion sensors suffer severe range reduction in rain and fog; they are limited to direct line-of-sight detection and can be triggered by any kind of moving object. Fiber-optic and microphonic cable detection systems suffer from elevated false alarm rates. Ground radar systems, microwave barriers and buried systems require a large clearance area, free of vegetation and obstacles, to operate properly. Camera-based systems usually require human supervision, and they are often in conflict with the GDPR. Electrostatic field disturbance sensors have a nearly zero false alarm rate but cannot track the intruder once inside the property. Therefore, the efficacy of most of these systems is limited by high false alarm rates (>40% false alarms, on average) and by the detection range. Regarding electrostatic field sensors, capacitive sensing is a known approach based on the disturbance of the electric field created between a driving electrode and a sensing electrode [9,10]. Capacitive sensing has gained increasing importance in the last decades and is successfully employed in various applications in industrial and automotive technologies because it offers low cost, design simplicity, high sensitivity, no direct contact with the measured object, and frequency stability [11]. Several works have been reported on this detection principle [12][13][14][15][16]. Seat occupancy detection for air-bag control in vehicles based on electrostatic field sensing was presented in [12], providing information about the presence, type and position of an occupant using eleven transmitting electrodes and a common receiving electrode. In [14], a capacitive proximity sensor based on the transmit mode is presented for human-computer interaction applications. A combination of inductive and capacitive sensing modes can help to distinguish different objects and obtain distance information [15], since inductive sensors are a recognized technology for sensing metallic objects, yet their sensitivity to materials that possess low conductivity or are nonmagnetic, such as the human body, is much lower [16]. The integration of multiple intrusion-detection technologies into a given sensing solution provides better robustness and reliability when compared with approaches based on a single technology. However, attempting to reduce false alarms can be detrimental to the detection probability. On the other hand, camera systems tend to be the preferred solution for the security systems of critical facilities. They present high costs in both operation and maintenance, while having a low probability of preventing threats. This low probability stems from the dependence of threat identification on the operator's recognition of the camera images presented on various screens, and from the cabling needs of the devices, which can restrict the areas that can be monitored. Some solutions propose a multiple-sensor platform with video analysis to detect and classify mobile objects [17][18][19][20]. These technologies are not power-efficient in operation because sensors and cameras must always be operational to be prepared for a possible intrusion (resulting in a 60-80% energy inefficiency).
The proposed approach aims to achieve a scalable and cost-effective security solution that can be deployed on large perimeters of critical infrastructure with reduced operational costs, since it reduces the amount of video monitoring data to be processed and decreases workforce costs thanks to automatic intruder tracking. The CEMF technology comes into play with an extended detection range (20 m/sensor) and reduced false alarms (<10%), while being able to work on all terrains independently of weather conditions. This technology can differentiate between different types of events, objects, animals, or persons. This ability enables a preventive feature that allows it to generate an alarm before an intrusion occurs. Finally, as cameras remain inoperative except when an event is detected in their field of view, the system decreases power consumption by reducing the use of video surveillance by around 95.8% in comparison to other systems (none of which can deactivate the video surveillance), as well as the amount of data that has to be saved in data centers. Across this study, a detailed view is given of how the disruptive CEMF technology works, and of how an ensemble that joins traditional algorithms, a random forest and a neural network can provide a high level of discrimination between events. This achievement allows a sophisticated video tracking system to be triggered on any dangerous alarm. This complementary video tracking system combines fixed cameras and a drone-embarked camera to provide a continuous stream of video showing where the intruder is at all times, even when they have moved away from the fixed cameras. This paper also describes the different techniques used to analyze the video, and how the algorithms used for the fixed cameras differ from the ones used for the drone camera. The combination of the novel discriminative CEMF perimeter technology and a highly innovative camera system results in an unprecedented security system. System Overview The proposed system integrates three main parts: a high-performance perimeter sensing network based on CEMF, capable of detecting and differentiating preventive and intrusive events; a fixed camera network installed along the perimeter that remains off at all times except when the sensing network detects an event, at which point only the camera(s) covering that location turn on to send video data to the server; and an unmanned aerial vehicle (UAV) that reacts to the detection of an event by flying to the intruder's location and follows the intruder during the attack, providing video data even when the intruder has walked away from the perimeter. The rest of the necessary parts complete the system: a local server in charge of processing the video streams to perform the intruder tracking and of requesting cameras to switch on/off so as to keep the maximum number of cameras powered off while ensuring video coverage of the intruder at all times; a cloud server to store a large dataset from which to supply information to a machine learning algorithm; and a monitoring station to communicate events to the operator and provide video data from the active cameras. In Figure 1, the six main elements of the proposed solution are represented. Although only three CEMF units are drawn, there can be as many units as required to cover the entire perimeter in a real installation. The electrodes are arranged at two different heights to cover the entire perimeter.
The top level expands its electromagnetic field over the fence, while the low level covers the lower part. The active detection zone extends 1.5 m away from both electrode levels. The units are separated by 20 m from each other (10 m per electrode), and a camera can optionally be installed with each perimeter unit. The cameras and the UAV are activated with the location of the CEMF sensor triggered by the intruder. Both systems provide video streaming of the intruder to the local server and the monitoring station, which can be located at any place where an intranet connection is available. In contrast, a connection to the internet is required to connect to the remote cloud server.
CEMF Secured Perimeter The basics of the CEMF sensing technology are the simultaneous generation, control, and monitoring of an electromagnetic field. This characteristic has been developed into a highly innovative technology capable of sensing tiny changes within its electromagnetic field lines. This technology can be easily adapted to work in industrial security, explicitly targeting critical infrastructures and enabling the surveillance of large perimeters. The principle of operation relies on the fact that the human body exhibits its own electrical characteristics, like any other existing object. Electromagnetic charge migrates from one object to another when they interact, causing changes in the field. Thus, the CEMF technology continuously measures these differences. It can distinguish and classify different types of events and discriminate between the presence of humans, animals or objects by analyzing the profile generated by the event. This property enables the system to detect intruders while ignoring animals or other inoffensive events. In the same way, the system can distinguish whether a human is standing next to the fence or is climbing it, creating a truly preventive and intrusive perimeter sensor. The proposed solution has specific characteristics that make this device a unique one: (a) Invisibility: the generated electromagnetic field crosses objects and walls (the frequency of operation is in the range of kHz), allowing the sensor to be hidden behind or even inside a wall, which is not possible with conventional IR or microwave sensors. (b) Long range: the sensor can detect an object up to 10-20 m along the length of a wall and 1.5 m beyond the breadth of it. (c) Discriminative: the technology can detect the perturbation corresponding to a person's disruption of the electromagnetic field, as distinct from an object's or even an animal's perturbation. As a result, the sensor can distinguish between a person and any other type of intruder, thus reducing false alarms from 40% (other technologies) to 10%. (d) Preventive detection: threats are detected even before any harmful actions occur, responding with different alarm levels. The grade of the threat can be measured by distinguishing between loitering, touching, or climbing the fence, without the need for cameras. (e) Tracking and identifying the intruders: as an intruder moves along the wall, the units track their route along the perimeter. (f) Personal data protection law-friendly: the cameras can remain switched off most of the time and switch back on in the event of an alarm. Architecture The hardware was developed according to requirements of size efficiency, low cost, and low power usage. Power consumption is critical because the intrusion system regulation UNE 50131 requires devices to have a battery life of at least 30 h.
Lowering the power consumption reduces the required battery capacity, which results in a smaller device size and a lower cost. Figure 2 shows the hardware overview diagram of the system. The system board has four independent CEMF channels, and it performs field generation, data acquisition, signal conditioning, algorithm execution, communications (both wired and wireless), data storage and power management. A 32-bit PIC32MZ2048 microprocessor from Microchip governs the system, capable of running at up to 200 MHz and with 120 I/O pins (48 analog), UART, SPI, I2C and USB, as well as an embedded real-time clock (RTC). The Microchip PIC32 was chosen in order to reuse the know-how, tooling and a set of stable libraries already developed and validated. Then, within the PIC32 family, a large memory (flash and RAM) and a DSP core were required, as neural networks had to be executed. A large number of GPIOs were also required to control all the devices and acquire four CEMF channels, as well as an Ethernet MAC to provide connectivity. At the time of the architectural design, once all the microcontroller requirements were applied, the PIC32MZ2048 was the clear choice, taking into account the lead time, compatible parts and pricing. The communications have been designed in a redundant way to avoid losing connectivity, and thus the mainboard incorporates the following interfaces: Ethernet, Wi-Fi, mobile data (3G/4G), USB and Bluetooth. The first three provide connectivity to the local/cloud server; Ethernet is the preferred method, followed by Wi-Fi and 3G/4G. The other two interfaces are designed to provide in situ maintenance, so that a technician can connect and configure/debug the unit during installation or failure analysis. The board also has an SD card that can be used for data recording and firmware upgrades on isolated units. A group of four relays with latch functions is used to communicate with legacy security systems. The user can configure their behavior in terms of NO/NC output and the concept mapped to each relay. The system is supplied with AC voltage converted into 5 V DC by a flyback power supply. Several DC/DC converters have been used to provide the 3.3 V required by most digital circuits, 18 V for the operation of the CEMF sensors, 1.2 V for the Ethernet switch and 3.8 V for the 3G communications.
The power management module also incorporates a battery charger able to handle a large LiPo battery of 10 Ah and 3.7 V, sized to provide enough energy to continue operating for up to 30 h. This battery ensures that the unit never loses connection and can send alerts even if the power grid is disconnected in a power outage. Finally, the device includes several tamper sensors to detect manipulation attempts: an accelerometer to detect position changes, a standard switch to detect the enclosure aperture, and a Hall sensor to detect detachment from its mounting plate with an integrated magnet. Figure 3 shows the CEMF sensor board, where the main blocks have been highlighted to provide a clear understanding of block size and complexity. Figure 4 shows the same board installed inside its enclosure (enclosure open; see a part of the lid on the left of the picture). Algorithms The performance of the CEMF technology relies on both hardware and algorithms.
The algorithms are responsible for setting the resolution, sample rate, power consumption, raw signal noise, compensation, filtering, feature extraction, event classification, and application profiling to fulfil the application policies, such as the enable, disable, arm or guard times. Figure 5 shows how the data flows across the different blocks of the algorithm to generate the algorithm output; this section covers each of these blocks one by one. The most critical CEMF configuration settings are resolution and sample rate: these define the sensitivity, noise and power consumption of the system. Other parameters, such as start and stop times, can be adjusted to fine-tune the result. A thorough characterization has been performed to ensure that the values of these parameters fit the application requirements. This characterization considers a large number of different approaches of a calibration object to the electrode. With the data collected throughout the experiments, a statistical study is conducted to understand the performance of the raw CEMF technology. These steps have been repeated several times until the best performance has been found. Figure 6 shows the CEMF signal along one of these iterations: the top graph represents the raw sensitivity in CEMF units vs. distance, the second graph the SNR sensitivity vs. distance, and the last one the noise vs. distance in CEMF units. As shown on the diagram, the CEMF data starts its journey through the Compensation block, which uses temperature and humidity information plus some of the CEMF data to compensate the CEMF readings. This process guarantees that data from any sensor look approximately the same regardless of whether it is installed in a rainy, cold or dry location.
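The exact compensation law is not disclosed; purely as an illustration of the idea, the sketch below regresses the slow CEMF baseline on temperature and humidity with least squares and subtracts the predicted drift. The linear model and the variable names are assumptions.

```python
# Hypothetical environmental compensation: remove the drift explained by
# temperature and humidity, keeping only event-related residuals.
import numpy as np

def compensate(cemf, temp, hum):
    """Remove the environmentally driven baseline from a CEMF channel."""
    X = np.column_stack([np.ones_like(temp), temp, hum])
    coef, *_ = np.linalg.lstsq(X, cemf, rcond=None)  # fit drift model
    return cemf - X @ coef                           # residual = events only
```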
The same applies to fast and irregular changes due to direct sun and shadow transitions, rain, or any other environmental change that could affect the sensor.

An exponentially weighted moving average (EWMA) low-pass filter was chosen to eliminate undesired noise and focus on the recent past, as shown in Figure 7. The key idea of this filter is that it assigns a weight to each acquired sample, and the weight decreases exponentially for older samples. This feature not only gives greater importance to recent events, but also removes the need to store an unbounded history: at some point the weights become so small that the corresponding samples can simply be ignored, so the EWMA can be calculated with a fixed buffer size.

The EWMA can be derived from the simple moving average (SMA). The SMA can be expressed as shown in (1), where µ_i symbolizes the moving average at point x_i; the variance of the SMA is shown in (2). To obtain a weighted moving average (WMA) from the SMA, weights (w) can be assigned to the previous points used to obtain the mean. Equation (3) shows the formulation of the WMA, where w_i expresses the weight at point x_i; the variance of the WMA is shown in (4). If the weights are defined as in (5), we obtain the EWMA, where the weights decrease exponentially for previous values. The EWMA is presented in (6), and its variance is shown in (7). As can be seen, the weights are governed by α, which defines the depth of the filter. The value of α is a hyperparameter of the implementation that needs to be adjusted. As a rule of thumb, the value of α that gives a high weight to the n most recent samples can be obtained using (8).
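The equation bodies themselves were lost in extraction; the standard textbook forms consistent with the surrounding description are reproduced below as a reconstruction, numbered as in the text:

```latex
\mu_i = \frac{1}{n}\sum_{k=0}^{n-1} x_{i-k} \quad (1) \qquad
\operatorname{Var}(\mu_i) = \frac{\sigma^2}{n} \quad (2)

\mu_i = \frac{\sum_{k=0}^{n-1} w_k\, x_{i-k}}{\sum_{k=0}^{n-1} w_k} \quad (3) \qquad
\operatorname{Var}(\mu_i) = \sigma^2\,\frac{\sum_k w_k^2}{\left(\sum_k w_k\right)^2} \quad (4)

w_k = \alpha\,(1-\alpha)^{k} \quad (5) \qquad
\mu_i = \alpha\, x_i + (1-\alpha)\,\mu_{i-1} \quad (6)

\operatorname{Var}(\mu_i) = \frac{\alpha}{2-\alpha}\,\sigma^2 \quad (7) \qquad
\alpha \approx \frac{2}{n+1} \quad (8)
```

In code, the recursive form (6) needs only the previous output, which makes the fixed-memory property explicit; a minimal sketch:

```python
import numpy as np

def ewma(samples, n_effective=20):
    """EWMA low-pass filter; alpha follows the 2/(n+1) rule of thumb (8)."""
    alpha = 2.0 / (n_effective + 1)
    out = np.empty(len(samples))
    mu = samples[0]                          # seed with the first sample
    for i, x in enumerate(samples):
        mu = alpha * x + (1.0 - alpha) * mu  # recursive form of Equation (6)
        out[i] = mu
    return out
```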
Figure 8 demonstrates the performance of the filter implementation with actual data captured and filtered on the device. Green represents the raw data at the input of the filter, while yellow represents the filtered data delivered at its output. Observe how the high-frequency changes present in the input signal are absent from the output.

The following stage extracts features from all CEMF channels and their relationships. These features help the classifier to separate events that may appear similar when looking only at the CEMF signals directly:
• Derivative signal for all channels (differentiation).
Figure 9 shows an example of the skewness analysis. Observe how this analysis helps to separate classes 1 and 2 from classes 3 and 4. Other features help separate different classes, providing different points of view that help the classifier increase its performance.
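Only the derivative and the skewness are identifiable from the text; a minimal sketch of those two per-channel features (any further features would be guesswork):

```python
import numpy as np
from scipy.stats import skew

def extract_features(window):
    """Compute features over one CEMF channel window (1-D numpy array)."""
    return {
        "derivative": np.gradient(window),  # differentiated signal
        "skewness": skew(window),           # asymmetry, separates event classes
    }
```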
All these features are continuously extracted and fed into the classifier for more accurate detection. The classifier has been implemented as an ensemble of three different techniques, all of which contribute to increasing the number of true positives and reducing the number of false negatives. The techniques used are a random forest, an artificial neural network (ANN) and a traditional algorithm based on thresholds.

A multiclass random forest has been implemented to distinguish between four classes: idle, prowling, intrusion and cars. The number of trees in the model has been optimized to reduce the memory footprint as much as possible, using the out-of-bag (OOB) classification error of the model as a function of the number of trees. As shown in Figure 10, the OOB error remains constant from 100 trees onwards; the right graph of Figure 10 shows a representation of the final separation made by the random forest.

The second classifier is an ANN. The input and output layers are fixed by the input parameters and output values, respectively. As the output can take only four values (idle, prowling, intrusion and car), four output neurons were used. The input layer takes the compensated and filtered signal plus the six features mentioned above for each of the four channels. The signals are passed through a max-pooling process in order to downsample the vectors to four values each; therefore, the input layer has 56 inputs (plus bias). As the input and output are fixed, the number of hidden neurons is the main hyperparameter to optimize. It was important to consider that the complexity of the algorithm grows rapidly with the number of neurons and layers, although its accuracy also increases with them. In this case, the best tradeoff obtained was a single dense hidden layer with seven neurons, small enough for real-time execution on the PIC32, which was the hardest restriction for this block. The outputs are softmax nodes, as many works have demonstrated that softmax offers the best performance for categorical classification, where only one output neuron has to be active for each input pattern. The rest of the neurons of the network use the tanh function. Figure 11 represents the diagram of the described ANN.
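A minimal sketch of the described topology (56 inputs, one dense hidden layer of seven tanh neurons, four softmax outputs); the optimizer and loss are assumptions, since the paper does not state the training setup:

```python
import tensorflow as tf

# Topology as described: 56 inputs, 7 tanh hidden neurons, 4 softmax outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="tanh", input_shape=(56,)),
    tf.keras.layers.Dense(4, activation="softmax"),  # idle/prowl/intrusion/car
])
# Training configuration below is illustrative, not taken from the paper.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```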
The last part of the ensemble is the traditional algorithm, coded as a set of thresholds and hysteresis bands distributed over the different CEMF channels and the extracted features. These thresholds, hysteresis bands and other parameters are dynamically enabled, disabled, or adjusted through a set of relationships between them and the input signals. The result is an adaptive algorithm capable of delivering good performance with low computing power; the core mechanism is sketched below.
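The sketch shows the basic threshold-with-hysteresis idea on a single channel; the dynamic cross-channel adjustment rules described above are omitted, and the threshold values are illustrative:

```python
def hysteresis_detector(signal, on_threshold, off_threshold):
    """Flag samples on one channel using a threshold pair (hysteresis).

    Once the signal crosses on_threshold the detector stays active until the
    signal drops below off_threshold, suppressing chatter around one level.
    """
    active, flags = False, []
    for x in signal:
        if not active and x >= on_threshold:
            active = True
        elif active and x < off_threshold:
            active = False
        flags.append(active)
    return flags
```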
The combination of the three techniques provides a very accurate classification of events. Table 1 shows how the algorithm performed on the test dataset. The dataset was built using some of the installed units, considered training units, which work together with continuous video recording of the perimeter they protect. Both CEMF and video data are continuously pushed to a private database. On the server side, a very simple algorithm analyzes the CEMF stream looking for any kind of activity in the signal and labels the events as unclassified. Later on, a custom application presents the CEMF pattern and the video in a clean UI used to manually review and classify all unclassified events. Once an event is classified, a metadata record is stored in a MongoDB database where the training and validation datasets are held.

Accuracy on intrusion and car detection is 100%, while the performance between idle and prowling is slightly lower. In particular, one idle test, corresponding to a rainy and windy capture, was misclassified as prowling, and 7.3% of the prowling events were classified as idle. This has been balanced by design, as the priority was to reduce the false alarm rate to the minimum.

Architecture

The goal of the camera network system is to monitor a configured area in cooperation with the CEMF sensor network. It makes use of computer vision algorithms to detect intruders and track them. All the components are connected through a local network with static IP addresses using Ethernet cables; different communication protocols are used on this network depending on what has to be transmitted. The camera network system is composed of:
• Cameras: IP cameras were selected since they are a very widespread surveillance solution (almost a standard) and for the characteristics they offer. Specifically, the chosen camera is the HIKVISION Network Cam Model DS-2CD2055FWD. Among its features, it is important to highlight its capacity to change resolution and the wide range of supported protocols (HTTP, RTSP, etc.). It is waterproof, making it compatible with an outdoor system, and it has a graphical web interface via HTTP where all the parameters can be set. For this work, a resolution of 640p was set, and the frame rate was set to 12 fps to reduce the network load and the required processing. Regarding the installation position, it is advisable to place the cameras at a minimum height of 3 m and point them slightly downwards to help the computer vision algorithm perform its task (frame ingestion is sketched after this list).
• Data processing unit: the data processing unit is where all the information is processed. Due to the high computing needs of the computer vision algorithms, an Intel NUC was used. Naturally, a single Intel NUC does not have enough computing power to process the video streams of many cameras simultaneously, so the system is scalable with extra processing units to distribute the load. Currently, each Intel NUC can simultaneously handle the load of five cameras.
• External monitoring unit: basically, a PC connected to the network to receive the output video broadcast by the data processing units. This device must be configured in the same IP domain range and have some generic software (such as a web browser) that can reproduce HTTP streams.

The general system scheme explained above is shown in Figure 12. As can be seen, every device is connected with the others through the Ethernet fabric. The dashed lines between IP cameras, processing units and subsystems represent the possibility of introducing new devices of the same type. The dashed line connecting the Ethernet fabric with the monitoring unit indicates that the system can operate without it, making it an optional complement that does not impact the system if, for any reason, it fails.
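As a reference for the camera bullet above, frames from an RTSP-capable IP camera can be pulled with OpenCV; the URL, credentials and stream path below are placeholders, not the actual site configuration:

```python
import cv2

# Placeholder RTSP URL; real cameras embed credentials and a vendor path.
cap = cv2.VideoCapture("rtsp://user:password@192.168.1.64:554/stream1")

while cap.isOpened():
    ok, frame = cap.read()   # one BGR frame per iteration
    if not ok:
        break                # connection lost: the caller should reconnect
    # ... hand the frame to the processing pipeline ...
cap.release()
```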
Algorithms

As mentioned in Section 4.1, the processing units are in charge of managing the entire video stream and the alarms generated by the CEMF perimeter, tracking the intruder, and extracting the video stream of the camera with the best view of the intruder. Each processing unit uses Ubuntu 16.04 LTS as the operating system. The development was carried out using the C++ and Python programming languages. In addition, specific external libraries and frameworks were used to facilitate the development and tuning of the system; among them, the most important are OpenCV and ROS (Robot Operating System).

The software system was developed with a multiprocessing concurrent pattern in which each process has a specific function. This pattern helps optimize the processor load and improves overall performance. Each process is known as a node and uses the ROS framework to deploy and communicate with the others. Figure 13 shows a diagram with the primary nodes of the system. Only one System Manager runs in the main data processing unit, while the Camera Subsystem is replicated once per camera and can be instantiated in any data processing unit. The function of each node is as follows:

a. IP Stream Converter. This node is responsible for establishing a connection with its associated IP camera. The camera sends the frames to this node through the RTSP protocol, and the node converts them to a format suitable for processing by the rest of the nodes. If the connection between the camera and the node is interrupted or cannot be established, the process keeps trying to establish the connection.

b. Image Detection. It receives frames from the IP Stream Converter and processes them to extract the detections. This process, the one with the heaviest load, is performed by a pipeline with the algorithm stages shown in Figure 14.

Preprocessing. The first step is the preprocessing stage. It uses an edge-preserving smoothing technique to reduce resolution, accelerate the image processing, and clean up the image so that pixel noise does not impact the next pipeline stage (Figure 15).
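The exact smoothing technique is not named in the text; a plausible sketch uses OpenCV's bilateral filter (an edge-preserving smoother) followed by downscaling, with illustrative parameter values:

```python
import cv2

def preprocess(frame, scale=0.5):
    """Edge-preserving smoothing plus downscaling before background subtraction."""
    smoothed = cv2.bilateralFilter(frame, 5, 50, 50)  # d, sigmaColor, sigmaSpace
    return cv2.resize(smoothed, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)
```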
Background subtraction. After preprocessing, a foreground object detection technique is used to remove the static parts of the image. The technique used is MOG (mixture of Gaussians) [21,22]. This algorithm applies background subtraction to classify pixels as shadow, background or foreground (Figure 16), creating a Gaussian model for each of them. As it considers small object movements part of the model, a longer training time is required; this tolerance to small changes makes it more reliable in a range of realistic situations, such as changes in light or the movement of tree branches.

Postprocessing. The result of the background removal step is far from ideal, so different morphological operations can be performed on the image, such as dilation and erosion. The dilation operation looks for white areas in the image and expands their edges to group regions of interest; the erosion performs the inverse operation, removing the small regions that do not provide relevant information (Figure 17).
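A minimal sketch of this stage with OpenCV, where the readily available MOG2 variant stands in for the MOG of [21,22]; the kernel size and iteration counts are illustrative:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def foreground_mask(frame):
    mask = subtractor.apply(frame)                 # per-pixel Gaussian models
    mask = cv2.erode(mask, kernel)                 # drop small noisy regions
    mask = cv2.dilate(mask, kernel, iterations=2)  # merge regions of interest
    return mask
```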
Segmentation. Contour detection [23] is the technique applied to the output of the postprocessing stage to detect and locate where activity is taking place. The black-and-white image produced by the previous stage is processed to find the contours and frame them in identifying rectangles (Figure 18).
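A minimal sketch of the segmentation step (the minimum-area cut-off is an illustrative parameter):

```python
import cv2

def detect_regions(mask, min_area=200):
    """Find contours on the binary foreground mask and return bounding boxes."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```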
c. Tracker. This process is in charge of processing the events generated by the Image Detection node and establishing relationships between them. It makes decisions based on characteristics such as the lifetime, size and position of the detections, or their speed. When a detection vanishes (e.g., an intruder walking behind a column), a Kalman filter is used to estimate the current position of the intruder and maintain the track until Image Detection can locate the intruder again. Another important task is the activation of nearby cameras when an intruder leaves the field of view of an active camera towards that of an inactive one.

d. Data Synchronizer. This node merges the images and detections, creating an output video stream with the processed detection information. This video is sent to the HTTP Video Converter.

System Manager

a. Comms. Communicates with the CEMF sensors. It creates a virtual map that is kept updated with each message; this information is then served to the Manager Handler.

b. Manager Handler. This is the first node to boot. It reads the configuration file, which includes the IP addresses and ports of the cameras and processing units and the information about how everything is related. With that data it starts the Comms node and all the required data processing units, and boots the camera subsystems.

c. HTTP Video Converter. This node converts the output video and runs an HTTP server where connections can be made and the video can be viewed in a standard web browser.

Autonomous Drone System

The autonomous drone system aims to provide a video stream of the intruder when the ground security cameras are out of reach. Figure 19 shows the diagram of the high-level state machine in charge of executing the missions. The system remains in the idle state just after initial configuration and safety checks. When any CEMF perimeter sensor detects an alarm, the drone automatically takes off. The ground system communicates the intruder geolocation to the drone, which initially knows the location of the CEMF sensor that detected the event. While the drone is flying to this location, the ground camera system keeps track of the intruder and updates its location (see Section 4 for details). Once the drone reaches the intruder location, it keeps track of the intruder. Once the mission is complete, the drone returns to the take-off position and lands. The ground system is always updated with the geolocation of the drone itself and, when available, the intruder's location.

Architecture

One critical point in guaranteeing the success of this block was the right selection of the drone, which meant choosing among the leading players; besides DJI, the candidates included 3dRobotics, senseFly, Intel and others. By 2015, DJI already had 50% of the market share, rapidly rising to 74% [24]; by 2017 it had more than 75% of the nonhobbyist drone market [25], as shown in Figure 20. For this reason, DJI was the preferred brand, and among all the commercial drones offered by this maker, only the Matrice family offered direct access to the video stream. The 200 series was the only member of the Matrice family capable of flying in the rain. As custom autonomous navigation and rain support were mandatory, the DJI Matrice 210 was chosen. The DJI M210 is an 887 × 880 × 378 mm drone complemented with a powerful Zenmuse 4XS camera installed on its 3-axis gimbal (Figure 21).
As mentioned before, the autonomous drone system is based on a DJI Matrice 210 and a Zenmuse 4XS camera, but another important component completes the architecture (Figure 22): the embedded computer responsible for processing the video stream and controlling the pilotage. The embedded computer chosen for the project was an NVIDIA Jetson TX2 (BOXER-8110AI). It provides the required IO connectivity and processing power (thanks to NVIDIA CUDA) in a compact size, and it has been equipped with Wi-Fi and 3G/4G interfaces to maintain reliable communication with the ground station.

The Jetson TX2 is assembled on the drone and connected to it through only three cables, as shown in Figure 23:
- Power cable: a standard power cord running from the external power connector provided by the drone to the power supply connector of the computer.
- Video cable: a standard USB cable running from the USB port provided by the drone to one of the USB host ports of the computer.
- Control cable: a USB to UART TTL cable used to control the drone through DJI's SDK; it runs from the TTL port of the drone to any USB host port of the computer.
Algorithm

The embedded computer runs an Ubuntu OS, and its communications are secured with SSH to prevent hackers from manipulating the navigation system or even taking control of the drone. The system also retains the standard remote control supplied by DJI, which allows a change to manual control if required by the security guards at the monitoring stations; a return-to-home option can cancel the flight at any time.

As mentioned, the Jetson TX2 is in charge of running the entire software stack for the drone system, establishing continuous communication with both the drone autopilot and the ground station. Furthermore, other processes related to intruder detection and tracking, mission control and autonomous movement run simultaneously on the onboard computer. To provide a clean and effective implementation of the navigation system, the software was written using a set of programming languages and libraries, and the software blocks deployed on the onboard computer are synthesized in Figure 24:

- DJI SDK. This node works as a communication bridge, deserializing the autopilot messages (flight data, images, drone status) and converting them into a format manageable by the rest of the blocks. It also sends the action commands from the onboard system to the autopilot.
- Vision. This block receives all the images and flight data, processes them and detects the intruder. It also reports to the mission manager.
- Mission manager. It implements the high-level state machine previously shown in Figure 19, controlling the operation of the mission at all times and taking the appropriate actions in case a pilot is required to take over and manually control the aircraft (a sketch follows this list).
- Movement control. This node is in charge of the control loop, calculating the action commands that must be sent to the autopilot depending on the state of the mission the system is in at a specific moment.
- Communications. It establishes and maintains communications with the ground station. It processes external alarm signals and activates the entire system when required. In parallel, it encodes the images and flight data and sends them to the ground for monitoring.
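A minimal sketch of the high-level mission state machine of Figure 19; the state names and transition inputs are paraphrased from the text, not taken from the actual codebase:

```python
from enum import Enum, auto

class MissionState(Enum):
    IDLE = auto()           # armed, waiting for a CEMF alarm
    TAKEOFF = auto()
    FLY_TO_TARGET = auto()  # navigate to the (continuously updated) location
    TRACK = auto()          # keep the intruder in view
    RETURN_HOME = auto()
    LAND = auto()

def next_state(state, alarm, at_waypoint, mission_complete, airborne):
    """Transition function mirroring Figure 19 (simplified)."""
    if state is MissionState.IDLE and alarm:
        return MissionState.TAKEOFF
    if state is MissionState.TAKEOFF and airborne:
        return MissionState.FLY_TO_TARGET
    if state is MissionState.FLY_TO_TARGET and at_waypoint:
        return MissionState.TRACK
    if state is MissionState.TRACK and mission_complete:
        return MissionState.RETURN_HOME
    if state is MissionState.RETURN_HOME and at_waypoint:
        return MissionState.LAND
    return state
```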
The computer vision algorithm runs the pipeline shown in Figure 25. The input data are the flight and image data, and the output is images plus location information, both relative to the image and absolute. The processes involved in the algorithm are:

- Preprocessing: it receives the images from the autopilot. These images are far too large to be processed by the onboard computer at the provided frame rate; therefore, the preprocessing stage downscales them to 416 × 416 pixels and delivers approximately one image every 50 ms. It also applies the image correction required, with the parameters obtained from the calibration tool, to remove all image distortion.
- Detection: unlike the video from the static camera network, the drone camera remains in constant motion during the intruder detection and tracking process, so the algorithm based on background subtraction cannot be applied. Given the recent developments in object recognition, which outperform classic approaches [26,27], a deep neural network that analyzes each video frame was chosen. Specifically, the YOLO V3-Tiny [28] architecture was implemented, and the network was retrained from the initial weights provided by the author. To work with this network, we used Darknet, an environment created by the original authors of YOLO; it is implemented in the C language and uses the CUDA library, which exploits graphics card resources to operate quickly and efficiently. YOLO V3-Tiny is made up of a total of 23 layers, 13 of them convolutional. As input, it receives an image of size 416 × 416 × 3, i.e., a color image with three channels. In the output layer, a vector is generated indicating the regions of interest where detected people are located and the probability associated with each one.
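Although the project runs the network natively in Darknet, an equivalent inference pass can be sketched with OpenCV's DNN module; the file names are placeholders for the retrained model:

```python
import cv2

# Placeholder paths to the retrained YOLO V3-Tiny model files.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")

def detect_people(frame, conf_threshold=0.5):
    """Run one forward pass and return (confidence, box) candidates."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    detections = []
    for out in outputs:
        for det in out:                 # det = [cx, cy, w, h, objness, classes]
            score = float(det[5:].max()) * float(det[4])
            if score >= conf_threshold:
                detections.append((score, det[:4]))
    return detections
```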
- Tracking: in this stage, images are synchronized with the flight data so that each frame can be associated with its flight height and position. Once the intruder has been detected and the image pixels where it is located are known, a tracking stage is performed so that the drone continuously keeps the intruder in the center of the acquired image. A proportional controller continuously computes the distance in pixels between the intruder and the image center; the output of the controller is converted into speed commands for the drone, which moves accordingly. A hysteresis cycle was implemented to avoid continuous drone movements when the intruder is very close to the center of the image.
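A minimal sketch of that proportional controller with a hysteresis dead zone; the gain and dead-zone radius are illustrative values:

```python
def tracking_command(target_px, image_size=(416, 416),
                     gain=0.01, dead_zone_px=20):
    """Convert the pixel offset of the intruder into drone speed commands.

    Inside the dead zone no command is issued (the hysteresis that avoids
    continuous small movements); outside, speed is proportional to the error.
    """
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy
    if ex * ex + ey * ey < dead_zone_px ** 2:
        return 0.0, 0.0              # close enough: hold position
    return gain * ex, gain * ey      # proportional speed command
```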
CEMF Results

A set of marauding and intrusion events was carried out to create a dictionary of real data. These data were organized and fed to a virtual environment used to validate each version of the algorithm, which was run on real data exactly as if the event were being acquired live by a real sensor. Thanks to this approach, hundreds of hours of marauding and fence-jumping were saved, and different implementations could be benchmarked under precisely the same circumstances. Figure 26 shows the detail of one of the tests included in the test dictionary. Red shows the raw CEMF signal, light blue the filtered signal, and green the baseline signal, while the other two traces are internal threshold limits used by the traditional algorithm. To flag where real events occurred, a color code of vertical background bands is used: in this image, grey bands indicate noisy fragments and an orange band indicates a preventive event. A JSON file associated with each of the events identifies what type of event should (and should not) be detected at each sample of the associated stream of CEMF data. This is used to launch all tests with a single command, so the verification framework can automatically calculate how accurate each implementation is. Figure 27 shows the representation of an execution run displaying over 250 individual test plots like the one in Figure 26; this image does not aim to provide details of the execution result, but to give a sense of the magnitude of the validation library.

Once the implementation passed all test cases, the system was deployed in the field. At this point, the proposed algorithm has to be executed on a low-power microcontroller with reduced computational capabilities; therefore, the processing tasks have to be highly optimized, especially those executed in real time. In order to validate that the units were operating as expected, a health-monitoring service was programmed so that the data and the algorithm debug information are permanently recorded in an InfluxDB database. An automated process was created to help validate the system: executed every day, it analyzes the events detected the previous day and generates an image with a data slice showing how the CEMF signal evolved around each detected event. Figure 28 shows two captures corresponding to prowling events. Blue and green are the raw and filtered CEMF signals, pink is a preventive threshold of the traditional algorithm, and the yellow rectangle highlights the time during which the device detected the threat. These images are precisely labeled with sensor ID and CEMF channel, and timestamped, which allows a manual cross-verification against the video system to validate the cause of the event. Figure 29 shows the signal of a person jumping over the fence (green) as well as the filtered signal (blue) and a traditional threshold (pink); as a result, an intrusion event was detected and highlighted with a red rectangle in the automatically generated images.

An important part of this field validation was to estimate the performance of the CEMF sensor system. Figure 30 shows the sensitivity and specificity diagram. To this end, two different units and three different days were chosen:
- Day 1: a windy day that caused the signal of these devices to be altered by external agents such as the vegetation in the area. None of these phenomena should be detected by either of the two units; the system should only issue an alarm message if a person is in the perimeter. This day was therefore used to assess the robustness of the system against external agents.
- Day 2: during this day a rain shower fell on the site. The CEMF signal is sensitive to rain, so it is crucial to observe the response of each unit under rain conditions. As with the agents mentioned above, rain is an external agent that should not trigger any alarm.
- Day 3: a test day on which several persons were asked to perform prowling and intrusion events on each of the two units. This day was used to validate the success rate of each system against real events.
In summary, the first two days were used to validate the robustness of the system against external agents that may generate false positives, and the last day to assess whether the system was able to detect all the events that took place and, once detected, classify them correctly. From this analysis, the following statistics and quality indices were obtained:
• True positives (TP): the number of ground-truth ranges in which a certain event was required and was effectively detected.
• True negatives (TN): the number of ground-truth ranges in which a certain event was restricted and was indeed not detected.
• False positives (FP): the number of ground-truth ranges in which a certain event was restricted but was detected anyway.
• False negatives (FN): the number of ground-truth ranges in which a certain event was required but was not detected.
• Sensitivity (Sen): the proportion of ground-truth event ranges correctly classified as detected from among all those that were required.
• Specificity (Spec): the proportion of ground-truth event ranges correctly classified as undetected from among all those that were restricted.
• Accuracy (Acc): the proportion of total hits, without distinguishing between true positives and true negatives.
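These indices translate directly into code; a minimal sketch matching the formulas given below:

```python
def quality_indices(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from the confusion counts."""
    sen = tp / (tp + fn)                   # detected among required events
    spec = tn / (tn + fp)                  # rejected among restricted events
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall proportion of hits
    return sen, spec, acc
```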
The respective formulas for the quality indices described above are:

Sen = TP / (TP + FN) (9)

Spec = TN / (TN + FP) (10)

Acc = (TP + TN) / (TP + TN + FP + FN) (11)

As can be seen in Table 2, the closer these quality indices are to 100%, the more reliable the algorithm, since it discriminates better between the two populations of required and restricted events. The overall performance of the system across the validation days was very good: with sensitivity, specificity and accuracy over 95%, it proved that the system is very robust against false alarms and very accurate for true alarms.

Table 2. Sensitivity and specificity of the CEMF system.

Specificity  96%
Accuracy     97%

Vision Results

In order to work with neural networks, an initial training step is required for the network to learn the features of the objects that need to be detected. To perform the training optimally, this process requires a large number of images that correctly represent the target problem the network must solve. In this project, we counted on a large number of aerial images in which people were present; to use these images for training, they had to carry the associated metadata flagging the pixels where persons appeared in each frame. The training dataset was created using a combination of downloaded public datasets and multiple images of people acquired by our drone at different heights, with a camera tilt of approximately 45° pointing towards the ground. The ratio of public images to our own was approximately 85/15. For the public part, the Okutama Action dataset [29] was used. Approximately 60,000 images were used, 48,000 for training and 12,000 for validation (checking that the network worked as expected), following the 80% training-20% validation criterion. Figure 31 shows some samples of this dataset.
During the network training, several metrics were analyzed at each iteration, such as the intersection over union (IoU), precision, recall and F1-score. By analyzing the validation set at each complete iteration of the network training, the graphs in Figure 32 were obtained. As can be seen, the algorithm behaved quite well on the validation set, evolving positively to stay above 50% in the case of the IoU. The precision, recall and F1-score likewise evolved positively until settling in the range of 0.70-0.76. It should be noted that, when detecting people within the scope of this project, the point of interest is not so much achieving great precision in the area of interest marking a person, but rather keeping the number of false positives and false negatives as low as possible.

Conclusions

A complete perimeter electronic security system has been proposed to identify dangerous intrusions based on disruptive CEMF sensor technology and video information. The surveillance system exhibits long-distance monitoring, high sensitivity, and reduced false alarms, since the CEMF technology has a high detection range and can discriminate between animals, humans, and objects. The technology is developed to accurately detect the motion of people in a protected area in an outdoor environment. It provides intelligent mechanisms for reducing false alarms and improving the security system's effectiveness in detecting and preventing potential intrusions. The system also includes cameras and video content analytics for automatic intruder tracking. Additionally, the advantageous properties of the CEMF sensors reduce false alarms and allow the cameras to be deactivated most of the time. A mobile camera on an autonomous drone system allows better intruder detection and tracking when the threat is out of range of the perimeter units and the static ground cameras.
19,573.4
2021-11-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Dysfunctional cardiac mitochondrial bioenergetic, lipidomic, and signaling in a murine model of Barth syndrome. Barth syndrome is a complex metabolic disorder caused by mutations in the mitochondrial transacylase tafazzin. Recently, an inducible tafazzin shRNA knockdown mouse model was generated to deconvolute the complex bioenergetic phenotype of this disease. To investigate the underlying cause of hemodynamic dysfunction in Barth syndrome, we interrogated the cardiac structural and signaling lipidome of this mouse model as well as its myocardial bioenergetic phenotype. A decrease in the distribution of cardiolipin molecular species and robust increases in monolysocardiolipin and dilysocardiolipin were demonstrated. Additionally, the contents of choline and ethanolamine glycerophospholipid molecular species containing precursors for lipid signaling at the sn-2 position were altered. Lipidomic analyses revealed specific dysregulation of HETEs and prostanoids, as well as oxidized linoleic and docosahexaenoic metabolites. Bioenergetic interrogation uncovered differential substrate utilization as well as decreases in Complex III and V activities. Transgenic expression of cardiolipin synthase or iPLA2γ ablation in tafazzin-deficient mice did not rescue the observed phenotype. These results underscore the complex nature of alterations in cardiolipin metabolism mediated by tafazzin loss of function. Collectively, we identified specific lipidomic, bioenergetic, and signaling alterations in a murine model that parallel those of Barth syndrome, thereby providing novel insights into the pathophysiology of this debilitating disease. Thus, through the utilization of complementary transgenic approaches, penetrating mechanistic insights into the role of tafazzin in regulating mitochondrial lipidomics, signaling, and bioenergetic function have been defined, thereby identifying the complexity of alterations resulting from tafazzin loss of function and the multiple pathologies manifest in Barth syndrome patients.

Materials

Synthetic phospholipids used as internal standards in mass spectrometric analyses were purchased from Avanti Polar Lipids (Alabaster, AL). Solvents for sample preparation and mass spectrometric analysis were purchased from Burdick and Jackson (Muskegon, MI) as well as Sigma Aldrich (St. Louis, MO).

Induction of the doxycycline-inducible Taz KD mouse model of Barth syndrome

Developmental doxycycline induction of the Taz KD mouse model was performed in utero and maintained postnatally as previously described in detail (24). A syngeneic transgenic colony was generated by breeding several generations onto a C57BL/6J mouse background; these mice were used for all studies. Briefly, dams were fed a 625 mg/kg doxycycline diet (Harlan Teklad) for 5 days prior to mating. Upon initiation of mating with a shRNA tafazzin-positive heterozygote male, the diet was removed for 3 days until a confirmation of insemination was obtained, at which time the male was removed from the cage and the doxycycline diet was returned to the breeding cage and maintained until the pups were weaned. The genotype of the mice was confirmed by PCR as previously described (24), and male wild-type littermates and Taz KD mice were maintained on the doxycycline diet until two months of age, at which time lipidomics and biochemical experiments were performed for developmental characterization. Additional experiments were performed utilizing double genetic crossed mice [Taz KD×cardiolipin synthase transgenic (CLS-TG) and Taz KD×iPLA2γ knockout (KO)].
In these experiments, male mice were raised until 2 months of age without doxycycline induction, and at 2 months of age the mice were induced with doxycycline until 4 months of age, at which time the mice were sacrificed and experiments were performed. All wild-type mice were also maintained on a 625 mg/kg doxycycline diet as a control. All animal procedures were performed in accordance with the Guide for the Care and Use of Laboratory Animals and were approved by the Animal Studies Committee at Washington University School of Medicine.

Multidimensional mass spectrometry-based shotgun lipidomic analysis of the cardiac lipidome

Briefly, myocardial tissue was removed, washed with 10× diluted PBS, and freeze clamped in liquid nitrogen for lipidomic analysis. Lipidomic analyses were performed as previously described, using a modified Bligh and Dyer extraction protocol (26-28). Individual lipid extracts were reconstituted with 1:1 (v/v) CHCl3/CH3OH, flushed with nitrogen, and stored at −20°C prior to electrospray ionization-MS using a TSQ Quantum Ultra Plus triple-quadrupole mass spectrometer (Thermo Fisher Scientific, San Jose, CA) equipped with an automated nanospray apparatus (Advion Biosciences Ltd., Ithaca, NY) and a customized sequence subroutine operated under Xcalibur software. Enhanced MDMS-SL analysis of cardiolipins was performed with a mass resolution setting of 0.3 Thomson as described previously in detail (29).

novel therapeutic approaches for treatment of this lethal disease.

Regulation and maintenance of the mitochondrial lipidome is critical for bioenergetic efficiency, cellular signaling, and multiple other mitochondrial processes (e.g., fusion and fission) (10-13). Mitochondria are comprised of a unique double bilayer membrane structure that facilitates the compartmentalization of multiple processes to efficiently integrate mitochondrial function with cellular energy needs (11, 14, 15). A prominent lipid regulator of mitochondrial inner membrane surface charge, molecular dynamics, and membrane curvature is cardiolipin, which contains a unique tetra-acyl structure (12, 15, 16). Cardiolipin is a doubly charged mitochondrial phospholipid comprised of two phosphates, three glycerol groups, and four acyl chains (17-19). Regulation of the content and molecular species composition of cardiolipin is critical for electron transport chain efficiency, adenine nucleotide translocase activities, mitochondrial protein import, and uncoupling, as well as TCA cycle flux (20, 21). The molecular species composition of cardiolipin is dynamically regulated by integrated cellular control of cardiolipin de novo synthesis, phospholipase-mediated deacylation, and membrane remodeling by the subsequent actions of either transacylase or acyltransferase activities that are coordinately regulated to lead to a mature cardiolipin molecular species distribution (22). Additionally, because the remodeling of cardiolipin through transacylation harvests acyl chains from choline and ethanolamine glycerophospholipids, the dynamic balance of cardiolipin remodeling by transacylation versus acyltransferase activity is critical for the maintenance of mitochondrial membrane architecture, surface charge, and molecular dynamics. Thus, the precisely regulated balance of cardiolipin synthesis, remodeling, and degradation exerts tight regulatory control of mitochondrial membrane structure and function.
Herein, we examined the bioenergetic, lipidomic, and signaling mechanisms that were altered in a tafazzin loss-of-function mouse model (23-25) that was predicted to recapitulate the pathology of Barth syndrome in an animal model, thereby facilitating a greater understanding of the multiple processes contributing to hemodynamic dysfunction in Barth syndrome. Through utilization of integrated molecular, chemical, and lipidomic approaches in conjunction with high-resolution respirometry, multiple novel mechanistic roles of tafazzin in regulating cardiolipin and lysocardiolipin homeostasis and myocardial signaling have been identified, and their resultant effects on mitochondrial electron transport chain function, bioenergetics, and cardiac transcriptomic networks delineated. Additionally, we employed double crosses of genetic models of key enzymes involved in the cardiolipin remodeling process, namely cardiac myocyte-specific transgenic expression of cardiolipin synthase as well as the ablation of iPLA2γ in the Taz KD mouse model of Barth syndrome, to investigate potential therapeutic strategies to attenuate maladaptive cardiolipin remodeling.

Dissected hearts were placed in mitochondrial isolation buffer (MIB, containing BSA, pH 7.4) and homogenized using 12-15 passes with a Teflon homogenizer at a rotation speed of 120 rpm. Next, the homogenate was centrifuged for 5 min at 850 g, and the supernatant was collected and centrifuged at 7,200 g for 10 min. The pellet was collected and resuspended in MIB without BSA. Mitochondrial protein content was determined using a BCA protein assay (Thermo Fisher Scientific, San Jose, CA). High-resolution respirometry was performed using 50 µg of mitochondrial protein per 2 ml chamber with the substrate and inhibitor addition protocol previously described (27, 31).

Enzymatic characterization of electron transport chain and functional adenine nucleotide translocase activities

Complex I. Complex I (NADH-ubiquinone oxidoreductase) activity was determined by measuring the decrease in the concentration of NADH at 340 nm and 37°C as previously described (32, 33). The assay was performed in buffer containing 50 mM potassium phosphate (pH 7.4), 2 mM KCN, 5 mM MgCl2, 2.5 mg/ml BSA, 2 µM antimycin, 100 µM decylubiquinone, and 0.3 mM K2NADH. The reaction was initiated by adding purified mitochondria (5 µg). Enzyme activity was measured for 5 min and values were recorded 30 s after the initiation of the reaction. Specific activities were determined by calculating the slope of the reaction in the linear range in the presence or absence of 1 µM rotenone (Complex I inhibitor).

Complex II. Complex II (succinate decylubiquinone 2,6-dichloroindophenol (DCIP) oxidoreductase) activity was determined by measuring the reduction of DCIP at 600 nm as previously described (33, 34). The Complex II assay was performed in buffer containing 25 mM potassium phosphate (pH 7.4), 20 mM succinate, 2 mM KCN, 50 µM DCIP, 2 µg/ml rotenone, and 2 µg/ml antimycin. Purified mitochondria (5 µg) were added prior to initiation of the reaction. The reaction was initiated by adding 56 µM decylubiquinone. Specific activities were determined by calculating the slope of the reaction in the linear range in the presence or absence of 0.5 mM thenoyltrifluoroacetone (Complex II inhibitor).

Complex III. Complex III (ubiquinol-cytochrome c reductase) activity was determined by measuring the reduction of cytochrome c at 550 nm and 30°C.
The Complex III assay was performed in buffer containing 25 mM potassium phosphate (pH 7.4), 1 mM EDTA, 1 mM KCN, 0.6 mM dodecyl maltoside, and 32 µM oxidized cytochrome c, using purified mitochondria (1 µg). The reaction was initiated by adding 35 µM decylubiquinol. The reaction was measured following the linear slope for 1 min in the presence or absence of 2 µM antimycin (Complex III inhibitor). Decylubiquinol was made by dissolving decylubiquinone (10 mg) in 2 ml acidified ethanol (pH 2) using sodium dithionite as a reducing agent. Decylubiquinol was further purified with cyclohexane (32, 33, 35).

Complex IV. Complex IV (cytochrome c oxidase) activity was determined by measuring the oxidation of ferrocytochrome c at 550 nm and 25°C. The Complex IV assay was performed in buffer containing 10 mM Tris-HCl and 120 mM KCl (pH 7.0), using purified mitochondria (2.5 µg). The reaction was initiated by adding 11 µM reduced ferrocytochrome c and monitoring the slope for 30 s in the presence or absence of 2.2 mM KCN (Complex IV inhibitor) (33, 36).

Complex V. Complex V (F1 ATPase) activity was determined using a coupled reaction measuring the decrease in NADH concentration at 340 nm and 37°C as previously described (37-39).

Oxidized lipid metabolite analysis

Tissues (~100 mg) were quickly washed with cold PBS (pH 7.4) solution, blotted, snap-frozen in liquid nitrogen, and stored at −80°C until extraction. For extraction, 2 ml of ice-cold methanol/CHCl3 (1:1 v/v with 1% HAc) and 2 µl of antioxidant mixture (0.2 mg/ml BHT, 0.2 mg/ml EDTA, 2 mg/ml triphenylphosphine, and 2 mg/ml indomethacin in a solution of 2:1:1 methanol/ethanol/water) were added to the tissue samples. Internal standards (250 pg each of TXB2-d4, PGE2-d4, LTB4-d4, 12-HETE-d8, 13-HODE-d4, and 9,10-DiHOME-d4 in 5 µl acetonitrile) were also added at this step. The samples were immediately homogenized and subsequently vortexed several times during a 15 min incubation on ice. Next, 1 ml of ice-cold water was added to the sample, which was briefly vortexed and centrifuged at 1,500 g for 15 min. The CHCl3 layer was transferred to a new tube and the remaining methanol/water layer was reextracted with 1 ml of CHCl3 and centrifuged at 1,500 g for 15 min. The combined CHCl3 layers were dried down with N2 and reconstituted in 1 ml of 10% methanol solution. The reconstituted solution was immediately applied to a Strata-X solid phase extraction cartridge that had been preconditioned with 1 ml of methanol followed by 1 ml of 10% methanol. The cartridge was then washed with 2× 1 ml of 5% methanol, and additional solvent was flushed out with N2 at a pressure of 5 psi. Eicosanoids were eluted with 1 ml of methanol containing 0.1% HAc. All cartridge steps were carried out using a vacuum manifold attached to a house vacuum line. After the organic solvent was evaporated with a SpeedVac, the residues were derivatized with N-(4-aminomethylphenyl)pyridinium (AMPP). The derivatization with AMPP was performed as previously described in detail (30). Briefly, 12.5 µl of ice-cold acetonitrile/N,N-dimethylformamide (4:1, v:v) was added to the residue in the sample vial. Then 12.5 µl of ice-cold 640 mM [3-(dimethylamino)propyl]ethyl carbodiimide hydrochloride in HPLC grade water was added. The vial was briefly vortexed and 25 µl of 5 mM N-hydroxybenzotriazole/15 mM AMPP in acetonitrile was added. The vials were vortexed briefly and placed in a 60°C water bath for 30 min.
Metabolites were analyzed using a hybrid tandem mass spectrometer (LTQ-Orbitrap, Thermo Scientific) via selected reaction monitoring in positive ion mode with sheath, auxiliary, and sweep gas flows of 30, 5, and 1, respectively. The capillary temperature was set to 275°C and the electrospray voltage was 4.1 kV. Capillary voltage and tube lens were set to 2 and 100 V, respectively. Instrument control and data acquisition were performed using the Thermo Xcalibur V2.1 software.

Mitochondrial high-resolution respirometry

Mice used for experiments were sacrificed and the hearts were immediately removed and dissected on ice (4°C ambient temperature). Briefly, the dissected heart was placed in mitochondrial isolation buffer and processed as described above.

Tafazzin deficiency results in altered choline and ethanolamine glycerophospholipid molecular species

Cardiolipin molecular species remodeling involves the coordinated regulation of various phospholipase, acyltransferase, and transacylase activities. Mutants in tafazzin have previously been associated with defective transacylation of specific acyl chains from choline and ethanolamine glycerophospholipid molecular species (3, 8). In the present study, loss of tafazzin enzymatic activity in the Barth syndrome mouse model results in the accumulation of specific choline diacyl (D) glycerophospholipid molecular species containing linoleic acid in the sn-2 position, specifically D16:0-18:2 and D18:0-18:2 (Fig. 2A). Due to the increased linoleic acid content in choline glycerophospholipid molecular species, the utilization of linoleic acid to synthesize arachidonic acid by acyl chain elongation is also altered, as evidenced by an increased content of 20:3- (an intermediate in the synthesis of 20:4 from 18:2) and 20:4-containing molecular species (e.g., D18:0-20:3 and D18:0-20:4). Furthermore, molecular species containing docosahexaenoic acid at their sn-2 positions are decreased, including D16:0-22:6 and D18:2-22:6, thus demonstrating an imbalance in the architectural restructuring of choline glycerophospholipid molecular species. The increased presence of ω-6 polyunsaturated fatty acids (i.e., linoleic acid and arachidonic acid) may partially account for the deficiency of docosahexaenoic acid, because the biosynthesis of ω-6 and ω-3 polyunsaturated fatty acids competes for the same enzyme systems. Interestingly, analysis of ethanolamine glycerophospholipid molecular species also displayed an overall increase in analogous molecular species.

The Complex V assay was performed in buffer containing 50 mM Tris-HCl, 25 mM KCl, 5 mM MgCl2, 4 mM Mg-ATP, 200 µM K2NADH, 1.5 mM phosphoenolpyruvate, 5 units pyruvate kinase, 5 units lactate dehydrogenase, 2.5 µM rotenone, and 2 mM KCN, using purified mitochondria (10 µg). The reaction was initiated by the addition of mitochondria and was monitored for 6 min. The slope in the linear range was used to calculate the reaction rate. Oligomycin (2.5 mg/ml) (Complex V inhibitor) was added to designated cuvettes to calculate the specific Complex V activity.

Microarray analysis of the cardiac transcriptome

RNA was extracted from 2-month-old male WT and Taz KD mice using Trizol and the RNeasy extraction kit (Qiagen). RNA integrity was calculated and transcriptome analysis was performed using an Illumina BeadArray. Quantile analysis was utilized for postprocessing expression analysis.
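The quantile step is not detailed in the text; below is a minimal sketch of how quantile normalization of an expression matrix is typically performed (illustrative Python with NumPy; the array names, shapes, and toy values are our assumptions, not the authors' pipeline, and ties are handled naively).

```python
import numpy as np

def quantile_normalize(x: np.ndarray) -> np.ndarray:
    """Quantile-normalize an (n_genes, n_samples) expression matrix.

    Forces every sample (column) to share the same empirical
    distribution: the mean of the sorted values at each rank.
    """
    order = np.argsort(x, axis=0)               # per-sample ranking
    ranked = np.take_along_axis(x, order, axis=0)
    mean_quantiles = ranked.mean(axis=1)         # reference distribution
    out = np.empty_like(x, dtype=float)
    for j in range(x.shape[1]):
        out[order[:, j], j] = mean_quantiles     # map ranks back
    return out

# Toy example: 4 genes x 2 samples (hypothetical values)
expr = np.array([[5.0, 4.0],
                 [2.0, 1.0],
                 [3.0, 4.5],
                 [4.0, 2.0]])
print(quantile_normalize(expr))
```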
Statistical analysis

Data were analyzed using a two-tailed unpaired Student's t-test. Differences were regarded as significant at *P < 0.05 and **P < 0.01. All data are reported as the means ± SEM unless otherwise indicated.

Identification of the cardiac cardiolipin phenotype of the developmental inducible Taz KD mouse model of Barth syndrome

We used a multidisciplinary approach to investigate the biochemical and biophysical mechanisms leading to mitochondrial dysfunction resulting from tafazzin loss of function in mice. Mass spectrometric analysis of myocardial cardiolipin molecular species was performed by MDMS-SL analysis using the M + 1/2 isotopologue approach we previously developed (29). The results revealed dramatic alterations in cardiolipin content and molecular species distribution induced by tafazzin loss of function (Fig. 1A). Quantitative analysis of cardiolipin molecular species revealed a dramatic decrease in linoleic (18:2)-enriched molecular species, most notably tetra-18:2 (18:2-18:2-18:2-18:2), which is the major molecular species of cardiolipin in myocardium (Fig. 1B). Importantly, the decrease in tetra-18:2 molecular species is a hallmark characteristic of Barth syndrome (46).

Fig. 1 (caption, in part): Selective cardiolipin molecular species containing dihomo-γ-linolenic acid (20:3) or docosahexaenoic acid (22:6). Molecular species below 0.2 nmol/mg protein were omitted from the figure for visual clarity. C: Analysis of lysocardiolipins revealed an increase in both dilysocardiolipin and monolysocardiolipin molecular species. Molecular species below 0.1 nmol/mg protein were omitted from the figure for visual clarity. Values represent the mean quantitative value of molecular species ± SEM (N = 3 hearts per group; black bars, wild-type littermates; white bars, Taz KD mice). *P < 0.05, **P < 0.01.

Decreased tafazzin activity results in altered myocardial generation of biologically potent oxidized signaling metabolites

Signaling metabolites generated from the oxidation of linoleic, arachidonic, and docosahexaenoic acids are potent mediators of calcium homeostasis, inflammation, and vascular regulation (47-51). Examination of Taz KD myocardium revealed the complex dysregulation of oxidized 18:2, 20:4, and 22:6 fatty acyl molecular species. Analysis of multiple eicosanoids revealed increases in 5-HETE and 11-HETE as well as a decrease in 15-HETE content in Taz KD compared with the wild-type littermate myocardium (Fig. 3A). Interestingly, cardioprotective EETs were unchanged in myocardium. Analysis of prostanoids revealed an increase in PGE2, PGF2α, TXB2, 6-keto-PGF1α, and PGF1α metabolites in the Barth syndrome mouse model that are likely to result in multiple pathologic alterations in inflammation, ion channel function, and cellular signaling cascades. The glutamate-stimulated oxidation was increased by 25% during state 3 respiration in cardiac mitochondria isolated from the Taz KD mice compared with wild-type littermates, which suggests a dramatic shift toward the selection of amino acids for preferential substrate oxidation (Fig. 4C). In order to test the adaptability of mitochondria to the utilization of multiple substrates entering the TCA cycle, pyruvate and glutamate were employed to determine the dynamic flux of these TCA cycle substrates in the wild-type and Taz KD mouse models.
Utilization of pyruvate and glutamate as substrates demonstrated a 15% decrease in state 3 respiration, suggesting that the redox capacity and metabolic flexibility of the TCA cycle in isolated cardiac mitochondria from the Taz KD mice is deficient relative to wild-type littermates (Fig. 4D). Comparison of multiple substrate combinations driving state 3 respiration, measured as substrate control ratios, demonstrated that fatty acid oxidation is markedly impaired in the Taz KD mouse model, yet amino acid fermentation utilizing glutamate appears to predominate as the preferential fuel to meet energetic demands (Fig. 4E). This selective shift in substrate oxidation will lead to multiple downstream bioenergetic repercussions, because normal myocardium generally utilizes fatty acids and glucose, not amino acids, as primary fuel substrates under physiological conditions.

Inhibition of tafazzin expression precipitates alterations in Complex III, Complex V, and glutamate-stimulated adenine nucleotide translocase activities

The efficiency and enzymatic activity of the electron transport chain have been closely associated with alterations in the lipid composition of mitochondrial membranes and in the content and molecular species composition of cardiolipin molecular species in particular (11, 12).

Generation of these oxidized metabolites is initiated by fatty acid release by phospholipases. Analysis of the biologically potent oxidized metabolites of 18:2 fatty acid revealed decreases in 9-HODE, 9-oxoODE, and 9(10)-EpOME, but not other oxidized 18:2 derivatives, demonstrating selective metabolic channeling of the 18:2 fatty acyl chains present in phospholipids due to decreased tafazzin-mediated transacylation (Fig. 3B). Investigation of oxidized 22:6 aliphatic chain content, which would be prone to oxidation due to its high degree of unsaturation, revealed a selective decrease in the anti-inflammatory metabolites RvD1 and RvD2 in the Taz KD compared with wild-type control myocardium (Fig. 3C). In contrast, DiHDoHE, DiHDPA, and HDoHE were unchanged.

Tafazzin deficiency leads to altered myocardial substrate utilization for respiration

Alterations in the mitochondrial membrane lipidome precipitate bioenergetic inefficiency and impair adaptive alterations in substrate utilization during metabolic transitions. In the present study, redistribution of acyl chains in cardiolipin and mitochondrial glycerophospholipids in the Taz KD model resulted in a shift in mitochondrial metabolism in a substrate-specific manner. Pyruvate oxidation was unaltered in isolated Taz KD cardiac mitochondria compared with wild-type littermates (Fig. 4A). However, fatty acid oxidation utilizing palmitoyl-L-carnitine as substrate was decreased by 25% during state 3 respiration in Taz KD cardiac mitochondria compared with wild type. Moreover, this deficiency was maintained upon addition of succinate, which combines both Complex I and Complex II electron and proton donation through the respiratory chain (Fig. 4B). To investigate the differences in state 3 substrate utilization in myocardium from the murine model of Barth syndrome, we measured functional ANT activity driven by pyruvate, glutamate, palmitoyl-L-carnitine, and succinate. Surprisingly, analysis of functional ANT activity revealed a selective 6-fold increase in glutamate-stimulated activity in isolated cardiac mitochondria from Taz KD mice compared with wild-type littermates (Fig. 5B).
This suggests that altering the mitochondrial lipidome influences the substrate selectivity of the ANT, leading to downstream changes in electron transport chain flux and coupling efficiency.

Inducible Taz KD results in compensatory alterations in the myocardial transcriptome

To gain further molecular insight into the compensatory mechanisms that result from alterations in the mitochondrial lipidome and myocardial membrane remodeling, we examined the cardiac transcriptome in the Taz KD mouse model of Barth syndrome. Gene Set Enrichment Analysis (GSEA) (54, 55) revealed dramatic increases in various processes involved in amino acid synthesis, protein translation, and amino acid metabolism, in addition to increases in nucleotide metabolism, GTP hydrolysis, and folate metabolism, all of which suggest dramatic compensatory metabolic alterations in response to changes in the mitochondrial lipidome and the accumulation of lysocardiolipin (Table 1). Pathways that were transcriptionally downregulated included branched-chain amino acid catabolism as well as valine, leucine, and isoleucine degradation. Thus, the unexpected effects of increased amino acid synthesis and interconversion, in combination with the decreased catabolism of amino acids, revealed dramatic alterations in amino acid and protein metabolism in response to altered lipid remodeling in the mitochondrial membrane, which collectively precipitated alterations in substrate utilization.

Removal of doxycycline from the diet for 2 months attenuates bioenergetic and lipidomic dysfunction in the inducible Taz KD mouse model

Utilizing the inherent genetic malleability of the inducible Taz KD mouse model, we examined the effect of removal of doxycycline from the diet following treatment to determine if the distinctive bioenergetic and lipidomic phenotype observed in the Taz KD model was restored to wild-type levels. Following removal of doxycycline from the diet for 2 months, glutamate-stimulated adenine nucleotide translocase activity in Taz KD cardiac mitochondria, which was 6-fold increased during knockdown (Fig. 5B), was attenuated to the wild-type level (supplementary Fig. IA). Additionally, high-resolution respirometry analysis of state 3 respiration under various substrates revealed an attenuation of palmitoylcarnitine-, glutamate-, and pyruvate/glutamate-stimulated state 3 respiration in Taz KD mice compared with wild-type mice after removal from doxycycline treatment for 2 months (supplementary Fig. IB).

Due to extensive alterations in cardiolipin molecular species composition and the accumulation of lysocardiolipin in cardiac mitochondria isolated from Taz KD mice, we measured the activities of the electron transport chain complexes in wild-type mice and the mouse model of Barth syndrome. Examination of electron transport chain activities revealed a 45% decrease in Complex III activity and a 25% decrease in Complex V activity in cardiac mitochondria isolated from Taz KD mice compared with wild-type littermates (Fig. 5A). These results demonstrate the essential biophysical role of alterations in mitochondrial membrane lipid composition in the Barth syndrome mouse model.
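As a concrete illustration of the inhibitor-corrected slope calculations used in the complex activity assays above, the following sketch converts absorbance-versus-time traces into a specific activity. This is illustrative Python, not the authors' analysis code; the extinction coefficient shown (approximately 21.84 mM⁻¹ cm⁻¹ for reduced-minus-oxidized cytochrome c at 550 nm) and all example numbers are textbook assumptions.

```python
import numpy as np

def specific_activity(t_s, a_total, a_inhibited,
                      eps_mM_cm=21.84, path_cm=1.0,
                      volume_ml=1.0, protein_mg=0.001):
    """Inhibitor-sensitive specific activity (nmol/min/mg protein).

    t_s: time points (s); a_total / a_inhibited: absorbance traces
    measured without and with the complex-specific inhibitor.
    """
    slope_total = np.polyfit(t_s, a_total, 1)[0]        # dA/ds, uninhibited
    slope_inhib = np.polyfit(t_s, a_inhibited, 1)[0]    # dA/ds, inhibited
    d_abs_per_min = (slope_total - slope_inhib) * 60.0  # inhibitor-sensitive part
    mM_per_min = d_abs_per_min / (eps_mM_cm * path_cm)  # Beer-Lambert law
    nmol_per_min = mM_per_min * volume_ml * 1e3         # umol/min -> nmol/min
    return nmol_per_min / protein_mg

# Hypothetical 1 min trace of cytochrome c reduction (A550 increases):
t = np.arange(0, 60, 5)
a_no_inhibitor = 0.20 + 0.002 * t
a_with_antimycin = 0.20 + 0.0004 * t
print(specific_activity(t, a_no_inhibitor, a_with_antimycin))
```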
Regulation of ANT activity is partially governed by the molecular composition of cardiolipin, which has been shown to modulate ADP/ATP exchange in a substrate-specific manner to direct bioenergetic metabolite oxidation (52, 53).

Expression of cardiolipin synthase and inhibition of iPLA2γ in conjunction with tafazzin deficiency leads to altered cardiolipin and lysocardiolipin molecular species

Regulation of cardiolipin remodeling involves several enzymatic steps that could be modulated by pharmacologic intervention to decrease the maladaptive cardiolipin remodeling observed in Barth syndrome due to tafazzin deficiency. To determine the potential therapeutic efficacy of increasing cardiolipin synthase (CLS) expression or blocking iPLA2γ expression as a possible treatment for Barth syndrome, we generated a doubly transgenic mouse strain crossing the inducible Taz KD mice with a transgenic mouse strain that expresses human CLS in a cardiac myocyte-specific manner, which was previously demonstrated to increase cardiolipin remodeling (40). Additionally, we crossed the inducible Taz KD mice with a strain that was null for iPLA2γ, which is involved in the generation of monolysocardiolipin for the transacylation of acyl chains during cardiolipin remodeling (56). These genetic models were used to interrogate the potential to restore the alterations in mitochondrial cardiolipin and lysocardiolipin composition and function due to tafazzin downregulation in the murine model of Barth syndrome. Mass spectrometric analysis of the phospholipids of wild-type, Taz KD, Taz KD crossed with CLS-TG, and Taz KD crossed with iPLA2γ KO male mice at 4 months of age (2 months of doxycycline treatment) revealed distinct alterations in cardiolipin and lysocardiolipin molecular species. Analysis of immature cardiolipin molecular species enriched in 16:0, 16:1, and 18:1 acyl chains revealed that increased expression of cardiolipin synthase or ablation of iPLA2γ under conditions of Taz KD increased the content of immature cardiolipin molecular species, demonstrating that CLS and iPLA2γ maintain critical roles in the initial stages of cardiolipin remodeling and synthesis that are partially independent of the presence of tafazzin (Fig. 6A).

The corresponding analysis of the cardiac lipidome also demonstrated a return toward wild-type levels after removal of doxycycline from the diet. More specifically, high-resolution MDMS-SL analysis of cardiolipin revealed that after 2 months of removal of doxycycline, cardiolipin, monolysocardiolipin, and dilysocardiolipin levels returned to wild-type levels in the inducible Taz KD mouse model (supplementary Fig. IC).

Fig. 5 (caption): Regulation of electron transport chain and adenine nucleotide translocase activities in Taz KD mice. A: Electron transport chain activities in isolated cardiac mitochondria revealed a selective decrease in Complex III and Complex V activities in the Taz KD mouse model compared with wild-type littermates. B: Analysis of the functional ANT activities driven by various substrates revealed a dramatic increase in glutamate-stimulated ANT activity in the Taz KD mouse model compared with wild-type littermates. Values represent the mean enzyme activity (nmol/min/mg mitochondrial protein, ETC) or (nmol cATR/mg protein, ANT) ± SEM (N = 5 isolated cardiac mitochondrial preparations per group; black bars, wild-type littermates; white bars, Taz KD mice). **P < 0.01. Data analyzed from GSE33452 using GSEA.
and 18:1-18:1 DLCL compared with Taz KD alone, suggesting that iPLA2γ plays a critical role in the initial and rapid production of DLCL for cardiolipin remodeling in an acyl chain-specific manner (Fig. 6B). DLCL molecular species in CLS-TG×Taz KD mice were similar compared with Taz KD mice.

These results provide further mechanistic insight into the role of tafazzin in the temporal lifecycle of cardiolipin as well as identify potential phospholipid substrate donors used for the transacylation of lysocardiolipin acceptors. Tafazzin deficiency results in the dynamic redistribution of unsaturated acyl chains in the mitochondrial lipidome (primarily in choline and ethanolamine glycerophospholipids), thereby impacting membrane biophysical properties and signaling through alterations in the reservoir of linoleic, arachidonic, and docosahexaenoic fatty acids available for release by phospholipases and subsequent oxidation. Interestingly, loss of tafazzin function in myocardium leads to changes in the mitochondrial lipidome resulting in the dysregulated generation of potent oxidized derivatives of polyunsaturated fatty acids. Thus, tafazzin serves as a previously unrecognized regulator of multiple processes leading to changes in the vasoresponsive and inflammatory capacity of myocardium in Barth syndrome, presumably through its ability to influence acyl chain location in phospholipids, the activity of distinct phospholipases, and/or channeling of polyunsaturated fatty acid substrates to a variety of lipoxygenases, cyclooxygenases, and cytochrome P450 enzymes. We specifically note that oxidized lipid metabolites also serve as key regulators of ion channel function as well as calcium homeostasis, which likely modulate myocardial function in complex metabolic diseases such as Barth syndrome (50, 51, 68). Additionally, because these oxidized molecules originate from the mitochondria, it would appear that mitochondria in Barth syndrome may also impact mitokine signaling, precipitating maladaptive alterations in lipid metabolism, signaling, and bioenergetics.

A comprehensive interrogation of mitochondrial bioenergetics in Barth syndrome myocardium has previously been hindered by the lack of sufficient appropriate specimens to adequately investigate the full spectrum of bioenergetic capacity. The inducible Taz KD mouse model described in the present study represents a valuable tool for investigation of mitochondrial function as an experimental model of Barth syndrome. Herein, we demonstrate that cardiac mitochondria isolated from tafazzin-deficient mice are capable of undergoing effective coupled respiration even with a severe deficiency of mature tetra-acyl cardiolipin as well as the accompanying accumulation of lysocardiolipin species (MLCL and DLCL). High-resolution respirometric analysis of isolated cardiac mitochondria revealed deficiencies in fatty acid oxidation, which were compensated by increased glutamate-stimulated metabolism, thus demonstrating a characteristic shift in the flux of the TCA cycle and in the substrate preference of cardiac mitochondria toward amino acid fermentation over the normally preferred fatty acid metabolism. This phenomenon was further supported by alterations in key transcriptional pathways indicating that tafazzin deficiency precipitates altered cardiolipin remodeling, thereby resulting in a preferential substrate shift toward de novo amino acid biosynthesis as well as increased amino acid utilization by the TCA cycle.
These data appear to support the previous finding of increased whole-body protein catabolism in Barth syndrome patients (69).

18:2-18:2-18:1 MLCL molecular species compared with Taz KD mice, thus demonstrating a critical role of iPLA2γ in the temporal and sequential remodeling of DLCL and MLCL toward a mature cardiolipin molecular species distribution.

DISCUSSION

Deconvolution of the biophysical, temporal, and integrated roles of cardiolipin, its metabolic intermediates (MLCL and DLCL), and the integrated processes by which cardiolipin is remodeled represents a paramount goal in understanding the mechanisms by which the mitochondrial membrane regulates bioenergetic homeostasis (57). Alterations in cardiolipin molecular speciation are evident in a variety of metabolically complex diseases such as diabetes, heart failure, Tangier disease, cancer, hyperthyroidism, and neurodegeneration, as well as Barth syndrome (21, 33, 58-66). Thus, associating the specific roles of cardiolipin molecular species with their causative effects on bioenergetic capacity and metabolic flux is a critical objective for developing therapeutic strategies targeting the mitochondrial lipidome to reestablish bioenergetic homeostasis in a variety of complex metabolic diseases. The results of the present study investigating cardiac bioenergetic, lipidomic, and signaling mechanisms in the Taz KD mouse model of Barth syndrome demonstrate: i) clear resemblance of the mouse model to the human condition, resulting in the accumulation of MLCL with the unexpected accumulation of DLCL; ii) altered distribution of acyl chains in choline and ethanolamine glycerophospholipids; iii) dysregulated generation of potent oxidized lipid metabolites critical for hemodynamic function; iv) a shift in preference toward glutamate-stimulated oxidation; and v) an inability of the regulation of cardiolipin synthetic or mitochondrial phospholipase activities to attenuate altered cardiolipin remodeling in the tafazzin shRNA Barth syndrome mouse model.

The phenotype associated with Barth syndrome is intricately intertwined with the loss of tafazzin function, which sculpts and maintains the optimal cardiolipin molecular species distributions to coordinate metabolic homeostasis (3). The primary cause of death in those afflicted with Barth syndrome is heart failure; however, tools to experimentally dissect the complex molecular pathophysiology of this phenotype did not exist until the generation of the experimental mouse model of Barth syndrome, which mimics the pathophysiologic condition in humans (23, 24). Importantly, the inducible Taz KD mouse model presents with cardiomyopathy as well as several other traits characteristic of Barth syndrome (25). A hallmark of Barth syndrome is the characteristic accumulation of MLCL, which is primarily quantified to confirm a diagnosis (67). Untargeted MDMS-SL analyses of cardiolipin species in myocardium from the Taz KD mouse model revealed a dramatic depletion of tetra-acyl CL species as well as a significant increase in MLCL species, in addition to an unexpected increase in DLCL molecular species.
Although the doxycycline-inducible knockdown construct provides a malleable genetic tool for the investigation of bioenergetic and lipidomic remodeling associated with tafazzin deficiency, several additional caveats should be considered regarding the use of tetracycline-inducible promoters in numerous genetic models. Tetracyclines, including doxycycline, have previously been associated with modulation of secretory phospholipases (84, 85) as well as the inhibition of metalloproteinases, downregulation of cytokines, and cell proliferation (86), all of which should be considered in the phenotypic characterization of the Barth syndrome mouse model. Because the inducible Taz KD mice were compared with wild-type age-matched littermates also fed a doxycycline-enriched diet, the differential phenotype displayed represents the pathological changes induced by cardiac myocyte tafazzin deficiency. Furthermore, it was previously reported that the level at which doxycycline or tetracyclines inhibit these biological processes in vitro far exceeds the pharmacological dose that would be administered in vivo (86, 87), thus underscoring the strength of the pathological findings manifest during tafazzin deficiency that are reversible upon its reexpression.

In summary, the inducible Taz KD mouse represents an efficacious model system that recapitulates many of the underlying myocardial lipidomic and bioenergetic phenotypes present in Barth syndrome. Moreover, the use of this model in conjunction with integrated analytic technologies has allowed increased understanding of the complexity of molecular alterations resulting from tafazzin loss of function that likely exist in Barth syndrome patients. Our results demonstrate that tafazzin loss of function results in profound alterations in the myocardial lipidome, deleterious changes in bioenergetic flux, and altered signaling processes that collectively contribute to the pathology of Barth syndrome.

Pyruvate metabolism was unchanged in tafazzin-deficient mouse mitochondria, indicating that cardiolipin is not obligatory for pyruvate utilization but does appear essential for fatty acid oxidation. This is predominantly due to the unique role of cardiolipin in maintaining the mitochondrial trifunctional complex, which is essential for efficient fatty acid oxidation (70) and cannot be compensated by lysocardiolipin species (i.e., MLCL and DLCL). Alterations in cardiolipin have previously been associated with the regulation of electron transport chain complex components as well as several other pivotal metabolic enzymes in the mitochondrial membrane (71-77). Previously, a decrease in Complex III activity was reported in Barth syndrome fibroblasts from two patients, in addition to several other minor metabolic deficiencies (78). In addition, whole-body oxidative capacity, an indirect measurement of mitochondrial function, was significantly decreased during exercise in humans with Barth syndrome (5). However, mitochondrial function in highly metabolically active tissues such as myocardium is closely integrated with physiologic demands and likely determines the underlying alterations in bioenergetic capacity in vivo in Barth syndrome patients.
To identify the upstream mechanism underlying the shift in preference toward glutamate oxidation in the Taz KD mouse, we investigated the adenine nucleotide translocase, which exhibits differences in substrate selectivity between various tissues as well as a dependence on cardiolipin for its catalytic activity (53, 74, 79, 80). The reorganization of cardiolipin and lysocardiolipin molecular species in this Barth syndrome mouse model likely precipitates a dramatic shift toward glutamate preference in driving ADP/ATP exchange in the mitochondria, thus linking cardiolipin to the substrate-specific regulation of respiration.

A distinct advantage of utilizing an inducible shRNA knockdown mouse model of Barth syndrome is its ability to be combined with other genetic tools to investigate therapeutic strategies targeting cardiolipin metabolism or other aspects of the disease. Recently, genetic models either expressing human CLS in myocardium or eliminating iPLA2γ expression have been investigated regarding their participation in cardiolipin biosynthesis and remodeling in the heart (40, 56). Characterization of the cardiac-specific human CLS-transgenic mouse revealed a significant increase in CL remodeling and tetra-18:2 cardiolipin content compared with wild-type mice, thus identifying a compensatory mechanism to ameliorate the deficiency in tetra-18:2 cardiolipin found in Barth syndrome (40). Furthermore, phospholipases have been suggested as pharmacologic targets to prevent deacylation of cardiolipin, thereby preventing the monolysocardiolipin accumulation that is a hallmark characteristic of Barth syndrome (81-83). In the current study, transgenic expression of human CLS in myocardium or ablation of iPLA2γ in conjunction with tafazzin deficiency did not prevent the decreased cardiolipin content (predominantly tetra-18:2 CL) present in the Taz KD mouse, thus demonstrating that CLS and/or iPLA2γ likely act independently of tafazzin as components of the initial remodeling machinery to maintain a homeostatic balance of cardiolipin molecular species.
8,727.6
2013-02-14T00:00:00.000
[ "Biology", "Chemistry" ]
Research on the Impact Mechanism and Application of Financial Digitization and Optimization on Small- and Medium-Sized Enterprises

Background. With the continuous advancement of digital technology and the accelerated development of digital finance, the rise of digital finance has had a vital impact on the evolution of SMEs. The digital economy has a significant positive impact on the productivity of SMEs. Method. This article first analyzes the digital level of SMEs, studies the incentive effect of digital finance on the level of technological revolution of SMEs, and analyzes the mitigation effect of digital finance evolution on the financing constraints of SMEs. At the same time, it also studies how to develop the digital economy and achieve high-quality business evolution. Result. The digital economy can promote the growth of enterprise productivity through four indirect ways: the scale economy effect, the scope economy effect, the technological revolution effect, and the management benefit effect. Conclusion. The Financial Technology Optimization program helps financial leaders adopt new digital technologies to optimize financial processes while minimizing disruption.

Introduction

The deep incorporation of the new generation of the Internet of Things, big data, Tencent Cloud computing, artificial intelligence, and blockchain with traditional industries has moved commercial and social evolution towards networking, digitization, and intelligence and has gradually formed a new form of digital economy. Digital finance has the attributes of low financing cost, high effectiveness, and freedom from time and space constraints, so it has attracted wide attention from society. SMEs, with their unique volume advantage, are not only an important driving force for commercial growth but also the backbone of technological revolution. Financial technology has shown great evolution potential and space [1-3]. The rapid evolution of financial technology can alleviate information asymmetry and broaden the scope of financial services. All sectors of society in China are very concerned about the evolution of SMEs, because SMEs can play an important role in promoting commercial growth, stimulating market vitality, promoting scientific and technological progress, and expanding employment. There has been a phenomenon in which the concept outruns action in the evolution of digital finance, because digital inclusive finance faces globally common problems such as high cost, low effectiveness, and unbalanced service [4, 5]. How to balance policy support and market evolution is quite difficult. Digital finance refers to various significant commercial activities that utilize digital knowledge and information as critical production factors, state-of-the-art information networks as a critical carrier, and the effective use of information and communication technology as an important driving force for effectiveness improvement and commercial structure optimization. The impact of the digital economy on productivity is closely related to the evolution of information technology. Digital finance can curtail related costs, solve the problem of information asymmetry, and provide the ability to predict threats through the application of intelligent technologies such as computer technology, data communication, big data analysis, and cloud computing in the financial field. Compared with traditional nondigital services, digital finance can better provide appropriate and effective financial services for SMEs at affordable costs.
With the continuous revolution of digital technology and the booming popularity of the digital economy, digital finance has become the necessary path for financial evolution. Digital technology provides solutions to overcome financial difficulties. A digital platform can evaluate the credit threat of hundreds of millions of users through big data analysis technology, which greatly curtails the cost of customer threat control and improves the feasibility of inclusive financial evolution. Digital finance is the deepening of inclusive finance, and inclusive finance must develop in the direction of digital finance. Research on the evolution of digital finance therefore has practical significance for the evolution of SMEs. The stability structure of China's financial science and technology has basically formed, the incorporation of underlying technologies has accelerated, and application pilots have continued. The blockchain industry has ushered in a new round of growth, which provides a strong impetus for commercial evolution. Therefore, studying the conjunction between digital finance and SMEs has an important guiding role and practical significance for the government to innovate the local financial market environment and formulate the financial evolution strategy of SMEs.

Materials and Methods

The state attaches great importance to the evolution of revolution and entrepreneurship activities and has issued many policies to encourage the public to actively carry out such activities. However, since revolution and entrepreneurship require substantial financial support, most SMEs face varying degrees of financing difficulty in the early stages of entrepreneurship. In this context, the birth of digital finance has brought new hope to the financing of SMEs. By constructing an index of the technological and financial evolution level of SMEs, it is found that financial technology can increase enterprise revolution by alleviating the financing constraints of SMEs and can improve the revolution effect of government tax returns. In the new era, the rapid incorporation of big data technology and financial activities and the deep incorporation of information technology and the financial industry have promoted the evolution of financial science and technology, a new type of financial industry, greatly changed the traditional financial service mode, and broken many restrictions on traditional financial services. In the R&D stage of technological revolution in SMEs, enterprises carry out research and evolution, test new technologies or new products with great uncertainty, face high technical threats, and need substantial human capital and fixed equipment investment. Enterprises need the support of revolution threat capital, so at this stage financing constraints have a significant impact on the technological revolution of SMEs [6-8]. The evolution of digital finance can alleviate the financing constraints of SMEs, and the level of commercial evolution and the legal environment are important factors restricting the ability of digital finance to alleviate those constraints. Digital finance can give full play to the advantages of low cost, high speed, and wide coverage through scenarios and data, curtail the threshold and cost of financial services, improve the financing environment of SMEs [9], and more effectively serve inclusive financial entities.
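As a toy illustration of the kind of big-data credit evaluation described above, a platform can score repayment threat with a simple learned classifier. This sketch is not a system referenced by the paper; the features, model choice, and synthetic data are entirely our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical behavioral features per SME: monthly revenue (10k CNY),
# on-time payment rate, and platform transaction count.
X = rng.normal(loc=[50, 0.9, 200], scale=[20, 0.1, 80], size=(1000, 3))

# Synthetic default labels: lower revenue and payment rate -> higher threat.
p_default = 1 / (1 + np.exp(0.05 * X[:, 0] + 8 * (X[:, 1] - 0.8)))
y = rng.random(1000) < p_default

model = LogisticRegression().fit(X, y)
new_sme = [[35, 0.75, 120]]  # a hypothetical applicant
print("estimated default probability:", model.predict_proba(new_sme)[0, 1])
```

The point of the sketch is the cost structure, not the model: once behavioral data are collected at platform scale, each additional credit assessment is nearly free, which is the mechanism by which digital finance curtails customer threat-control costs.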
The level of financial evolution is one of the key factors because, in an underdeveloped financial market, enterprises face higher external financing costs, while a good financial evolution environment helps ease the financing constraints faced by enterprises. Digital inclusive finance is the deepening of inclusive finance [10, 11]. The combination of data and financial revolution products more effectively curtails the information asymmetry between capital supply and demand and more significantly curtails the threshold and cost of financial services [12, 13]. Therefore, digital inclusive finance can ease the financing constraints of SMEs. Regarding the impact mechanism of financial evolution on technological revolution: on the one hand, some scholars believe that financial evolution may encourage SMEs to achieve technological revolution by opening trade or reducing transaction costs, easing the financing constraints of SMEs, and loosening R&D capital investment [14]. On the other hand, some scholars believe that the attributes of technological revolution projects, such as high investment and long cycles, make small- and medium-sized innovative enterprises strongly dependent on external financing. Countries (regions) with a sound financial market system and a high level of financial evolution will optimize the allocation of financial resources, reasonably guide the flow of funds, and curtail the problem of information asymmetry. The indirect mechanism of the influence of the digital economy on enterprise productivity is shown in Figure 1. With commercial society moving from the industrial economy era to the digital economy era, the traditional way of achieving performance growth through factor expansion can hardly meet the needs of the high-quality evolution of enterprises [15, 16]. Under this background, enterprises realize automated and intelligent production and service through digital transformation, curtail their dependence on labor, directly curtail production costs, and improve production effectiveness [17, 18]. At the same time, the data resources generated by the digital transformation of enterprises can not only participate in the production process as production factors together with capital, labor, and other resources, directly driving productivity growth, but also improve the utilization effectiveness and allocation effectiveness of traditional production factors such as capital and labor, thus improving the productivity of enterprises. In the new era, the accelerated incorporation of big data technology and financial activities and the deep incorporation of information technology and the financial industry have promoted the evolution of digital finance, a new type of financial industry, which has greatly changed the traditional financial service model and broken many limitations of traditional financial services. First of all, due to their own limitations, traditional financial institutions generally find it difficult to penetrate financially poor areas. As a product of the combination of digital technology and finance, digital finance has strong geographical penetration, breaks time and space constraints, and allows participation in financial activities anytime and anywhere, improving the availability of financial services.
With the breakthrough evolution of science and technology revolution, the coverage of digital finance will be further expanded, which can curtail the financing cost of SMEs, greatly alleviate the financial exclusion faced by SMEs, and provide financial support for the technological revolution of SMEs. The influence mechanism of digital finance on small and medium enterprises is shown in Figure 2. Secondly, financial science and technology can fully understand SMEs based on big data analysis, improve enterprise information transparency, alleviate information asymmetry between financial institutions and SMEs, curtail adverse selection and moral hazard, transform idle resources in the market into an effective supply, curtail resource waste, optimize resource allocation, and provide the necessary conditions for improving the technological revolution of SMEs. First, digital finance enables SMEs to obtain the same market opportunities as large enterprises, helps them establish market credit, and improves sales revenue. Third, it promotes the internal information construction of enterprises, improves the standardization and effectiveness of management, and curtails the management cost of SMEs. Fourth, it makes the revolution of products and services the key to winning market competition and improves the revolution consciousness of SMEs. The increase and diversification of consumption create market opportunities for SMEs. Digital finance can promote the technological revolution of SMEs by promoting e-commerce, affecting total consumption and its structure, easing financing constraints, and enabling technology spillover. The empirical results show that the evolution of digital finance in China significantly promotes the technological revolution of SMEs, and the influence mechanism is as follows: first, digital finance improves the profitability of enterprises by increasing sales revenue and reducing management costs. Second, by reducing the cost of borrowing and improving the structure of borrowing, digital finance makes the structure of enterprise borrowing more long-term and eases the credit constraints of enterprises. Third, the payment, monetary fund, insurance, credit, and other business functions of digital finance have significantly promoted the technological revolution of enterprises. In addition, the classification study found that the evolution of digital finance in China is very uneven among regions, especially in the central and western regions.

Results

China's complete industrial system is supported by industrial chain clusters composed of large- and medium-sized enterprises and small enterprises with a meticulous, professional, and orderly division of labor. In the supply chain of an industrial chain cluster, it is inevitable that core large enterprises monetize their dominant market position, for example by extending the accounting period, which leads to a shortage of funds for SMEs. Digital finance focuses on the confirmation of core enterprise contracts or commercial bills in the supply chain. With the help of blockchain technology, it can cover the multilevel enterprises in the supply chain and solve the liquidity replenishment problem of some SMEs. The evolution of digital finance also makes it possible to build a public credit information platform for SMEs.
The public credit information platform can not only integrate all kinds of scenario data but also cooperate with all kinds of financial institutions to develop models and investment-loan linkage and to provide SMEs with life-cycle training, guidance, and financing intermediary services, as shown in Figure 3. Digital technology improves the revolution ability of SMEs, and the improvement of the technological revolution level is one of the main ways to improve productivity. In the R&D mode, an open digital R&D management system helps SMEs turn from the traditional closed revolution mode to an open revolution mode with the participation of all departments, even the whole industry chain and the whole society, so that R&D and design activities can be carried out in a multidimensional collaborative network, realizing integrated and networked revolution and improving enterprise revolution ability. In the R&D process, digital design tools such as digital twins and digital simulation can accurately simulate various physical parameters of physical entities and display them in a visual way, achieving R&D revolution in a variety of scenarios in a dynamic and uncertain environment and improving the accuracy of R&D. Digital finance can provide a financial basis for revolution activities and create more entrepreneurial opportunities. Information technology is an important factor promoting the evolution of business models. Internet big data technology has greatly weakened the costs of search, evaluation, transaction, and other aspects, bringing great changes to the traditional business model. Digital finance breaks the space limitation of traditional transactions, enables businesses and consumers to trade online, curtails the financial delivery links in the traditional business model, and improves the effectiveness of financial transactions. Similarly, taking Alipay as an example, the appearance of Alipay changed the way of payment and promoted the evolution of e-commerce, laying the foundation for the transformation of traditional financial business. The evolution of online car hailing, bike sharing, and other fields all benefit from the evolution of digital payment technology. It can be seen that digital finance is of great significance in promoting revolution and creating employment opportunities for SMEs. The role of digital finance in promoting SMEs makes up for the problem that traditional financial institutions cannot take care of backward SMEs and self-employed households, which can affect the effective evolution of local revolution and entrepreneurship activities to a large extent. The transformation and upgrading of the industrial chain not only needs the high-quality evolution of SMEs but also needs the coordinated evolution of industrial clusters.

Figure 2 (panels): Under the background of digital finance, data analysis saves information cost and curtails credit threat. AI: with the help of artificial intelligence technology, the threat assessment cost of financial enterprises for entrepreneurs is significantly curtailed. E-commerce: digital finance promotes e-commerce and thereby improves the profitability and revolution consciousness of SMEs. Big data: digital finance can improve the credit constraints of SMEs, which is conducive to their technological revolution. Technology spillover: digital finance promotes the technological progress of related industries and enterprises, which is conducive to the technological revolution of SMEs. Consumption: the evolution of digital finance stimulates consumption, promotes the upgrading and diversification of the consumption structure, improves the sales revenue of SMEs, and promotes technological revolution.
Digital finance promotes the technological progress of related industries and enterprises through technology spillover, which is conducive to the technological revolution of small and mediumsized enterprises. The evolution of digital finance stimulates consumption, promotes the upgrading and diversity of consumption structure, improves the sales revenue of small and medium-sized enterprises, and promotes technological revolution. clusters. erefore, it is necessary to build an "industrial digital finance" consortium, aggregate the real industry, finance, science, and technology, build an ecological community, based on the industrial chain and industrial ecology, relying on the industrial Internet platform and financial service platform, deeply integrate the industrial chain, revolution chain, and capital chain, and jointly serve and empower the industrial cluster. In this ecological community, finance should play the role of an accelerator. Based on various scenarios of the industrial chain, it should provide science and technology support, data penetration, financial matching for the industry, empower the entity enterprises, and realize the improvement of production effectiveness, product revolution, and service upgrading. For example, through bill settlement and payment settlement platform, digital finance can improve the effectiveness of capital turnover in industrial clusters and provide comprehensive financial services of "stock debt loan investment" for the scale expansion of core leading enterprises. Furthermore, we should provide SMEs in the chain with financing products such as "order loan" to solve the problem of capital turnover. e high-quality evolution of SMEs is an important guarantee to promote high-quality commercial evolution. e central government attaches great importance to the revolution and evolution of SMEs and puts forward higher requirements for financial support to SMEs. In particular, facing the COVID-19 pandemic, protecting small and medium enterprises is to ensure employment. Protecting employment is to protect people's livelihood. SMEs need financial support. e emergence of digital finance provides new financing channels for the revolution and evolution of small and microenterprises and provides an effective means for small and microenterprises to balance revolution and evolution, prevent, and resolve financial threats. Conclusion In the critical period of the transformation of new and old kinetic energy, we should continue to strengthen the construction of relevant financial infrastructure, give certain policy support to SMEs, increase investment in 5G, big data, cloud computing, blockchain, and other fields, improve the ability of independent research and evolution of technology, and promote the integrated evolution of digital technology and financial market. At the same time, we should strengthen the precise support for SMEs and private enterprises, realize one-to-one docking, establish a diversified financial system, provide rich and high-quality financing channels, provide enterprises with lower cost and more convenient financial services, and fully release the vitality of financial technology to promote technological revolution and commercial growth. Digital finance has a significant role in promoting the financing of SMEs and individual entrepreneurs, which can effectively curtail the financing threshold and improve the financing availability of entrepreneurs. 
It also helps to reduce part of the financing cost and improve the utilization rate of resources. In order to further strengthen the application of digital finance in the field of entrepreneurship, in addition to improving the corresponding laws and regulations, we should speed up the construction of a reasonable digital financial system, build professional entrepreneurial digital financial institutions, pay attention to market innovation and the improvement of market order, and provide the most basic guarantee for entrepreneurship by all kinds of actors in China. Data Availability Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. Conflicts of Interest The author declares no conflicts of interest.
4,232.2
2021-10-29T00:00:00.000
[ "Business", "Computer Science", "Economics" ]
Investigation of morphological, electrical, and optical properties of Mn-doped ZnO thin film by sol–gel spin-coating method In this study, ZnO was doped with 0.01% Mn and grown on p-Si by the sol–gel spin-coating method. The obtained thin film was studied to understand the effect of the 0.01% Mn-doping ratio on the optical and electrical properties of the ZnO structure. In this context, first, the morphological structure of the thin film was studied using atomic force microscopy (AFM). The surface structure was found to be homogeneous, and the roughness and fiber size were determined as 27.2–33.6 and 0.595–0.673 nm, respectively. Second, the optical properties were characterized via ultraviolet–visible (UV–Vis) spectrophotometry. Third, the effect of light intensity on the junction properties of the photodiode was studied. The current–voltage (I–V) characteristic of the photodiode was measured in the dark and at different intensities of illumination. The results showed that the current of the photodiode increased with the intensity of illumination from 6.41 × 10⁻⁷ to 5.32 × 10⁻⁴ A, indicating that the photocurrent under illumination is higher than the dark current. After that, other parameters of the photodiode, such as the barrier height and ideality factor, were determined from the forward I–V plots using the thermionic emission model; the barrier height and the ideality factor were found to be 0.74 eV and 5.3, respectively. In addition, the capacitance–voltage (C–V) characteristic was measured at different frequencies and was found to change with increasing frequency. The interface state density (Dit) decreased with increasing frequency, and the series resistance of the photodiode likewise decreased with increasing frequency. All these results indicate that the Mn-doped ZnO thin film is sensitive to light and, owing to this property, can be used in different optoelectronic applications as a photodiode and photosensor. Introduction In recent years, there has been much research on nanostructured materials, because their properties can be controlled through the doping ratio of different materials [10][11][12][13]. Because of these properties, nanostructured materials can be adapted to different applications in optoelectronic devices [1][2][3]. Among nanostructured materials, ZnO is especially attractive and widely studied as a transparent II-VI semiconductor material. To date, many materials have been doped into ZnO to change its optical and electrical properties. A check of the literature shows only a few studies in which the optical and electrical properties of Mn-doped ZnO have been investigated. Therefore, in this study, we investigated the electrical and optical properties of a 0.01% Mn-doped ZnO nanostructured thin film. ZnO is suitable for this research because it is an n-type semiconductor material [15][16][17][18] with a bandgap (Eg) of 3.37 eV at room temperature and an exciton binding energy of 60 meV [2][3][4][5][6]. Therefore, ZnO is well known in different electronic applications, for example gas sensors, varistors, transistors, optical sensors, solar cells, etc. [3][4][5][6][7][8]. In addition, ZnO has many advantages; for instance, it is inexpensive and easy to process.
Due to these properties, it has attracted many researchers and has been used in different applications [7][8][9][10]. The main aim of this study is to investigate the effect of a 0.01% Mn-doping ratio on the electrical and optical properties of ZnO. For this purpose, ZnO was doped with 0.01% Mn to obtain a Mn-doped ZnO solution; using this solution, a thin film was deposited on the surface of p-Si and an Al/p-Si/MnZnO/Al photodiode was produced. The optical and electrical properties of the photodiode were then investigated, and the results showed that the optical and electrical properties of ZnO were changed by the 0.01% Mn-doping ratio. The current-voltage (I-V), capacitance-voltage (C-V), conductance-voltage (G-V), series resistance-voltage (Rs-V), and density of interface states (Dit) of the Al/p-Si/MnZnO/Al photodiode were plotted from the measured results. All results showed that the Al/p-Si/MnZnO/Al photodiode can be used not only as a photodiode but also as an optical sensor in different optoelectronic applications. Experimental details A p-Si wafer and a Mn-doped ZnO solution were used to produce the Al/p-Si/MnZnO/Al photodiode. First, an HF solution was prepared to clean the p-Si wafer, and the wafer was dipped into this solution for 10 s. After that, the RC-cleaning procedure was applied to the p-Si wafer [10][11][12][13][14][15], using acetone, methanol, and deionized water, respectively, in an ultrasonic bath for 5-10 min [13][14][15][16][17][18]. After the chemical cleaning process was completed, an Al ohmic contact was formed on the rear surface of the p-Si wafer using a VAKSIS thermal evaporator system at a pressure of 5 × 10⁻⁵ T. To obtain a homogeneous Mn-doped ZnO solution, 2-methoxyethanol was used as the solvent and monoethanolamine (MEA) as the stabilizer. First, zinc acetate dihydrate Zn(CH₃CO₂)₂·2H₂O was dissolved in 2-methoxyethanol for 10 min; second, manganese acetate tetrahydrate Mn(CH₃CO₂)₂·4H₂O was added and the mixture was stirred for 10 min. Finally, monoethanolamine was added and the solution was stirred for 1 h at 60 °C. After 1 h, a homogeneous solution was obtained, which was used to produce the Al/p-Si/MnZnO/Al photodiode. For this, the Mn-doped ZnO solution was first spin-coated onto the p-Si wafer at 1100 rpm for 25 s. The p-Si substrate was placed on a hotplate at 150 °C for 5 min, and this was repeated three times, followed by a secondary anneal in an oven at 450 °C for 1 h to obtain a fully cured thin film on the p-Si substrate. After that, an Al ohmic contact was formed onto the MnZnO thin film. In this way, the production of the Al/p-Si/MnZnO/Al photodiode was completed; its schematic diagram is shown in Fig. 1. The I-V, I-t, C-V, G-V, and Rs characteristics were measured with a Keithley semiconductor characterization system. The surface morphology of the thin film was investigated with a PARK Systems XE 100E atomic force microscope (AFM), and the roughness of the film was determined with the PARK Systems XEI analysis software. The optical measurements of the thin film were taken with a Shimadzu UV-Vis-NIR 3600 spectrophotometer. All obtained results are shown in the plotted graphs.
Surface morphology Atomic force microscopy (AFM) was used to measure the surface roughness of the Mn-doped ZnO thin film over a 5 µm × 5 µm area using the PARK Systems XEI software. A surface image of the thin film is shown in Fig. 2. As shown in Fig. 2, the thin film is composed of nanofibers. Comparing this result with other studies in the literature: Yang and Fri (http://dx.doi.org/10.1016/j.jmmm.2013.01.026) studied Mn-doped ZnO at different doping ratios, and in their study the AFM images showed nanoparticles with grain sizes of 36 and 32 nm (for x = 0.01 and x = 0.03, respectively). Mansour and Fri studied a ZnO thin film formed of nanofibers, with fiber diameters from 250 to 475 nm. In this study, the obtained thin film consists of nanofibers whose average size was measured as 0.595 to 0.673 nm, smaller than in those studies. Optical properties The optical properties of the thin film were studied to determine the transmittance, absorbance, reflectance, and bandgap using a UV3600 SHIMADZU UV-Vis-NIR spectrophotometer. As is known, the transmittance determines the transparency of the thin film, the reflectance its reflection, and the absorbance its absorption. To determine these properties, the transmittance, absorbance, reflectance, and bandgap spectra of the thin film were drawn from the measured optical data. The transmittance of the thin film is shown in Fig. 3. As shown in Fig. 3a, the thin film is transparent, especially between 400 and 700 nm; the average transmittance was about 87-90.2% between 470 and 600 nm. This shows that the thin film is highly transparent. The reflectance of the thin film is shown in Fig. 3b; it has a peak at about 400 nm, i.e. the maximum reflection lies at about 380-410 nm. The absorbance of the thin film is shown in Fig. 3c; the absorption shows an edge at about 400 nm, i.e. the thin film has an absorption edge at the border of the visible region. The bandgap of the Mn-doped ZnO thin film was calculated from the transmittance-versus-wavelength data. The absorption coefficient (α) of the thin film is determined by α = (1/d) ln(1/T), where T is the transmittance and d is the thickness of the film. The absorption coefficient and the incident photon energy are related by αhν = A(hν − Eg)ⁿ, where A is a constant, Eg is the bandgap of the film, and n depends on the type of transition. The bandgap of the Mn-doped ZnO thin film was determined by plotting (αhν)² versus hν. As shown in Fig. 3d, the bandgap was calculated as 3.28 eV. This value is smaller than the bandgap reported for undoped ZnO in the literature, showing that Mn doping decreased the bandgap of ZnO. Furthermore, the current-voltage characteristic of the photodiode was studied not only in the dark but also under different intensities of light. The obtained I-V results are shown in Fig. 4. As shown in Fig. 4, the photodiode exhibits light-sensitive behavior.
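As a worked illustration of the Tauc procedure described above (computing α = (1/d) ln(1/T) and extrapolating the linear part of (αhν)² versus hν to zero), the following minimal Python sketch shows the extraction; the transmittance array, film thickness, and fit window are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical transmittance spectrum; wavelengths in nm, T as a fraction (0..1).
# Replace with measured UV-Vis data.
wavelength_nm = np.linspace(350, 800, 451)
T = 0.88 - 0.85 * np.exp(-((wavelength_nm - 340) / 30.0) ** 2)  # toy band edge near 380 nm

d_cm = 500e-7            # assumed film thickness (500 nm) in cm
h_eV = 4.135667e-15      # Planck constant, eV*s
c_nm = 2.998e17          # speed of light, nm/s

alpha = np.log(1.0 / T) / d_cm       # absorption coefficient: alpha = ln(1/T)/d
E = h_eV * c_nm / wavelength_nm      # photon energy h*nu in eV
tauc = (alpha * E) ** 2              # (alpha*h*nu)^2 for a direct allowed transition

# Fit the linear band-edge region and extrapolate to (alpha*h*nu)^2 = 0 -> Eg.
mask = (E > 3.1) & (E < 3.4)         # assumed linear region; inspect the plot first
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg = -intercept / slope
print(f"Estimated bandgap: {Eg:.2f} eV")
```

In practice the fit window is chosen by eye from the steep linear portion of the Tauc plot, which is why the bandgap value depends somewhat on the selected region.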
This optical behavior of the I-V characteristic can be analyzed by the thermionic emission relation I = I₀[exp(q(V − IRs)/(nkT)) − 1], where n is the ideality factor, q the electronic charge, k the Boltzmann constant, T the temperature, V the applied voltage, Rs the series resistance, and I₀ the reverse saturation current, given by [1] I₀ = AA*T² exp(−qφb/kT), where φb is the barrier height, A* the effective Richardson constant (equal to 32 A cm⁻² K⁻² for p-Si), and A the active contact area. The ideality factor and barrier height of the Al/p-Si/MnZnO/Al photodiode were found to be 5.3 and 0.74 eV, respectively. As shown in Fig. 4, the reverse current of the Al/p-Si/MnZnO/Al photodiode increased with increasing light intensity, from 6.4 × 10⁻⁷ to 5.32 × 10⁻⁴ A. This shows that the current changes with the intensity of illumination; because of this, the device can be used in different optoelectronic applications. The variation of the photocurrent of the Al/p-Si/MnZnO/Al photodiode is shown in Fig. 5; the photocurrent was analyzed by the relation [5] Iph = APᵐ, where Iph is the photocurrent, A a constant, m an exponent, and P the light intensity. The value of m was determined from the slope of the Log(Iph) versus Log(P) plot and was found to be 1.3. To better understand the effect of Mn doping on the photodiode characteristics of the devices, transient photocurrent measurements were performed under different light intensities. This current is shown in Fig. 6a. As shown in Fig. 6a, when the light is turned on, the current quickly rises to a certain value, and when the light is turned off, the current decreases to its initial state again. The transient photocurrent ratio Ion/Ioff of the photodiode is shown in Fig. 6b. This shows that the current changes with the light intensity, and because of this, the device can be used as an optical sensor in different electronic circuits to detect light. The capacitance-voltage (C-V) characteristic was measured at different frequencies and is shown in Fig. 7. As shown in Fig. 7, the capacitance does not change with frequency in the positive region, but in the negative region it not only changes but also increases with decreasing frequency. As shown in Fig. 8a and b, the C-V and G-V curves of the Al/p-Si/MnZnO/Al photodiode were corrected by the following equations [5][6][7][8][9][10]: Cadj = [(Gm² + ω²Cm²)Cm]/(a² + ω²Cm²) and Gadj = [(Gm² + ω²Cm²)a]/(a² + ω²Cm²), where Cadj is the corrected capacitance, Gadj the corrected conductance, Cm the measured capacitance, Gm the measured conductance, ω the angular frequency, and a a parameter depending on Cm, Gm, and Rs, defined by a = Gm − (Gm² + ω²Cm²)Rs. As shown in Fig. 8b, the Gadj plots showed a peak, which confirmed the presence of interface states; the interface state density (Dit) can be obtained from Dit = (2/qA)·(Gadj/ω)max/{[(Gadj/ω)max/Cox]² + (1 − Cm/Cox)²}, where Cm is the measured capacitance, Cox the capacitance of the insulator layer, ω the angular frequency, and A the contact area of the photodiode. The Dit values of the photodiode were calculated from the Gadj-V plots using this relation. The plot of Dit is shown in Fig. 9. As shown in Fig. 9, the density of interface states decreased with increasing frequency, indicating that the interface states respond differently as the frequency changes; the interface state density thus depends strongly on frequency. The Rs-V plot is shown in Fig. 10.
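The extraction of the ideality factor and barrier height from the forward I-V data described above can be sketched as follows; the I-V arrays, contact area, and the synthetic diode used here are assumptions for illustration, not the measured data of this study.

```python
import numpy as np

q = 1.602e-19      # electronic charge, C
k = 1.381e-23      # Boltzmann constant, J/K
T = 300.0          # temperature, K
A_area = 7.85e-3   # assumed contact area, cm^2 (1 mm diameter dot)
A_star = 32.0      # effective Richardson constant for p-Si, A cm^-2 K^-2

# Hypothetical forward-bias dark I-V data (V in volts, I in amperes);
# synthesized here with n = 5.3 so the fit can be checked.
V = np.linspace(0.1, 0.5, 9)
I = 1e-9 * np.exp(q * V / (5.3 * k * T))

# Thermionic emission in the low-bias region (I >> I0, negligible I*Rs drop):
# ln(I) = ln(I0) + qV/(n*k*T); the slope gives n, the intercept gives I0.
slope, ln_I0 = np.polyfit(V, np.log(I), 1)
n = q / (slope * k * T)
I0 = np.exp(ln_I0)
phi_b = (k * T / q) * np.log(A_area * A_star * T**2 / I0)   # barrier height, eV
print(f"n = {n:.2f}, I0 = {I0:.2e} A, phi_b = {phi_b:.2f} eV")
```

The same least-squares approach applied to the Log(Iph)-Log(P) data would yield the exponent m quoted in the text.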
Rs values were calculated from the capacitance and conductance values in the accumulation region. As shown in Fig. 10, the series resistance depends on the bias voltage as well as on the applied frequency. In addition, the series resistance is higher at low frequency, because the interface states can follow the AC signal at low frequencies, whereas at high frequencies they cannot follow the AC signal and therefore do not contribute [3][4][5][6][7][8][9][10]. The Rs plots showed a peak whose position shifted with increasing frequency. This shows that the series resistance of the Al/p-Si/MnZnO/Al photodiode decreases with increasing frequency. Conclusion The Mn-doped ZnO thin film was fabricated using the sol-gel spin-coating technique. After obtaining the homogeneous solution, it was grown on the p-Si substrate by spin coating. After that, the Al contact was formed on the thin film and the Al/p-Si/MnZnO/Al photodiode was produced. After this process was completed, the electrical and optical properties of the thin film were investigated. The surface morphology of the Mn-doped ZnO thin film was studied with a PARK Systems XE 100E atomic force microscope (AFM), and the roughness of the film was determined using the PARK Systems XEI analysis software. The optical measurements were taken with a Shimadzu UV-Vis-NIR 3600 spectrophotometer. The current-voltage (I-V) characteristic of the photodiode was measured using a Keithley semiconductor characterization system. All outcomes of this study showed that the Al/p-Si/MnZnO/Al photodiode exhibits an optical behavior that depends on the light intensity. Because of this sensitivity to light, it can be developed further and used in different optoelectronic applications. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
3,735.2
2017-12-01T00:00:00.000
[ "Materials Science" ]
Influence of Gamma Irradiation on Localization of Enzymatic Activity During Spermatogenesis of Green Vegetable Stink Bug Nezara viridula (Hemiptera: Pentatomidae) INTRODUCTION The polyphagous, cosmopolitan Nezara viridula is one of the most important pentatomid insect pests in the world. It infests many important vegetable crops in more than 30 families, with a preference for legumes and brassicas (Panizzi, 1997). It feeds on all parts of the plant, including stems, leaf veins, growing shoots, immature fruits, seeds and even flowers. Efficient pest control of N. viridula is achieved mainly by the use of insecticides applied over wide areas, with some suppression by biological control agents (Meglič et al., 2001 and Knight and Gurr, 2007). Autocidal methods and genetic manipulation can be effective against low-density populations dispersed across wide ranges and against high-density pests over a limited range (Knipling, 1971). The Sterile Insect Technique (SIT) is one of various strategies to achieve minimally toxic control of insect pests (Klassen and Curtis, 2005). Keywords: gamma radiation, enzymes, acid phosphatase (ACP), glucose-6-phosphatase (G-6-PDH), ultrastructure (TEM), cytochemistry, spermatogenesis, Nezara viridula. Nezara viridula, a serious pentatomid pest of many agricultural crops, was irradiated with a 40 Gray (Gy) dose of gamma radiation. Ultrastructural and cytochemical studies were carried out to evaluate the activities of the acid phosphatase (ACP) and glucose-6-phosphatase (G-6-PDH) enzymes during the spermatogenesis of non-irradiated and irradiated adult males. These studies emphasized the early and late spermatid stages and the sperm flagellum, which comprises the axoneme and mitochondrial derivatives. The reaction products of ACP activity were associated with the Golgi complex and nuclear membrane at relatively the same level in both control and irradiated individuals at the early spermatid stage. The localization of ACP activity in late spermatids and the spermatozoan flagellum showed a strong reaction in the endoplasmic reticulum cisternae surrounding the axoneme and tail elements, and was also observed on the mitochondrial derivatives; the reaction products were comparatively weak in irradiated males. Glucose-6-phosphatase (G-6-PDH) was found labelling the nuclear envelope at the irradiated early spermatid stage, whereas it showed a negative reaction in control individuals. The G-6-PDH was found mainly in the plasma membrane of the sperm flagellum, the endoplasmic cisternae surrounding the axoneme, and the outer border of the mitochondrial derivatives. The activity of this enzyme was similar in both control and irradiated N. viridula.
Very little is known about the effects of radiation-induced sterilization in Hemiptera. Reported studies have dealt mainly with the determination of effective sterilizing doses (Ameresekere et al., 1971). Partial sterility of adult hemipterans may be achieved after exposure to ionising radiation of 30-100 Gray (Gy) (Maudlin, 1976), and the damage can be inherited (LaChance and Degrugillier, 1969). In N. viridula, radiation has been used to determine the doses required for sterilization of eggs and adults (Mau et al., 1967 and Dyby and Sailer, 1999). However, as well as inducing sterility, radiation damages other organs (Banu et al., 2006; Suckling et al., 2011 and Paoli et al., 2014). The rationale for our study lies in the context of the genetic methods of the sterile insect technique for pest control. As radiation doses increase, the negative effect of radiation on different cells intensifies. It has been suggested that hemipteran insects display inherited sterility, whereby at low doses of radiation the irradiated insects can be fertile, yet their offspring are sterile (Bloem et al., 1999b and Soopaya et al., 2011). A reduction in exposure to radiation improves the fitness of the irradiated insects, making them a more viable option for a population management strategy (Bloem et al., 1999a and Kean et al., 2011). Drastic morphological changes in irradiated testes of N. viridula and spermatogenesis abnormalities were observed recently (Ibrahim et al., in press). Spermiogenesis is the result of a complex process of cellular differentiation of the genetic and energetic material required for fertilization, involving biochemical and cytochemical changes (Andre, 1963; Anderson et al., 1967; Phillips, 1970; Baccetti, 1972; Fawcett, 1975 and Fernandes and Báo, 1998). This phenomenon involves the participation of several enzymes, including phosphatase enzymes (Fernandes and Báo, 1999). Phosphatase enzymes are known for their important functions in metabolism, including the phosphate cycle, tissue transformation, growth, nerve action and the synthesis of fibrous protein (Mohmoud, 1988). Acid phosphatase (ACP) is important in biological processes that require high levels of energy, such as development, growth, maturation and histolysis (Ray et al., 1984). It also plays an important role in intermediary metabolism and the transportation of protein in insects. ACP is a hydrolase that participates in the metabolism of phosphate, which can be used for flagellum motility (Sridhara and Bhat, 1963 and Rousell, 1971). Glucose-6-phosphatase (G-6-PDH) is an important enzyme in insects, mainly active in the fat body where it breaks down glucose, and it appears to be a key enzyme in pentose metabolism (Horie, 1967). The activity of G-6-PDH has been shown in the endoplasmic reticulum and Golgi complex of insect spermatids (Báo and de Souza, 1994 and Furtado and Báo, 1996), and also in the axoneme and mitochondrial derivatives of the spermatozoa of some invertebrate and vertebrate species (Anderson and Personne, 1970 and Bigliardi et al., 1970). Evidence suggests that both enzymes are required for spermiogenesis (Fernandes and Báo, 1999). The enzymatic activities of ACP and G-6-PDH in non-irradiated N.
viridula were described earlier by Fernandes and Báo (1999), but no information is available on the effect of gamma irradiation on the activities of these enzymes. The selection of these enzymes as an indicator of sterility is based on their importance in the reproductive system of insects (Lambremont, 1959 and Moore and Frazier, 1976). The cytochemical approach is useful to determine the functional role of the different elements of the spermatozoan and the role of enzymes in sperm movement and in the fertilization process (Fernandes et al., 2001). The present study displays the localization of acid phosphatase (ACP) and glucose-6-phosphatase (G-6-PDH) and their activities during spermatogenesis of non-irradiated N. viridula and of insects irradiated with a high dose of gamma radiation. Insects Nezara viridula eggs were obtained from the wild and were placed in plastic containers with moistened filter paper and covered with screw-top lids. Nymphs and adults were fed on a diet of green beans and peanuts in an environmental chamber maintained at 25 ± 2 °C and 50 ± 5% RH with a photoperiod of 16:8 (L:D) h (Panizzi and Mourão, 1999). Irradiation Fourth instar nymphs were irradiated to a dose of 40 Gy using a Theratron T-80 Co-60 teletherapy external beam treatment unit (ESR, Christchurch). To ensure unambiguous radiation damage to the bug gonads, the insects were contained in petri dishes (90 mm diameter, 15 mm deep) placed at a distance of 50 cm from the radioactive point source. This point-source geometry limited the dose gradient through the sample to 6%. A four-millimetre-thick piece of Perspex was added to the beam entrance side of the containers to ensure that full dose deposition to the insects occurred. The irradiated nymphs were returned to the aforementioned rearing conditions and were maintained until they moulted into the adult stage. All insects were dissected 24-48 h post final moult. Enzyme cytochemistry TEM Live control and irradiated male bugs were placed in a refrigerator at 4 °C for 15 min prior to dissection to slow them down. Bugs were placed directly in chilled dissection buffer (0.1 M phosphate, 3% sucrose, pH 7). Dissection was rapidly performed using a scalpel and small scissors to remove the bug's head, legs, dorsal terga, wings, and abdominal integument to expose the viscera, whereupon the testes were located and carefully removed into the buffer and then transferred directly to a light fixative (1% glutaraldehyde in 0.1 M cacodylate buffer) for 15 minutes on a rotator. Samples were washed in buffer (0.1 M cacodylate, 5 mM CaCl2, pH 7.2) and cytochemical samples were transferred to either an acid phosphatase assay solution (7 mM cytidine-5'-monophosphate, 2 mM cerium chloride, 5% sucrose in a 0.1 M tris-maleate buffer at pH 5.0; following Pino et al., 1981) or a glucose-6-phosphatase assay solution (5 mM glucose-6-phosphate, 5 mM manganese chloride, 4 mM cerium chloride, 5% sucrose in 0.1 M tris-maleate buffer at pH 6.5; following Robinson and Karnovsky, 1983) for 1 hour at 37 °C, then washed with buffer.
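As a side note on the irradiation geometry described above, the quoted 6% dose gradient is consistent with simple inverse-square falloff from a point source over the 15 mm dish depth at 50 cm, as the back-of-envelope sketch below illustrates; attenuation in air and Perspex is neglected, so this is an assumed check rather than the dosimetry actually performed.

```python
# Inverse-square check of the ~6% dose gradient through the sample:
# D_far / D_near = (r_near / r_far)^2 for a point source.
r_near_cm = 50.0               # distance to the near face of the dish
r_far_cm = 50.0 + 1.5          # near face plus dish depth (15 mm)
ratio = (r_near_cm / r_far_cm) ** 2
print(f"Relative dose at the far face: {ratio:.3f} "
      f"(gradient ~ {100 * (1 - ratio):.1f}%)")   # ~5.7%, consistent with ~6%
```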
Transmission electron microscopy (TEM) Cytochemically incubated and control samples were then processed for TEM by first placing them in primary fixative (4% formaldehyde, 2.5% glutaraldehyde, 0.1 M cacodylate, 5 mM CaCl2, 3% sucrose, pH 7.2) for 30 minutes at room temperature (≈20 °C) on a rotator and then overnight at 4 °C. Subsequent steps before polymerisation were carried out at room temperature. Following primary fixation, samples were washed in buffer (0.1 M cacodylate, 5 mM CaCl2, pH 7.2), transferred to secondary fixative (1% osmium and 0.8% ferricyanide in 0.1 M cacodylate buffer) for 2 hours on the rotator, washed in ultrapure water, and dehydrated through an acetone series (70%, 80%, 90%, then 100% EM-grade dry acetone twice). Samples were infiltrated and embedded in Procure 812-Araldite 502 resin (50% resin/acetone, then thrice in 100% resin) and polymerised for 22 hours at 60 °C. Sections 80-100 nm thick were cut on a Leica Ultracut UCT fitted with a Diatome 45° diamond knife, post-stained briefly with 2% uranyl acetate then 0.02% lead citrate, and viewed with a Morgagni (FEI) transmission electron microscope (TEM) operating at 80 kV. The efficacy of the cytochemical staining was assessed with respect to conventional TEM staining of the same structures in the control samples. RESULTS The spermatids of N. viridula undergo specific morphofunctional modifications during spermiogenesis. At the early spermatid stage, the nucleus appears more or less round and has dispersed areas of chromatin (heterochromatin); the nucleolus adheres to the nuclear envelope. The cytoplasm is rich in crescent-shaped Golgi complex and parallel cisternae of endoplasmic reticulum (ER) (Fig. 1a). A cluster of mitochondrial aggregates, which occurs in the cyst, adheres to the nucleus (Fig. 1b). At this stage, the ACP activity is located at the Golgi vesicles and nuclear membrane (Fig. 1a). In irradiated individuals, the ACP reaction product is associated with the nuclear membrane and endoplasmic reticulum at the same level as in control insects. The sperm flagellum consists of an axoneme and two mitochondrial derivatives (Fig. 2). The axoneme follows the 9+9+2 pattern of microtubule arrangement (9 accessory, 9 doublet and 2 central). The mitochondrial derivatives are symmetric in diameter and formed by one large paracrystalline region between two electron-lucent areas and one mitochondrial cristae region limited to the periphery of the derivatives (Fig. 2). Electron-dense reaction product indicated the presence of acid phosphatase (ACP) activity. The reaction product was seen in the endoplasmic reticulum cisternae surrounding the axoneme and the tail elements of late spermatids of control N. viridula. It was also seen in association with the mitochondrial derivatives (Fig. 3a, b), and it was observed in the mitochondrial cristae, which are perpendicular to the axis of the mitochondrial derivatives and spaced at regular intervals (Fig. 3b). The ACP reaction product is scattered in the remnants of cytoplasm (Fig. 3b). At the same stage in irradiated individuals, ACP activity was detected but with a diffuse, weak reaction (Fig. 4a, b, c). Deposition of ACP reaction product was localized in the spermatozoan axoneme, the two mitochondrial derivatives, and the plasma membrane enclosing these elements in non-irradiated N. viridula (Fig. 5). The reaction of glucose-6-phosphatase (G-6-PDH) activity was not seen in the nucleus at the early spermatid stage of control N. viridula (Fig.
6a). However, light labelling of G-6-PDH was observed on the nuclear envelope of early irradiated spermatids (Fig. 6b). Reaction product indicative of G-6-PDH activity was observed in the endoplasmic reticulum cisternae surrounding the axoneme and the tail elements. This reaction product was also detected on the mitochondrial derivatives of late spermatids of both non-irradiated and irradiated N. viridula (Fig. 7a, b). The reaction exhibited the same level of activity in both. DISCUSSION The spermatogenesis process involves the structural and physiological transformation of organelles into forms better adapted to the fertilization process (Andre, 1963 and Anderson et al., 1967). Several enzymes may be involved in this remodeling as well as in the chemical changes that occur during the process. Where the localization of enzymatic activity during spermatogenesis is preserved after gamma irradiation, the influence of radiation may be attributed to differences in the sensitivity of the enzyme loci that control the biosynthesis of the enzymatically active protein (Abdel Megeed, 1987 and Protas and Charialo, 1991). During the early spermatid stage in N. viridula, our observations showed that ACP was detected cytochemically in the Golgi complex and nuclear membrane at the same level in both control and irradiated individuals. The ACP reaction product was strongly observed in the endoplasmic reticulum cisternae surrounding the axoneme and the tail elements, and it was associated with the mitochondrial derivatives of the late spermatid (presperm) and spermatozoan of non-irradiated insects. However, ACP showed a weak reaction product in the irradiated males. ACP activity has been associated with the Golgi complex, which is the site from which proteins finally exit to their respective cellular destinations, such as the plasma membrane, secretory granules and lysosomes in germinal cells (Griffiths and Simons, 1986 and Grab et al., 1997). The presence of ACP has been mainly related to the axoneme, which corresponds to what has been described for the spermatozoa of other insects (Anderson et al., 1967; Bigliardi et al., 1970; Baccetti et al., 1971, 1973; Beaulaton and Perrin-Waldemar, 1973; Báo and Doler, 1990; Báo, 1991; Fernandes and Báo, 1998 and Báo and de Souza, 1994). Acid phosphatase is important in the metabolism of phosphate compounds, which is essential for flagellar motility and for the activities of other enzymes in the axial filament (Báo and de Souza, 1994). ACP is a hydrolase that participates in the metabolism of phosphate, which can be used for tail motility. This enzyme has been localized in proacrosomal vesicles (Anderson et al., 1967; Anderson, 1968 and Souza et al., 1988), on the axoneme (Anderson et al., 1967 and Baccetti et al., 1971) and on the plasma membrane (Anderson et al., 1967 and Báo and de Souza, 1994). On comparing the activities of ACP in the non-irradiated and irradiated N.
viridula, it was found that control males had a higher level of activity than irradiated males. This might be attributed to the enzyme's direct or indirect role in energy production. ACP acts as a means by which phosphates can be added to the phosphate pool, which is necessary for the production of energy needed for flight activity and other physiological processes (Gilbert and Huddleston, 1965; Baker and Lioyds, 1973; Bogitsh, 1974 and Moore and Frazier, 1976). Kumar (2012) studied the impact of cell phone radiation on various biochemical and physiological aspects of the semen of the drone honey bee Apis mellifera (L.). It was observed that the activities of seminal enzymes such as ACP, G-6-PDH, alkaline phosphatase and hexokinase decreased, leading to reduced utilization of the biomolecules and hence an increase in their concentration. Chilton (1974) investigated the effects of gamma radiation on the amount of acid phosphatase (ACP) in the mid-gut of adult beet army-worms, Spodoptera exigua (Hübner). In the present study, early spermatids displayed glucose-6-phosphatase (G-6-PDH) activity at the nuclear membrane of irradiated N. viridula, which was not detected in the non-irradiated insects. In contrast to the control, the irradiated male N. viridula showed increased activity of the enzyme assayed in the present study. The observed increase might reflect an accumulation of unused enzyme resulting from the impairment of the normal growth, development and maturation of the gonads. This finding is in agreement with Mohammed (2006), who detected accumulation of assayed enzymes in sterile females of the cowpea beetle, Callosobruchus maculatus. Resistance to high doses of gamma irradiation is due to a higher capacity for DNA repair in damaged cells, possibly through significantly elevated levels of the relevant enzymes (Cavalloro et al., 1985). Cytochemical studies have demonstrated the presence of sugar residues in intracellular compartments, mainly in the nucleus associated with dense chromatin (Vannier-Santos et al., 1991; Báo and de Souza, 1992; Craveiro and Báo, 1995 and Báo et al., 1997). G-6-PDH was located in the endoplasmic reticulum in germinal cells of N. viridula (Fernandes and Báo, 1999). This enzyme was also detected on the axoneme and mitochondrial derivatives of late spermatids in both non-irradiated and irradiated males. The presence of G-6-PDH in spermatids and spermatozoa indicates the presence of glycogenolytic pathways (Fernandes and Báo, 1999). The activity of G-6-PDH has been shown in the endoplasmic reticulum and Golgi complex of insect spermatids (Báo and de Souza, 1994; Furtado and Báo, 1996), and also in the axoneme and mitochondrial derivatives of the spermatozoa of some invertebrate and vertebrate species (Anderson and Personne, 1970 and Bigliardi et al., 1970). Generally, the inhibition of enzyme activities after irradiation may be attributed to disturbances in physiological and biochemical metabolism. This suggestion is supported by the conclusion of La Brecque and Smith (1963) and Abdel Megeed et al. (1987) that the effect of gamma irradiation may be attributed to the interruption of any of the complicated steps of the metabolic rate by hormonal, biochemical or genetic factors. The marked sensitivity of the enzymes suggests the potential of gamma irradiation to interfere with various energy-requiring processes. Thourburn (1972) theorised that irradiation might interrupt energy supplies by blocking key enzymes, which could stop normal metabolism.
Finally, it is recommended that quantitative and qualitative studies be carried out in the near future to estimate the activity of these two enzymes, ACP and G-6-PDH, in the germ cells of male N. viridula insects before and after irradiation.
4,033.4
2016-12-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Mixed-frequency medium-voltage aging analysis of epoxy in the absence of partial discharges and dielectric heating Premature failures of polymeric insulation under inverter-type electrical stress are predominantly associated with partial discharge (PD) erosion or dielectric heating. In the present contribution, an approach for aging analysis in the absence of the aforementioned mechanisms is proposed and applied to anhydride-cured epoxy samples, which are designed with a recessed shape to achieve PD-free aging. Dielectric heating was found to be negligible under all applied experimental conditions. Aging of samples was performed with a specialized setup for the generation of mixed-frequency medium-voltage (MF-MV) waveforms under controlled temperature and humidity conditions. The health state of samples was evaluated before and after different aging sequences by analysis of potential aging markers, namely the short-term AC breakdown strength, the complex dielectric permittivity (real and imaginary part), the volume resistivity, the glass transition temperature and the characteristic absorbance peaks obtained by Fourier-transform infrared spectroscopy (FTIR). Of these, only the breakdown strength exhibited significant aging effects under hygroelectric stress, which is hypothesized to be caused by localized microcracking due to electromechanical stress. Pure electrical MF-MV stress (i.e. at room temperature and dry conditions) was not found to be critical under the applied experimental conditions. By means of FTIR, hydrolysis was excluded as a possible aging mechanism. In summary, the proposed aging analysis approach was found to be suitable to reveal aging effects empirically as well as to give indications about the underlying aging mechanisms without the need for excessively long or accelerated lifetime testing. Introduction The growing contribution of power electronic devices in medium-voltage (MV) and high-voltage (HV) systems is driven by current trends in electric energy supply. They enable the integration of renewable energy sources [1,2] and allow increased converter power densities in areas where space or weight limitations apply, such as traction [3] and electric aircraft [4,5] applications. For example, solid-state transformers allow a more flexible power conversion at up to 100-fold increased power densities and higher efficiencies compared to their low-frequency counterparts [3,6]. Similarly, inverter-fed MV rotating machines allow more flexible and efficient power conversion [7]. An example for the trend towards increasing voltage levels is the recent step for electric vehicles from 400 V to 800 V systems [8,9]. A main enabler for the increasing penetration of power electronic devices into new areas, such as the MV and HV sector, is the recent progress in wide bandgap semiconductor technologies (SiC and GaN). It allows for reduced switching losses by increasing the switching speeds (>10 kV µs⁻¹) [10,11] as well as for higher conversion power densities by increased switching frequencies (>10 kHz) [12]. Furthermore, higher blocking voltages of SiC- and GaN-based devices (>2 kV) pave the way for a growing number of MV and HV applications [13][14][15].
Review of failure mechanisms under inverter-type stresses The aforementioned maturation in power semiconductor technologies, however, comes with challenges for the involved insulation systems and materials [2], which need to be able to withstand novel types of electrical stresses with respect to higher switching frequencies, voltage levels and switching speeds. Moreover, the latter might lead to transient overvoltages if the slew rate dV/dt is high enough. In this context, three main aging regimes are identified. Partial discharge (PD) erosion. First of all, PDs, if incepted, lead to the erosion of the insulation material over time [16][17][18]. As a consequence, the insulation lifetime will decrease depending on the repetition frequency of the PD events, which mainly occur during the pulse slope [18][19][20][21][22][23]. In addition, it is reported that the PD amplitudes increase with increasing slew rate [20,24]. As systemic overvoltages might occur, e.g. at cable terminations, PDs might incept even at nominal voltages lower than the PD inception voltage (PDIV) [25]. Furthermore, the PDIV decreases significantly with decreasing pressure, which leads to the risk of overestimation of the PDIV with respect to its true value in low-pressure applications, such as in aircraft [26]. Dielectric heating. Another aging regime is caused by excessive heating of the insulation due to dielectric losses generated by high-frequency switching of high voltages [16,22,[27][28][29]. This phenomenon is especially pronounced for polar materials [28]. If the generated heat is not dissipated effectively enough, a thermal breakdown by thermal runaway occurs [30,31]. Even in cases without a thermal runaway, the heating of the insulation might enhance thermally activated aging processes [23,32] or, as in the Eagle Pass incident [33], cavities may open due to drying of silicone grease, which leads to PD inception. Electromechanical aging. Much less is known about aging of polymeric insulation under inverter-type voltage stress in the absence of PDs and/or dielectric heating. However, empirical observations of aging in this regime exist [34][35][36][37][38], but the underlying mechanisms are not yet fully understood. Some authors state a potential contribution of space charges to the observed aging mechanism [19,22,[36][37][38]. It is demonstrated by [36] that space charges are not only relevant at direct current (DC), but also at alternating electric field stress. Even if their density is shown to decrease with increasing frequency [39], the local electric field might be increased at increasing frequencies due to space charges [40]. The interaction of an applied electric field with (space) charges generates an electromechanical force in a solid dielectric material. Based on Griffith's theory of fracture mechanics [41], different models were developed to describe electromechanical aging in dielectrics [42][43][44][45]. These models have in common that they describe microscopic morphological material changes, such as the formation and growth of microcracks in the (sub-)µm range, caused by the energy dissipation due to long-term electromechanical stress. These effects occur primarily in amorphous regions of a polymer [46,47]. As a direct breaking of intramolecular (covalent) bonds is unlikely due to high energy barriers, it is rather the deformation/breaking of intermolecular van der Waals bonds with much lower energy barriers which is responsible for the onset of aging [44,47,48].
The energy barrier which has to be exceeded for bond breaking can be lowered by an applied electric field and space charges [43,47]. The broken bonds, as well as possibly created free radicals, are prone to react with charges and dipoles, such as water [47][48][49]. Where voids/cracks reach a size that allows the inception of PDs, aging will be dominated by PD erosion with electrical treeing [42]. Thus, the electromechanical stress acts as the initiator for fatal aging. Alternating electric field stress introduces cyclic stress into the dielectric, which causes a fraction of polymer chains to occupy conformational states other than their equilibrium under constant stress. This might increase the free volume in parts of the polymer, thus leading to an acceleration in void formation in amorphous regions. The mechanical energy dissipation associated with this cyclic stress is proportional to the excitation frequency and the fourth power of the electric field [48]. In addition, charge trapping/detrapping can create microcracks due to the vibrational energy which is released during the capturing and escaping of charge carriers in and from traps, respectively. This effect is enhanced under alternating electric fields due to the additional repulsive field of trapped charges [49]. In addition to electric field and space charge, higher temperatures enhance the described (thermally activated) electromechanical aging processes [43,45,47]. Humidity, i.e. absorbed water, can be responsible for lowering the activation energy for bond breaking, and it is speculated that the highly polar water molecules accelerate fatigue processes due to alternating electric field oscillations [47]. [48] concludes that 'ingress of water [...] under the influence of the electric field, may induce some kind of mechanical cracking' (p 209-210). Formation of microcracks in polymers in the presence of absorbed water might also be caused by hydraulic pressure induced by alternating electric fields on water droplets [50] or by plasticization (which is typically reflected by a reduction in the glass transition temperature) [51]. If microcracks are filled with a (semi-)conductive medium, such as water, a (field-assisted) crack propagation can occur [42,52]. Furthermore, water-treeing in the amorphous phase after stress-cracking is reported [45,53], and it is observed to be faster at higher frequencies [50,54]. Aim and procedure of this work In order to study aging effects of inverter-type stress on polymeric materials, epoxy is a suitable sample candidate due to its long-term use, e.g. as machine insulation of inverter-fed drives [23,55]. Considering the aforementioned aging mechanisms, stronger PD aging of epoxy insulation with increasing frequency was measured by other authors [56,57]. Thermal breakdowns for epoxy insulation were predicted [27,30,59] and observed [60,61] under high-frequency stress. Since much less is known about aging in the absence of PDs and dielectric heating, the focus of the present study is on aging of epoxy insulation under inverter-type stress in the absence of the aforementioned mechanisms. For this purpose, an approach for aging analysis of polymeric insulation materials is proposed and applied to anhydride-cured epoxy samples. It is noteworthy that extrapolation of typical lifetime curves from high stress conditions for accelerated aging tests to lower stress conditions is questionable, as different aging processes might be at play [49].
This yields a challenge for realistic aging tests, as failure times are expected to be considerably long. Consequently, in the present contribution a different approach is chosen, which is not based on failure time analysis, but on the analysis of (destructive or non-destructive) potential aging markers prior to and after non-destructive aging sequences. Aging markers are selected which might reveal important physical properties of epoxy insulation, such as short-term AC breakdown strength, complex relative permittivity (real and imaginary part), volume resistivity, glass transition temperature as well as spectra obtained by Fourier-transform infrared spectroscopy (FTIR) in order to detect changes in the chemical composition, e.g. from hydrolysis [58][59][60][62][63][64][65][66][67][68][69]. The goal of the present contribution is to evaluate the usefulness of the proposed approach for aging analysis in the absence of PDs and dielectric heating with proof-of-concept measurements. The design and manufacturing process of suitable epoxy samples for PD-free aging is shown in section 2.1. Detailed information about an aging test bench designed for inverter-type stress with controlled ambient conditions is found in section 2.2, whereas the stress profiles are described in section 2.3. Moreover, thermoelectrical simulations are carried out to evaluate whether dielectric heating effects are relevant (section 2.4). The different diagnostic tools (built in-house and commercial) for the analysis of potential aging markers are described in section 2.5. Methods In the following section, the sample manufacturing, preparation and conditioning are explained. Subsequently, the simulation and measurement principles as well as the aging analysis and experimental tools are described. Sample manufacturing, preparation and conditioning A three-component epoxy system, consisting of unfilled epoxy resin, anhydride-based hardener and catalyst (ratio 100:85:0.5), provided as raw materials by ELANTAS, was used. The sample shape was chosen in a way that a recessed plate-to-plate geometry with a thickness of 200 µm and a diameter of 5 mm (of the flat sample part) is achieved, see figure 1. The recessed sample side was designed with a 5 mm radius. This ensured, after applying silver conductive paint (SCP Electrolube) on both sides, a high and uniform electric field stress at the recessed sample part during aging and breakdown tests. Furthermore, mechanical support from the thicker regions around the recessed area was achieved in this way. A cross-section of the designed mold with thickness adjustment steps as well as pictures of the recessed samples are shown in figure 1. All manufacturing steps were carried out under clean room conditions to minimize the amount of impurities in/on the sample. The mixed components were stirred for 5 min and degassed under vacuum (1...10 mbar) for 15 min. Afterwards, the mixture was filled into the mold, which was cleaned with ethanol and prepared with a silicone-based releasing agent (Huntsman QZ 13) beforehand. This was followed by another period of degassing under vacuum for 15 min with subsequent curing in an oven (6 h at 80 °C + 6 h at 160 °C) and demolding thereafter. Samples were cleaned with silicone cleaner (Silikon-EX 2609) in an ultrasonic bath for 10 min to remove potential residuals of the releasing agent. The thickness of every sample was measured individually (Sylvac Hi-Cal 300) and samples not within 200 µm ± 10 µm were discarded.
Samples were subsequently dried at 50 °C for >24 h (according to ISO 62:2008 [70]) and conditioned at the desired humidity conditions for >24 h. In a last step, silver painting of samples with subsequent humidity conditioning for >24 h was carried out. It was found by gravimetric analysis (not shown here) that the used SCP is permeable to water. Gravimetric water absorption measurements on plane samples of 500 µm thickness allowed the calculation of the diffusion coefficient D = 8.64 × 10⁻⁹ cm² s⁻¹, which in turn allowed the estimation of the water absorption saturation time of 6.5 h for the recessed samples of a thickness of 200 µm. This confirms that conditioning of the samples for >24 h is sufficiently long for saturated water absorption. The measurements and calculations followed the procedure described in [70,71]. Mixed-frequency medium-voltage (MF-MV) and ambient stress setup Multilevel inverter-type (or converter-type) voltage stress was synthesized by superimposing a low-frequency alternating current (AC) (50 Hz) sinusoidal voltage and a high-frequency pulse-width modulated (PWM) voltage, which resulted in so-called MF-MV stress. This voltage shape was chosen to enable systematic variations of the sinusoidal AC and the rectangular PWM voltage components. Simultaneous aging of eight samples was achieved with the setup shown in figure 2, which is based on the setup initially designed by Färber et al [19] and was modified during this work. It consists of a BEHLKE HTS 41-06-GSM half-bridge metal-oxide-semiconductor field-effect transistor (MOSFET) switch, which generates a unipolar PWM voltage from a DC source Vp and a DC link capacitor C1 = 1 µF. The PWM signal is supplied by an Arduino Yún. With the help of a coupling capacitor C2 = 2.7 nF, the unipolar PWM voltage becomes bipolar and a 50 Hz AC voltage can be superimposed onto the PWM voltage at the device-under-test (DUT), i.e. the eight epoxy samples in parallel. The resistor R1 = 220 Ω protects the internal diodes of the switch by forcing the current flow through D1/D2 in case of a sample breakdown, which is detected by a current transformer (CT). The rise time of PWM voltage pulses can be varied by changing the value of R2 (currently R2 = 90 Ω), and decoupling of the AC and PWM voltages is achieved by R3 = 600 kΩ. The capacitance of a total of eight samples was measured by means of dielectric spectroscopy (also see section 2.5) as CDUT ≈ 68 pF. The samples were placed inside a test cell with a temperature-controlled heating sleeve. The relative humidity (RH) inside the test cell can be varied between RH = 0...80% at ambient temperature (22 ± 1 °C). The desired RH level was set by a dried air flow that is split into two branches, one of which is directed into a mixing chamber with an RH sensor, while the other passes through a bath of de-ionized water beforehand. By adjusting the volume flow of both branches, the RH level can be controlled, see figure 2. The samples were conditioned according to the procedure described in section 2.1. Subsequently, they were placed inside individual sample holders with holes for air passage between two aluminum electrodes with spring contacts for defined contact pressure. Note that the RH values in this work are given as fixed values, e.g. RH = 0%, even if slight differences cannot be fully excluded due to the inaccuracy (±3%) of the RH sensor. The MF-MV stress setup specifications are given in table 1.
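The quoted 6.5 h saturation time can be reproduced with a simple Fickian order-of-magnitude estimate; the rule of thumb used below (t_sat ≈ 0.5 d²/D for a plate of thickness d exposed on both faces) is an assumption for illustration and not necessarily the exact calculation of [70,71].

```python
# Order-of-magnitude check of the quoted water absorption saturation time.
D = 8.64e-9          # diffusion coefficient, cm^2/s (from the text)
d = 200e-4           # sample thickness, cm (200 um)
t_sat_s = 0.5 * d**2 / D
print(f"t_sat ~ {t_sat_s / 3600:.1f} h")   # ~6.4 h, close to the quoted 6.5 h
```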
MF-MV and ambient stress
Changes of the health state of the samples due to MF-MV and ambient stress were investigated by evaluating potential aging markers (see section 2.5) before and after aging. In each aging sequence, eight samples were stressed simultaneously in the setup shown in figure 2. An example MF-MV waveform, with special focus on the rise time of the PWM voltage pulses with frequency f_PWM = 10 kHz and duty cycle D_c = 0.5, is shown in figure 3. According to the measured rise time τ_r ≈ 90 ns (time between 10% and 90% of the maximum voltage value) and the peak-to-peak PWM voltage V_PWM,pp = 3.5 kV, the slew rate can be calculated as dV/dt = V_PWM,pp/τ_r = 38.9 kV µs⁻¹. The superimposed AC peak voltage was V_AC,p = 5.75 kV, such that the total peak voltage was V_p = 7.5 kV and the root-mean-square (RMS) voltage was V_RMS = 5.3 kV for a duty cycle D_c = 0.5 (D_c denotes the time fraction D_c·T_p of a full PWM period T_p for which a positive PWM voltage is applied). The aging sequences used in this work are listed in table 2, wherein t_d is the aging duration, T the temperature and RH the relative humidity inside the test cell (also applied during conditioning according to section 2.1). Aging was carried out under two different electric stresses and both dry (D) and high-humidity (H) conditions. In case of an early breakdown, i.e. during intended non-destructive aging, the silver paint of the sample was removed and the sample surface was searched for PD traces. This indicates whether the failure was caused by PD erosion in an air gap between the sample surface and the potentially delaminated electrode or whether the breakdown was due to one or more PD-free aging mechanism(s) (see section 3.2 for further analysis).

Thermoelectrical simulations
Two-dimensional thermoelectrical simulations were carried out with COMSOL Multiphysics. In general, the total dielectric losses P_loss,sin for a sinusoidal voltage of angular frequency ω and RMS value V_RMS are defined as

P_loss,sin = ω C_0 ε''_r(ω) V_RMS²  (1)

In order to simulate the dielectric losses in the case of MF-MV stress, it is useful to describe the voltage waveforms by a Fourier series [30]. However, COMSOL Multiphysics can only simulate dielectric heating at a single frequency at a time. For this reason, an approach was chosen in the present work to split the total dielectric losses P_loss generated under combined AC (50 Hz) and PWM voltage stress into a single term for the AC component and a sum term for the harmonics of the PWM voltage as

P_loss = ω_0 C_0 ε''_r(ω_0) V_AC,RMS² + Σ_n ω_n C_0 ε''_r(ω_n) V_n,RMS²  (2)

wherein ω_0 = 2π · 50 Hz, C_0 is the vacuum capacitance and ε''_r the imaginary part of the complex relative permittivity. Due to the linearity of the Fourier series and by introducing a function g(n) as well as a factor c = (V_PWM,pp/2)/V_AC,RMS to relate the RMS values of the PWM and AC voltages, the coefficients V_n,RMS read

V_n,RMS = c g(n) V_AC,RMS  (3)

With the help of a frequency relation factor k and the formulation ω_n = k n ω_0, equation (2) can be written as

P_loss = ω_0 C_0 (ε''_r(ω_0) + Σ_n k n c² g²(n) ε''_r(ω_n)) V_AC,RMS² = ω_0 C_0 ε''_r,eq V_AC,RMS²  (4)

which formally equals equation (1). Thus, introducing an equivalent imaginary part of the complex relative permittivity ε''_r,eq enabled us to enter the whole sum of equation (4) into the COMSOL model.
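The harmonic bookkeeping of equations (2)-(4) can be sketched numerically. The following minimal example assumes the Fourier magnitudes of an ideal bipolar rectangular waveform (duty cycle 0.5) for g(n) and a placeholder loss spectrum eps2(f); the actual g(n) of [30], including the low-pass correction for the finite rise time, is introduced in the next section, so the numbers are purely illustrative.

```python
import numpy as np

# Sketch of the equivalent-loss approach of equations (2)-(4).
# Assumptions (illustrative only): g(n) is the Fourier magnitude of an
# ideal bipolar square wave (duty cycle 0.5, odd harmonics only), and
# eps2(f) is a placeholder for the measured loss spectrum eps''_r(f).
f_ac, f_pwm = 50.0, 10e3          # Hz
V_ac_rms = 5.75e3 / np.sqrt(2)    # RMS of the 50 Hz component
V_pwm_pp = 3.5e3                  # peak-to-peak PWM voltage
c = (V_pwm_pp / 2) / V_ac_rms     # voltage relation factor, equation (3)
k = f_pwm / f_ac                  # frequency relation factor

def eps2(f):                      # placeholder loss spectrum eps''_r(f)
    return 0.02 * (f / 50.0) ** -0.1

def g(n):                         # ideal square wave: odd harmonics only
    return (2 * np.sqrt(2) / (np.pi * n)) if n % 2 else 0.0

harmonics = range(1, 200)
eps2_eq = eps2(f_ac) + sum(k * n * c**2 * g(n)**2 * eps2(k * n * f_ac)
                           for n in harmonics)   # equation (4)
print(f"eps''_r,eq ≈ {eps2_eq:.3f}")
```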
The function g(n) was calculated according to [30], assuming a low-pass characteristic of the PWM voltages with the measured rise time τ_r = 90 ns. The values of ε''_r at different frequencies, temperatures and humidities were measured by dielectric spectroscopy, whereas the volume resistivity values at different temperatures and humidities were obtained by polarization-depolarization current (PDC) measurements (both not shown here). The heat transfer properties of the epoxy samples were measured with a heat transfer analyzer (Isomet 2114), which resulted in the values shown in table 3. By using the approach and input values described above, it was straightforward to perform two-dimensional thermoelectrical simulations of the sample-electrode configuration shown in figure 5 under all possible MF-MV and ambient stress conditions. This reveals the loss density and temperature (increase) inside the epoxy sample. However, the calculated temperature increase in the epoxy sample (with rather low thickness and low dielectric losses) was very small and thus hard to measure experimentally. In order to validate the simulation model experimentally, an approach similar to [72] was applied. For this purpose, phenolic paper (Tufnol SRBF) samples of 400 µm thickness were used, as they exhibit much higher dielectric losses (e.g. by a factor of 30 . . . 66, measured at 1 kHz and 25 . . . 125 °C). It was expected that this leads to significantly higher dielectric heating, which might be easier to measure experimentally with a thermal camera. As for the epoxy samples, the temperature- and humidity-dependent permittivities, dielectric losses and resistivities as well as heat transfer properties of the phenolic paper samples were measured experimentally as input for the thermoelectrical simulation. The comparison between simulated and measured dielectric heating of phenolic paper samples is shown in section 3.1.

Aging evaluation
As mentioned in sections 1.2 and 2.3, the health state of the epoxy samples was assessed before and after each aging sequence described in table 2. It is emphasized that samples aged under high RH levels were dried at RH = 0% for >48 h at room temperature before measuring the potential aging markers in order to reveal irreversible instead of reversible aging effects. For this purpose, the following markers were tested for their suitability in the context of aging under MF-MV and ambient stress: short-term AC breakdown strength, relative permittivity (real and imaginary part), volume DC resistivity, glass transition temperature as well as FTIR spectra. The corresponding experimental setups and measurement procedures are explained in the following.

Short-term AC breakdown strength measurements. Samples were stressed with an increasing AC voltage until breakdown. The samples and electrodes were immersed in insulating oil (Shell Diala S4 ZX-1) to prevent surface discharges. Note that the electrodes used for breakdown testing possess a plate-to-plate shape, with the smaller electrode having a diameter of 5 mm and an edge radius of 5 mm. The resulting uniform electric field stress was assumed to lead to a rather high scattering of the breakdown strength data, which, in general, makes it difficult to detect small changes in the breakdown strength. In addition, the volume/thickness effect of the breakdown strength of solid insulation materials had to be considered.
Consequently, the measured breakdown strength V_bd,m of each sample with slightly varying thickness d_m = 200 µm ± 10 µm was related to a thickness of 200 µm by the power law of [75],

V_bd,200 µm = V_bd,m (200 µm / d_m)^n  (8)

with the thickness-effect exponent n taken from [75]. Note that the thickness-related voltage corrections introduced by equation (8) were a maximum of 5% for the sample thicknesses used in this work. Statistical analysis of the breakdown strength measurements was carried out by representing the data as box plots, consisting of the median and boxes between the 25th and 75th percentiles, with whiskers up to 1.5 times the interquartile distance. Values outside of this range are shown separately as outliers.

Dielectric permittivity measurements. Measurements of the real (ε′_r) and imaginary (ε''_r) part of the complex relative permittivity at varying frequency, temperature and RH were performed by means of dielectric spectroscopy. For this purpose, a setup with guarded test cell and electric supplies was designed and built by Färber and Franck [76]; the measurement procedure is described in [71, 77]. The silver electrodes used during aging were removed with ethanol, and new electrodes with diameter D_el = 4.18 . . . 4.54 mm were applied only at the flat part (thickness d = const.) of the plate-to-plate sample to exclude any influence of the edges, where d ≠ const. The diameter of the new electrodes was measured with an image processing program (ImageJ) for each sample. This value was needed for a correct estimation of the sample capacitance and thus the absolute value of the complex relative permittivity. Reference measurements with and without a guard ring electrode at the sample surface confirmed that the additional guard ring introduced no changes to the measurements (not shown here). It was thus justified to perform further measurements in this work without an additional guard ring.

Volume DC resistivity measurements. The volume DC resistivity was determined by PDC measurements, for which a test bench was designed and built in an earlier work [78]. The measurement procedure consisted mainly of the measurement of the polarization current i_p(t) during an applied DC voltage V_DC = 1 kV over the course of 1 h, with subsequent measurement of the depolarization current i_d(t) for the same time. Their absolute difference i_pd,1h ≈ |i_p(1 h)| − |i_d(1 h)| is related to the so-called apparent DC resistivity (after 1 h) ρ_v,1h as described in [78]. Measurements were performed at RH = 0% and 90 °C. The elevated temperature led to higher current signals, which otherwise would have been too low to measure at room temperature for epoxy samples of the used geometry. Example measurements for samples aged at different stress conditions are shown in figure 4. The solid lines represent the currents filtered with a moving median (ten data points) to eliminate transient current signals (shown in lighter color), caused e.g. by switching events of the temperature controller. Note that the filtered current signals before application of the DC voltage were always below ±10 fA (not shown here), which confirms that no significant offset current component was present in the measurements shown in figure 4.

Glass transition temperature measurements. The glass transition temperature T_g was determined by means of differential scanning calorimetry (DSC). For aged samples, the silver electrodes were removed with ethanol before each measurement. A METTLER TOLEDO DSC 1 device was used at a heating rate of 10 K min⁻¹ in the range 25 . . . 180 °C with two cycles per sample.
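The conversion from the PDC current difference to the apparent resistivity can be illustrated with a short calculation. The sketch below assumes the usual parallel-plate geometry factor ρ = (V/I)·A/d and uses hypothetical current values; the exact evaluation follows [78].

```python
import numpy as np

# Sketch of the apparent-DC-resistivity evaluation after 1 h of
# polarization/depolarization (PDC). The geometry factor is assumed to
# be the parallel-plate form rho = R * A / d; currents are hypothetical.
V_dc = 1e3                   # applied DC voltage in V
d = 200e-6                   # sample thickness in m
D_el = 4.3e-3                # electrode diameter in m (assumed)
A = np.pi * (D_el / 2) ** 2  # electrode area in m^2

i_p_1h, i_d_1h = 85e-15, 25e-15       # hypothetical currents in A
i_pd_1h = abs(i_p_1h) - abs(i_d_1h)   # current difference after 1 h

R = V_dc / i_pd_1h                    # apparent resistance in Ohm
rho_v_1h = R * A / d                  # apparent volume resistivity
print(f"rho_v,1h ≈ {rho_v_1h:.2e} Ohm·m")
```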
For all DSC measurements, a small fraction of the sample material (4 . . . 6 mg, measured with a scale of 0.01 mg accuracy for each sample) was extracted from the flat inner part of the sample, i.e. the location of the highest electrical stress during aging.

FTIR measurements. The chemical composition of the non-aged and aged materials was analyzed by FTIR. Measurements were carried out with a diamond attenuated-total-reflection-type device (Varian 640 Fourier Transform Infra Red Spectrometer) in the wavenumber range 750 . . . 4000 cm⁻¹. For aged samples, the silver electrodes were removed with ethanol before each FTIR scan. Reference measurements (not shown in this work) confirmed that this did not lead to measurable changes in the FTIR spectrum. As the absorbance might vary between different FTIR scans, it is recommended to use reference peaks which are known to be (almost) unaffected by aging. For anhydride-cured epoxy, the absorbance around 1608 cm⁻¹ is reported to be the most stable and independent of aging due to its correspondence to the aromatic structure in epoxy [63]. Thus, all FTIR spectra were related to this absorbance peak.

Results
In the following section, the thermoelectrical simulation results as well as the aging evaluation results are presented.

Thermoelectrical simulation results
The maximum temperature increase at the highest possible MF-MV stress, under dry conditions and at room temperature, is shown in figure 5. The influence of the AC voltage on dielectric heating for the given sample-electrode configuration was almost negligible, and also the high-frequency PWM voltage pulses introduced only a maximum temperature increase of 0.75 K. Note that the different ∆T values at each V²_PWM,pp · f_PWM value reflect the measured frequency-dependent dielectric losses, i.e. ε''_r(ω), as well as the different number of dielectric loss terms in equation (4) for different fundamental frequencies. For example, V²_PWM,pp · f_PWM = 48 kV² kHz can be achieved with V_PWM,pp = 1 kV and f_PWM = 48 kHz or with V_PWM,pp = 3.5 kV and f_PWM = 3.9 kHz, but the dielectric losses measured at 3.9 kHz and 48 kHz, respectively, are not the same, thus leading to slightly different ∆T values in figure 5. Elevated temperatures up to 125 °C and relative humidities up to 80% led to a maximum dielectric heating of 0.75 K and 1.2 K, respectively (not shown here). Thus, the effect of dielectric heating for the sample-electrode configuration used in this work is considered negligible. However, the same simulation model was capable of revealing considerably higher dielectric heating in the case of much higher voltages and frequencies than possible with the used setup. For example, the glass transition temperature T_g ≈ 123 °C of the tested epoxy sample was exceeded at V_PWM,pp/f_PWM combinations of about 25 kV/50 kHz or 100 kV/2.75 kHz. Note that under the latter condition, i.e. V_PWM,pp = 100 kV, the short-term breakdown strength might be exceeded even before a thermal breakdown can take place. Moreover, a higher sample thickness with associated poorer heat dissipation may also lead to a higher risk of excessive dielectric heating. In order to verify the correctness of the developed simulation model experimentally, phenolic paper samples with much higher dielectric losses were used, which in turn caused higher dielectric heating.
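The order of magnitude of these heating results can be cross-checked with a lumped-parameter estimate, ΔT ≈ P_loss·R_th. The sketch below combines equation (1) with assumed values for the equivalent loss and the thermal resistance of the sample-electrode stack; it is a plausibility check under stated assumptions, not a substitute for the FEM model.

```python
import numpy as np

# Lumped plausibility check of the simulated dielectric heating:
# steady-state temperature rise dT ≈ P_loss * R_th.
# The loss factor and thermal resistance below are assumptions.
eps0 = 8.854e-12
d = 200e-6                      # sample thickness in m
A = np.pi * (5e-3 / 2) ** 2     # stressed area in m^2 (5 mm diameter)
C0 = eps0 * A / d               # vacuum capacitance, as in equation (1)

omega0 = 2 * np.pi * 50         # fundamental angular frequency
eps2_eq = 0.5                   # assumed equivalent loss (harmonics included)
V_rms = 5.3e3                   # total RMS voltage

P_loss = omega0 * C0 * eps2_eq * V_rms**2   # equations (1)/(4)
R_th = 30.0                     # assumed thermal resistance in K/W
print(f"P_loss ≈ {P_loss * 1e3:.1f} mW, dT ≈ {P_loss * R_th:.2f} K")
# Sub-kelvin heating, consistent with the simulated <1.2 K increase.
```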
In figure 6, the measured (with standard deviation) as well as the simulated temperatures are shown at different time steps after applying voltage stress for 30 min. The measurements and simulations were evaluated at the same location in proximity to the electrodes. It was found that, due to heating of the nearby MOSFET switch during operation, the ambient air heated up additionally, which was also included in the simulation model. From figure 6, it is evident that the measured and simulated values of the dielectric heating and the subsequent temperature decay agree well. The same test was carried out at different voltage levels (not shown here), which resulted in a similar agreement between experiments and simulations.

Analysis of pre-breakdowns
In case of a pre-breakdown, i.e. a sample breakdown occurring before the end of the intended 48 h of aging, the failed sample was removed from the test cell and the silver paint was removed with ethanol. Each sample was optically examined for PD traces. If present, PD occurred at the sample edges, probably caused by a delamination of the silver paint with associated formation of a small air gap in which PD inception was possible. The failure times of all pre-breakdowns at the corresponding RMS voltage stress levels are given in figure 7. Note that pre-breakdowns from more aging studies than presented in this contribution are included therein. A clear distinction in terms of failure times between samples with and without PD traces can be observed. When a sample breakdown occurred within the first 10 h of aging, traces of PD were always observed. In turn, for later breakdowns, no PD traces were found. This was further confirmed by optical analysis of aged, non-failed samples, which consistently exhibited no PD traces.

AC breakdown strength measurement results
Measurement results of the short-term AC breakdown strength E_bd of a set of epoxy samples before aging and, for a different but identically manufactured set, after aging at the conditions described in table 2 are given in figure 8. Whereas no clear aging effect was visible for samples aged at dry conditions, a clear decrease in E_bd was observed under the same electric stress but additional high RH. In other words, the electrical stressing led to a reduction in the short-term AC breakdown strength only in the presence of (absorbed) water. In order to rule out that humidity alone had a significant influence on E_bd, non-aged samples were conditioned for different periods of time at the same RH level. Subsequent measurement of E_bd either at wet conditions or after drying yielded the results shown in figure 9. From this representation, it is clear that only the combined humidity and electric stress (here called hygroelectric stress) had such a pronounced influence on the breakdown strength.

Dielectric permittivity measurement results
A comparison of ε′_r and ε''_r measurements at different frequencies and temperatures for non-aged epoxy samples and the same samples aged at (H: 5.75/3.5/10/0.5) is presented in figure 10. For the two samples shown, no significant differences were observed, and also the evaluation of all eight samples (in figure 10, for instance, at 10³ Hz) revealed no aging-induced changes. It is noteworthy that dielectric spectroscopy measurements of PD-aged samples exhibited distinct changes in ε′_r, as the sample thickness had been reduced by material erosion but was not corrected for in the calculation (not shown here).
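This thickness sensitivity follows directly from the evaluation of the measured capacitance: ε′_r is back-calculated via C = ε′_r ε0 A/d, so if PD erosion thins the sample while the nominal thickness is still used, the apparent ε′_r is inflated. A minimal sketch with assumed numbers:

```python
import numpy as np

# Apparent permittivity when the nominal (non-eroded) thickness is used
# in the back-calculation eps_r' = C * d / (eps0 * A).
# All numerical values are assumptions for illustration.
eps0 = 8.854e-12
eps_r_true = 3.4                 # assumed true relative permittivity
A = np.pi * (4.3e-3 / 2) ** 2    # electrode area, 4.3 mm diameter (assumed)
d_nominal = 200e-6               # nominal thickness used in the calculation
d_eroded = 180e-6                # actual thickness after PD erosion (assumed)

C_measured = eps_r_true * eps0 * A / d_eroded   # capacitance of eroded sample
eps_r_apparent = C_measured * d_nominal / (eps0 * A)
print(f"apparent eps_r' = {eps_r_apparent:.2f} vs true {eps_r_true}")
# -> 3.78 vs 3.40: erosion shows up as an apparent permittivity increase
```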
This is another indicator that the samples aged in this work were aged in the non-PD regime.

Volume DC resistivity measurement results
The measured PDCs of differently aged samples are shown in figure 11. From the current difference i_pd,1h, the apparent volume DC resistivity ρ_v,1h was calculated according to the procedure described in section 2.5.3 and is presented for non-aged and aged (H: 5.75/3.5/10/0.5) samples in figure 11. It is evident that aging under these conditions caused no measurable changes in the DC resistivity.

Glass transition temperature measurement results
DSC measurements of non-aged and aged (H: 5.75/3.5/10/0.5) epoxy samples resulted in the glass transition temperature T_g values shown as box plots in figure 12. The absolute value was reproducibly around T_g ≈ 123 °C, and the relative difference between non-aged and aged samples was negligible. Thus, hygroelectric aging under the conditions applied in this work had no influence on T_g.

FTIR measurement results
The recorded FTIR spectra of two (out of a total of eight) non-aged and aged (H: 5.75/3.5/10/0.5) epoxy samples are shown in figure 13. Note that the absorbance peak at 1608 cm⁻¹ was taken as a reference value due to its reported independence of aging, as described in section 2.5.5. The focus was on the hydroxyl group between 3400 . . . 3520 cm⁻¹ as well as the ester groups between 1680 . . . 1780 cm⁻¹ and between 1100 . . . 1300 cm⁻¹, respectively [63, 64, 79, 80]. As seen from figure 13, no evidence for the existence or change of the hydroxyl peak was found. Furthermore, no strong changes of either ester-related peak were observed for samples aged under pure electric stress, pure humidity stress or hygroelectric stress, see figure 13.

Discussion
Experimentally validated thermoelectrical simulations of the used electrode-sample configuration showed only a maximum temperature increase due to dielectric heating of <1.2 K, even at the highest possible stress conditions (see section 3.1). These results clearly confirm that dielectric heating for the used sample-electrode configuration cannot lead to a thermal breakdown or other heating-related degradation mechanisms. It should be noted that a thermal breakdown can still occur for the same insulation material if either the switching losses (P_l,PWM ∝ ε''_r V²_PWM,pp f_PWM) are large enough or the heat dissipation of the insulation system is poor enough. Furthermore, the analysis of pre-breakdowns demonstrates that PD-related aging occurs on a much faster timescale than PD-free aging, see figure 7. It ensures that samples aged for 48 h (without pre-breakdown) were not aged by PD erosion. In addition, permittivity measurements revealed no aging-induced changes in ε′_r, which, in contrast, were observed if PD erosion was present during aging. These observations indicate that indeed no PD was present at the investigated sample area under MF-MV and ambient stress for the studies in this work. Thus, the suitability of the used sample design for aging analysis below the PD inception voltage (PDIV) is verified. MF-MV aging at RH = 0% showed no effect on any of the evaluated potential aging markers. Thus, it is assumed that the electric stress conditions used in this work were too low to activate a detectable aging mechanism in the epoxy sample in the absence of humidity, or, more precisely, in the absence of absorbed water.
In the case of combined electric and humidity stress, a significant reduction of the short-term AC breakdown strength was observed (about 25% at H: 4.35/3.5/10/0.9 and about 56% at H: 5.75/3.5/10/0.5). A dominant influence of humidity alone is ruled out by the aging analysis after pure humidity stress. Interestingly, however, the other potential aging markers, such as the real or imaginary part of the relative permittivity, the volume DC resistivity, the glass transition temperature and the FTIR spectra, are unable to detect the aging-induced changes that lead to a significant reduction of the material's electric strength. Whereas these parameters reflect macroscopic material properties representing an average over the sample's volume, the breakdown strength is by its very nature also sensitive to highly localized changes (weakest-link theory). Thus, it is hypothesized that the underlying aging mechanism occurs in a small partial volume of the sample, potentially influenced by changes on a microscopic scale. Similar observations were made in [81] by (online) monitoring of the dielectric permittivity of polyethylene terephthalate (PET) during voltage stress until a breakdown occurred. As in the present work, no changes in the permittivity were observed despite the apparent aging that led to a sample breakdown. Moreover, the absence of alterations of the glass transition temperature further confirms that no plasticization due to the hygroelectric stress occurred. Hydrolysis is excluded as a possible aging mechanism by the FTIR spectra, as no increase in the absorbance peak associated with the hydroxyl group was observed. As can be seen from figure 8 and table 2, the breakdown strength reduction is more significant for larger applied RMS voltage stress (at constant peak voltage stress). This points towards an aging mechanism which depends on the energy dissipated in the sample during aging. Moreover, the results indicate that no uniform aging with changes on a macroscopic scale takes place under the studied experimental conditions. Consequently, the electromechanical aging models described in section 1.1.3 are suited to explain this observation, as they all have in common the formation and growth of microcracks caused by, and dependent on, the amount of dissipated energy. The fact that this aging mechanism appears to occur only in the presence of water/humidity can also be explained within these models. For example, water is reported to lower the activation energy for the breaking of intermolecular van der Waals bonds [47]. In addition, the propagation of microcracks under an applied electric field is described to be more pronounced if they are filled with a conductive medium, such as water [42, 52]. In order to connect these hypothesized aging-induced material changes and the observed reduction in the AC breakdown strength, the breakdown criterion described in [82] can be used. It states that a breakdown is possible when the voltage drop V_x = E_x x over the longest free path length x, along which electrons can be accelerated by the electric field E_x, attains the value W_b/e, i.e.

V_x = E_x x ≥ W_b / e  (9)

Therein, W_b represents the height of the energy barrier that needs to be exceeded by an electron to contribute to hopping charge transport, and e = 1.602 × 10⁻¹⁹ C is the elementary charge. The formation of voids/microcracks in amorphous regions of the polymer by an alternating electric field stress, due to the movement of polymer chains, leads to an increase of the free volume in fractions of the polymer [48].
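Equation (9) gives a feel for the length scales involved. The sketch below computes the free path length x needed to satisfy the criterion for an assumed hopping barrier W_b and an assumed local field E_x; both numbers are illustrative only.

```python
# Worked example of the breakdown criterion of equation (9):
# breakdown becomes possible when e * E_x * x >= W_b,
# i.e. x >= W_b / (e * E_x). W_b and E_x are assumed values.
e = 1.602e-19            # elementary charge in C
W_b = 1.0 * e            # assumed hopping barrier: 1 eV, in J
E_x = 200e6              # assumed local field: 200 kV/mm, in V/m

x_crit = W_b / (e * E_x) # critical free path length in m
print(f"x_crit ≈ {x_crit * 1e9:.1f} nm")   # ≈ 5 nm
# A growing microcrack that extends the free path beyond x_crit, or a
# water-assisted reduction of W_b, both lower the breakdown voltage.
```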
As this also increases the longest free path over which an electron can be accelerated by an electric field, electrons of higher energy are present in fractions of the polymer when an electric field is applied. These electrons are thus capable of hopping over larger energy barriers. Moreover, the energy barrier W_b depends only on the chemical structure of the polymer [82] and is reported to be lowered by electromechanical stress, especially in the presence of water [47]. Thus, it seems plausible that localized microcracking leads to a local reduction in the energy barrier W_b and, according to equation (9), to a reduction in the breakdown strength, as breakdown always follows the weakest path. In other words, the observed reduction in the AC breakdown strength after hygroelectric aging can be explained both by a higher probability of high-energy electrons and by a (local) reduction in the energy barriers for electron hopping charge transport during the AC voltage ramp used for short-term AC breakdown strength testing in this work. The present contribution thus demonstrates how aging of solid insulation materials can be analyzed in the absence of PDs and dielectric heating. Even though the stress conditions applied in this work are higher than in real applications, the approach can be applied to lower and more realistic stress conditions. As a consequence, it provides an effective method for aging analysis under more realistic stress conditions compared to destructive lifetime testing, which inherently needs much higher stress conditions in order to generate a premature failure in reasonable time. However, the results demonstrate that aging markers that reveal macroscopic material properties are not suited to detect aging in the absence of PDs and dielectric heating. Only the AC breakdown strength, i.e. a destructive aging marker, was found to indicate aging effects. This is also of practical relevance, as it points out that the observation of classical non-destructive aging markers (e.g. the dielectric permittivity) during operating conditions is not sufficient to detect early signs of critical aging in the absence of PDs and dielectric heating.

Conclusion and outlook
In the present contribution, an approach for the aging analysis of polymeric insulation under combined MF-MV and ambient stress in the absence of PDs and dielectric heating is presented. For this purpose, a specialized MF-MV aging test bench is used and potential aging markers are evaluated before and after different aging sequences. The key findings of this work are as follows:
• The proposed non-destructive aging analysis approach is applicable to recessed epoxy samples and reveals aging effects in the absence of both PDs and dielectric heating without the need for excessively accelerated or long destructive lifetime testing, but only the destructive aging marker (AC breakdown strength) is able to indicate aging-induced changes.
• The results confirm that PD-related aging occurs on a significantly shorter timescale than non-PD aging.
• The approach allows distinguishing between possible aging mechanisms as well as degradation due to PDs or dielectric heating.
• The dominant aging mechanism influences the material on a localized rather than a uniform scale (as of all potential aging markers, only the breakdown strength changes due to aging).
• The observed reduction in breakdown strength can be explained both by a higher probability of high-energy electrons due to increased free volume caused by electromechanically induced microcracking and by a (local) reduction in the energy barriers for bond breaking.
The measurements carried out throughout this work are intended as a proof of concept. As the applicability of the aging analysis approach has been successfully verified (albeit only for a destructive aging marker), the next step is to conduct a broad study with focus on different stress parameters, such as the RMS and peak voltage stress, the PWM frequency, slew rate and temperature. In terms of material optimization, it seems promising to modify the material with respect to its performance in the presence of water. Approaches in this direction will be part of future work. Since, at the present time, only indications of the hypothesized electromechanical aging mechanism involving microcracking exist, it is recommended to investigate this further with the help of other diagnostics, e.g. microscopic tools. This will enable a better understanding of MF-MV aging mechanisms in the absence of PDs and dielectric heating, a topic which is still underrepresented in current research activities.

Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Trade-off between Photon Management Efficacy and Material Quality in Thin-Film Solar Cells on Nanostructured Substrates of High Aspect Ratio Structures

Although texturing of the transparent electrode of thin-film solar cells has long been used to enhance light absorption via light trapping, such texturing has involved low aspect ratio features. With the recent development of nanotechnology, nanostructured substrates enable improved light trapping and enhanced optical absorption via resonances, a process known as photon management, in thin-film solar cells. Despite the progress made in the development of photon management in thin-film solar cells using nanostructured substrates, the structural integrity of the thin-film solar cells deposited onto such nanostructured substrates is rarely considered. Here, we report the observation of a reduction in the open circuit voltage of amorphous silicon solar cells deposited onto a nanostructured substrate with increasing areal number density of high aspect ratio structures. For a nanostructured substrate in which the areal number density of such nanostructures increases with the distance from one edge of the substrate, the open circuit voltage of small-size amorphous silicon solar cells deposited onto different regions of the substrate decreases as the areal number density of high aspect ratio nanostructures of the front electrode increases. This correlation indicates the effect of the surface morphology on the material quality, i.e., a trade-off between photon management efficacy and material quality. This observed trade-off highlights the importance of optimizing the morphology of the nanostructured substrate to ensure conformal deposition of the thin-film solar cell.

Introduction
Although texturing of the transparent electrode of thin-film solar cells (particularly amorphous silicon solar cells) has long been used to enhance light absorption (and thus the conversion efficiency) via light trapping, such texturing has involved low roughness (i.e., structures with heights less than 100 nm) and thus is limited regarding the degree of light absorption enhancement that is possible in thin-film solar cells (see, e.g., Reference [1]). With the recent developments in nanotechnology, a new approach called photon (light) management [2][3][4][5][6][7] in thin-film solar cells has been developed to further enhance the light absorption in thin-film solar cells; the ultimate goal is to enable even thinner films to be used while still providing higher conversion efficiency. Of the wide variety of approaches to perform photon management in thin-film solar cells, one particularly attractive approach is the use of nanostructured substrates with an array of high aspect ratio nanostructures of greater height above the substrate; such a nanostructured morphology manifests itself in the subsequently deposited thin-film solar cell, thereby allowing improved optical absorption [2][3][4][5][6][7][8]. The higher aspect ratio structures that can be produced are particularly advantageous for enhancing the light absorption in thin-film solar cells with very thin absorber layers, e.g., amorphous silicon (a-Si) solar cells deposited onto a nanostructured substrate comprised of an array of nanopillars (see, e.g., [5]).
Because of the "folded" morphology of the thin-film solar cell subsequently deposited on the nanostructured substrate, typically based on a p-n or p-i-n junction semiconductor device, we refer to such a device as a "folded junction" solar cell [2][3][4]. Despite the great progress in the development of photon management in thin-film solar cells using nanostructured substrates, most of the studies on the use of nanostructured substrates focus on the improvement in optical absorption in folded junction solar cells and do not consider the effect of reduced material quality of thin-film solar cells deposited on such nanostructured substrates. Understanding the relationship between material quality and nanostructured morphology is crucial for practical implementation of nanostructured substrates in thin-film solar cells. Here, we report the observation of a correlation between the reduction of the open circuit voltage (related to the degradation of the material quality) of a-Si thin-film solar cells and the areal number density of high aspect ratio structures of the front contact (i.e., the transparent conducting electrode of the solar cells in the superstrate configuration) of each solar cell. As discussed above, the high aspect ratio nanostructures enable enhanced light absorption in the folded junction solar cells via improved photon management; as a result, a higher areal density of such nanostructures corresponds to an overall increase in the light absorbed by the folded junction solar cell. However, the enhanced light absorption does not necessarily correspond to enhanced conversion efficiency, particularly if the material quality of the absorber layer is degraded [9]. Thus, we report a trade-off between photon management efficacy and material quality in a-Si folded junction solar cells. The results highlight the importance of considering the material quality of the absorber layer of thin-film solar cells deposited onto nanostructured substrates.

Materials and Methods
Amorphous silicon solar cells were deposited onto various nanostructured substrates at the National Renewable Energy Laboratory (NREL). Although a variety of nanostructured substrates were considered, one in particular had a spatial gradient of the areal density of higher aspect ratio nanostructures, which allowed for the convenient study of the effect of the morphology of the nanostructured substrate on the a-Si solar cell device performance (and thus on the material quality). Zinc oxide nanostructures were deposited onto indium tin oxide (ITO)-coated glass substrates using an electrochemical deposition method that is similar to the method described in a patent [3] and a patent application [4] of nLiten Energy Corporation, but with the application of an electrical bias. A brief description of the method is provided here. All chemicals used in the zinc oxide nanostructure growth process were obtained from Sigma Aldrich. The growth solution was prepared by dissolving zinc nitrate (0.001 M), hexamethylene tetramine (0.001 M), and polyethyleneimine (0.03 M) in deionized water. The pH of the solution was adjusted to 10.5, and then the solution was heated to 80 °C. For each sample, a piece of ITO-coated glass (2" × 2") was used as the working electrode (upon which the zinc oxide nanostructures were deposited), a zinc foil (2" × 2") was used as the counter electrode, and a Saturated Calomel Electrode (SCE) was used as the reference electrode. The distance between the working electrode and the counter electrode was 2 cm.
The electrochemical deposition was performed at −1.0 V for different times, depending on the desired average size of the nanostructures. For the substrate with graded areal density of larger nanostructures described below, the deposition time was 10 min. Our hypothesis is that the short deposition time and the resistance of the ITO (reduced applied voltage further from the wire contact to the ITO substrate) resulted in a gradient in the areal density of larger nanostructures, i.e., for the short deposition time, incomplete growth resulted in a gradient in the number of high aspect ratio nanostructures on the substrate, with a lower number of larger nanostructures further away from the edge of the substrate where the voltage is applied. Finally, the coated substrate was withdrawn from the solution and then rinsed, first with deionized water, and then with methanol. Prior to deposition of the amorphous silicon thin-film solar cells, a 250 nm layer of ITO was deposited on top of the nanostructured substrate to ensure that the conductivity of the nanostructured layer is high enough for efficient carrier extraction. Note that this layer is not thick enough to passivate all "structural defects", as will be discussed below [2]. The a-Si:H solar cells with p-i-n structure (superstrate configuration) produced at NREL were deposited on ITO-coated glass by plasma-enhanced chemical vapor deposition (PECVD) using an RF power of 13.56 MHz in a multi-chamber cluster tool (MVSystems, Inc., Arvada, CO, USA) at a substrate temperature of approximately 200 °C. The p-layer (~8 nm), with E_Tauc of 2.1 eV and dark conductivity of 5 × 10⁻⁴ S/cm, was grown using SiH₄, BF₃, and H₂ in the PECVD chamber. The i-layer (~500 nm), with E_Tauc of 1.78 eV and dark conductivity of 2 × 10⁻¹⁰ S/cm, was grown using SiH₄ without hydrogen dilution in the PECVD chamber. The n-layer (~20 nm), with E_Tauc of 1.75 eV and dark conductivity of 2 × 10⁻² S/cm, was grown using a SiH₄ and PH₃/H₂ mixture in the PECVD chamber. Finally, various Ag dots with masked areas ranging from 0.2 to 0.8 cm² were deposited using an e-beam evaporator with a shadow mask to define the Ag back electrode layer of separate devices, as shown in the image in Figure 1.
Figure 1. (a) Image of the a-Si solar cell devices on the substrate with graded nanostructure density; the representative devices are listed in Table 1 (note that the small devices are locally uniform, with some slight variation in devices 28 to 32 on the right edge of the substrate, where the gradient appears to be the highest). (b) a-Si p-i-n device structure deposited by NREL (see the text for details).

The solar cells were tested using a home-built illuminated current-voltage (I-V) curve tracer probe station coupled to a fiber-coupled Xe lamp light source. Note that, although the spectrum of the light source was not filtered to match the AM 1.5 solar spectrum, because all the solar cells were tested under the same conditions, we can perform a comparative study of the solar cells using the illuminated I-V data obtained with this setup.

Figure 2. Illuminated I-V curves for devices with different areal densities of larger nanostructures. Note the similarity in device performance between devices with the same areal density of high aspect ratio nanostructures, i.e., the same distance from the left edge of the substrate shown in Figure 1 (e.g., device 2 and device 3). The illuminated I-V curve for a reference device deposited on a flat substrate is shown for comparison.

Results
We obtained the solar cell device performance parameters for a variety of devices of the same size (as defined by the Ag back electrode of each device).

Devices with Different Areal Densities of Nanostructures
The illuminated current-voltage curves for different areal densities of larger nanostructures are shown in Figure 2. Note that devices at the same distance from the left edge of the substrate have comparable performance. The corresponding illuminated I-V curve parameters for representative devices of different areal densities of larger nanostructures are listed in Table 1. In Figure 3, the increased areal density of larger nanostructures for these devices is indicated by a corresponding scanning electron microscope (SEM) top-view image of each of the representative devices listed in Table 1. In each SEM image, the larger dome-like features correspond to larger structures, allowing for estimation of the areal density based on the number of larger features for each device over an area of 850 µm² (see Table 1).

Cross-Sectional Image
A focused ion beam (FIB) cross-sectional SEM image of a solar cell similar to that of device 15 of Figure 3 is shown in Figure 4.
The FIB cross-sectional SEM image reveals that the nanostructured substrate is predominantly composed of small spheres with regions of larger, higher aspect ratio nanostructures. Note that these larger nanostructures have a height comparable to the thickness of the a-Si solar cell absorber layer and that there is significant deformation of the a-Si layer around such larger nanostructures.

Figure 4. FIB cross-sectional SEM image of a solar cell similar to device 15 of Figure 3; note that the vertical scale bar is corrected for the viewing angle, i.e., 476 nm (cs) indicates that the actual height of the nanostructure after correcting for the viewing angle is 476 nm. Each high aspect ratio nanostructure (or group of nanostructures) corresponds to a large dome-like feature in the top-view SEM images of the devices (e.g., device 30 has a high number of such high aspect ratio features), and the regions with the smaller nanostructures of lower aspect ratio correspond to the flatter regions in the top-view SEM images of the devices (e.g., device 3 predominantly has such low aspect ratio features). The scale bar between the two top-view SEM images on the right applies to both images.

Discussion
In traditional thin-film solar cells, the degree of surface roughness (i.e., feature height) is small compared to the absorber layer thickness. As a result, the deposition of the absorber layer onto such a surface is approximately the same as the deposition onto a flat substrate. However, when the aspect ratio of the nanostructures on the nanostructured substrate is high (i.e., >1) and the absorber layer thickness is small compared to the nanostructure height (i.e., the "folded junction" configuration of a thin-film solar cell on a nanostructured substrate), the material quality of the absorber layer is a critical issue in the device performance. In the devices considered above with small nanostructures, the aspect ratio of the smaller nanostructures is low (approximately 1) and the height of the smaller nanostructures is small compared to the absorber layer thickness. As a result, the material quality (as indicated by the open circuit voltage of the solar cells) is high for the region of low surface roughness (i.e., primarily smaller nanostructures). However, in the regions with increasing numbers of higher aspect ratio structures for a given area and with height comparable to the absorber layer thickness, the open circuit voltage becomes progressively lower with the increase in areal density of higher aspect ratio structures, i.e., the material quality of the absorber layer becomes progressively worse. Furthermore, the consistent fill factor of all devices suggests that no pinholes exist in the devices, i.e., no shorting occurred. As a result, increased carrier recombination caused by reduced material quality is the primary issue, with carrier recombination increasing with increased areal density of nanostructures, thereby offsetting any enhancements in carrier generation via improved photon management.
This reduced material quality is primarily due to the poor conformal coverage of the a-Si layer around the larger nanostructures [8]; such effects are more acute for microcrystalline Si solar cells (see, e.g., [9]). Note that our nanostructures have an added complication of being nonuniform in distribution and orientation.
Although such nonuniformities can contribute to the absorber material degradation via inhomogeneity in the absorber layer deposition, the primary cause of the material degradation is the lack of conformal deposition of the absorber material, which is further exacerbated with increased aspect ratio of the nanostructures that are substantially oriented perpendicular to the substrate. A convenient figure of merit to determine whether material quality issues come into play is the relative thickness (absorber layer thickness/nanostructure height) t_relative divided by the aspect ratio AR of the relevant nanostructures on the nanostructured substrate, i.e., t_relative/AR. If the figure of merit is less than 1 (t_relative/AR < 1), then one must consider the effects of deposition onto the nanostructured substrate on the material quality of the absorber layer. The primary effects on the material quality are structural defects caused by poor conformal coverage of the absorber layer. Such defects can be avoided by appropriately tailoring the nanostructures to have a surface that promotes conformal deposition, i.e., no sharp features or sharp corners. Alternatively, a structural buffer layer can be added on top of the nanostructures to smooth/cover any sharp features [2]. Here, as shown in Figure 4, the a-Si absorber layer thickness is approximately 500 nm, the nanostructure height is approximately 500 nm, and the nanostructure aspect ratio is approximately (500 nm)/(250 nm) = 2; as a result, t_relative is approximately 1, and the figure of merit is t_relative/AR ≈ 1/2 = 0.5. Thus, in the folded junction solar cells considered in this study, t_relative/AR < 1 for the high aspect ratio nanostructures. Note that this discussion does not consider the extreme case of dye-sensitized solar cells, where the ultrathin absorber layer is composed of molecules deposited in a layer of molecular thickness and thus can conformally cover the nanostructured electrode surface [10]. Here, we consider the case of enhancing the performance of traditional thin-film solar cells using a nanostructured substrate. In addition, note that it is possible to produce high-efficiency devices on nanostructured substrates with t_relative/AR < 1; however, such devices require careful design of the nanostructures to enable conformal coverage of the absorber layer (see, e.g., [5]). Alternatively, one can smooth some of the structural defects by depositing a structural buffer layer of appropriate thickness; however, this approach sacrifices some of the enhanced absorption via the reduced aspect ratio of the nanostructures [2].

Conclusions
In the development of thin-film solar cells using nanostructured substrates, one factor that must be considered is how the material quality of the solar cell absorber material is affected by deposition onto a surface with nanostructured morphology via an array of high aspect ratio nanostructures. We found that a-Si solar cells deposited onto nanostructured substrates with increasing areal density of high aspect ratio nanostructures exhibit a decreasing open circuit voltage caused by the reduction in material quality resulting from poor conformal coverage. We proposed a figure of merit that can be used to indicate whether the material quality may be an issue in a thin-film solar cell using a nanostructured substrate.
The use of high aspect ratio nanostructures requires careful consideration of the material quality to fully benefit from the enhancements in photon management provided by such nanostructures.
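To make the proposed figure of merit concrete, the short sketch below evaluates t_relative/AR for the geometry reported in this study; the threshold of 1 is the criterion proposed above, and the function name is ours for illustration.

```python
# Figure of merit t_relative / AR for a thin-film solar cell on a
# nanostructured substrate, evaluated for the geometry of this work.
def material_quality_fom(t_absorber_nm, h_nano_nm, w_nano_nm):
    t_relative = t_absorber_nm / h_nano_nm   # relative thickness
    aspect_ratio = h_nano_nm / w_nano_nm     # nanostructure aspect ratio
    return t_relative / aspect_ratio

fom = material_quality_fom(t_absorber_nm=500, h_nano_nm=500, w_nano_nm=250)
print(f"t_relative/AR = {fom:.2f}")   # 0.50 < 1: conformal-coverage
                                      # (material quality) issues expected
```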
A Coordinated Wheeled Gas Pipeline Robot Chain System Based on Visible Light Relay Communication and Illuminance Assessment

Gas pipelines require regular inspection, since leakage compromises the stable supply of gas. Compared with current detection methods such as destructive inspection, using pipeline robots has advantages including low cost and high efficiency. However, pipeline robots have a limited inspection range in complex pipes owing to restrictions imposed by cable friction or wireless signal attenuation. In our former study, to extend the inspection range, we proposed a robot chain system based on wireless relay communication (WRC). However, some drawbacks remain, such as the imprecision of evaluation based on received signal strength indication (RSSI), a large data error ratio, and loss of signals. In this article, we therefore propose a new approach based on visible light relay communication (VLRC) and illuminance assessment. This method enables robots to communicate by a 'light signal relay', which offers good communication quality, low attenuation, and high precision in the pipe. To ensure the stability of the VLRC, an illuminance-based evaluation method is adopted owing to its higher stability compared with the wireless-based approach. As a preliminary evaluation, several tests of the signal waveform, communication quality, and coordinated movement were conducted. The results indicate that the proposed system can extend the inspection range with a lower data error ratio and more stable communication.

Introduction
Japan has, in total, a 200,000 km gas pipeline distribution network. However, it is a country with frequent severe natural disasters such as floods and earthquakes. Gas pipeline networks need to be inspected and maintained regularly, since leakage caused by damage such as corrosion, deformation, and cracking endangers the safe supply of gas. In recent years, various pipeline robots for pipe inspection and maintenance have been developed, as they are more effective and cost less compared with traditional destructive pipe inspection methods. Destructive inspection is inefficient because it requires the pipeline to be removed from the ground. Besides, pipeline robots are more precise and time-saving than conventional external inspection technologies based on non-destructive sensor networks, such as the fiber optic distributed temperature sensing (DTS) system [1]. The application of pipeline robots for inspection and maintenance has been regarded as one of the most suitable and effective solutions available. The inspection system usually consists of a chain of relay robots. To ensure the stability of this relay communication between adjacent robots, we previously adopted an RSSI-based evaluation method for the coordinated movement of the robot chain system (RCS) [10]. Moreover, a wireless application layer communication protocol (WALCP) was used to enhance the stability of the wireless relay communication. This system can enlarge the inspection range and provides some advantages compared with WSN-based inspection systems, such as higher applicability to more complex environments regardless of the pipeline layout and its depth. However, the experimental results revealed some defects as follows. (1) Wireless signals suffer severe attenuation when transmitted, especially in small-diameter metal pipes. (2) The communication quality is also poor, since the data error ratio (DER) and data loss ratio (DLR) increase significantly when the transmission rate rises.
(3) Since the wireless signal is not stable in the pipe, RSSI-based distance estimation and communication become inaccurate owing to fluctuating and imprecise RSSI data. These features can increase the risk of relay communication failure [11]. In response to these problems, as a preliminary study, we propose a new RCS based on VLRC. Recently, VLC technology has received attention, since it can increase the available bandwidth for wireless communication by using visible light (band: 380 nm~780 nm) to transmit information. Since the concept of VLC was proposed in 2001, much related research has promoted the development of this technology. In 2008, Little et al. developed a point-to-point bidirectional data transmission system using white LEDs as a light source. This system adopts on-off keying (OOK) modulation and can reach a maximum transmission rate of 56 kbps [12]. In 2009, Zeng et al. proposed an optical multiple-input multiple-output (MIMO) technology based on a white LED array and a detector array, and analyzed the data transmission capabilities of this MIMO system; the maximum transmission rate could reach about 1 Gbps [13]. In 2013, Azhar et al. proposed a MIMO system which adopts a white light source and orthogonal frequency division multiplexing (OFDM) technology to realize indoor communication at a 1 Gbit/s transmission rate [14]. In our research, visible light can also be used as a communication medium owing to its high-speed encoding and transmission features, while simultaneously providing illumination for pipe inspection. Compared with a wireless signal, VLC offers high communication quality, especially a low transmission error ratio. Due to the short wavelength and diffuse reflection in the pipe, light suffers less attenuation, especially in small-diameter pipes. An illuminance assessment method for distance estimation and stable communication can work more precisely and effectively than the RSSI-based method. In summary, this system can not only extend the inspection range but also overcome the uncertainty and instability of wireless signal attenuation, finally realizing precise and stable communication in the pipe.

Analysis of VLRC Transmission Channel
In this study, we first analyze the concept of the VLRC channel. The VLRC system can be described as a multi-path transmission system composed of several baseband linear and time-invariant subsystems. Each subsystem has an instantaneous input power of the emitted light P_i(t) and an output current I(t) after photoelectric conversion. The relationship between these parameters can be summarized by the following equation [15][16][17]:

I(t) = η P_i(t) ⊗ h(t) + N(t)

where η represents the photosensitivity of the photo-detector, h(t) indicates the impulse response of the channel, ⊗ denotes convolution, and N(t) denotes additive white Gaussian noise [18]. The (n − 1)th relay node retransmits with average optical power P_t(n − 1), and the nth average received optical power P_r(n) is then determined by

P_r(n) = H(0) P_t(n − 1)

where H(0) = ∫_{−∞}^{+∞} h(t) dt is the direct-current channel gain and n represents the index of the VLC relay node. Here, we consider that the transmission channel of visible light in the pipe includes two links: unobstructed line-of-sight (LOS) links and non-line-of-sight (NLOS) links [19][20][21]. LOS links rely on a direct transmission path between the transmitter and receiver for communication, whereas NLOS links usually rely on diffuse reflection from the pipe wall.
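To illustrate the relay relation P_r(n) = H(0)·P_t(n − 1), the sketch below compares the compounding loss of a direct multi-hop path with the per-hop budget of a decode-and-forward relay chain; the gain, transmit power, and receiver sensitivity are assumed values (in practice H(0) combines the LOS and NLOS contributions described next).

```python
# Sketch of the optical power budget along a VLC relay chain.
# H0, P_t and the receiver sensitivity are assumed values.
H0 = 0.05           # assumed per-hop DC channel gain (LOS + NLOS)
P_t = 10.0          # assumed average transmitted optical power in W
sens = 1e-6         # assumed receiver sensitivity in W

# Without relays the channel gain compounds over n hops ...
for n in range(1, 6):
    print(f"n = {n}: direct P_r = {P_t * H0**n:.2e} W")

# ... whereas a decode-and-forward relay chain restores full transmit
# power at every node, so each hop only needs H0 * P_t > sens:
print(f"per-hop relay budget: {H0 * P_t:.2e} W > {sens:.0e} W")
```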
During the transmission in the pipe, both LOS and NLOS should be considered. The VLRC channel structure is shown in Figure 1. The received power is generally described by the channel direct current gain of LOS and reflected path of NLOS as the following equation: Hardware Development of System In this section, a fundamental wheeled RCS is explained. A relay communication mechanism based on VLC is adopted in such a system. The RCS mainly consists of three communication nodes: a controller terminal, a follower robot which acts as a 'signal relay node', and a leader robot which mainly undertakes pipeline inspection tasks. The detailed communication system will be introduced in Section 5. System Architecture The RCS can be conceptualized as the following major functional blocks with the corresponding hardware architecture shown in Figure 2. Figures 3 and 4 mainly describe the main mechanical and electrical mechanism in the leader and follower robots. This system consists of: • Control terminal: This functional block is a micro-controller based on STM32F407VGT. It deals with the following tasks: (1) collecting analog control signals or commands and converting these signals to digital signals for robots, (2) modulating digital signals with pulse width modulation (PWM) method, (3) signal amplifying, filtering, (4) driving the light emitting diode (LED) spotlight. • Receive terminal: This functional block is responsible for the reception and processing of sensor data. It directly handles visible light with the silicon PIN photodiode (SI-PIN PD). Then, it converts the signals into electrical signals by using a PIN PD. After signal amplifying, filtering, and demodulating, the sensor data from the leader robot are restored. • Leader robot: This robot plays the most important role in the whole system. It is capable of completing gas pipe inspection tasks with gas detection sensors. The robot contains three parts: (1) The gas detection and transmission module: The robot can collect the information inside the pipe with barometric gas pressure/temperature sensor BMP280 and gas concentration sensor CCS811. The signals are processed and transmitted through the 'relay node' in the form of visible light. (2) The visible light intensity indication assessment module: The robot can realize the coordinated movement and reliable VLRC with the support of this module. The detail functions will be explained in Section 4.3. (3) The visible light receiver, motor drive, and robot controller modules: The robot can be controlled by the relay visible light signal from the control terminal. • Follower robot: As a relay node, it can receive different kinds of data and retransmit to the corresponding object (leader or receiving terminal). It adopts a special VLC application protocol to prevent information conflict and mutual interference. This mechanism will be described in Section 5.2. The follower robot can achieve self-navigation in the pipe with three ultrasonic sensors. VLC Transmitter Design To establish the duplex VLRC, each robot and control terminal have VLC transmitters. In the control terminal, the functions of the transmitter realizes are signal analog/digital (A/D) conversion, signal modulation, signal amplifying, filtering, and wave transmission. The robot does not have A/D converters (ADCs), but the other functions are almost the same as the control terminal. In these transmitters, A/D conversion and signal modulation are completed in ARM-based STM32F407ZGT6 micro-controller. 
This module has several 12 bit ADCs, which are successive approximation ADCs [22][23][24]. In this research, two-channel analog signals of joystick controller are captured and converted into 12-bit binary digital data. These digital signals can be modulated through the PWM modulation method. The modulation process is explained in Section 5 in detail. Figure 5 shows the schematic design of the signal amplifier and filter. The signals from the micro control unit (MCU) are firstly amplified by the OPA861 [25,26]. This chip provides a high dynamic range amplifier for wide-band transimpedance amplifier applications. It also owns high bandwidth of approximately 80 MHz. A second-order passive band-pass filter based on OPA657 is applied to filter out the lowand high-frequency components which can affect the normal signal transmission [27]. As the most indispensable part of the VLC system, wave transmission module loads the modulated electrical signal onto the LEDs and converts electrical signals into optical signals. Since the current output of the main control ARM chip is too small, the modulated signal has to be amplified to display the change in LED optical power [28,29]. As illustrated in the LED driver of Figure 6, we selected two NPN S8050 transistors as the Darlington amplifier. The maximum collector current of S8050 can reach 1500 mA. The current gain is set to about 110. It can almost satisfy the distortion-free transmission of modulated signals and power supply requirements of the LED light. The characteristics of LEDs have a profound impact on the communication quality and distance of the entire VLC system. Generally, white LEDs own a great advantage in light sources. We thus select XLamp XHP 70.2 LEDs provided by CREE. Co. as a light source for this RCS. Such LED owns 4292 l m output and maximum drive current can reach (Standard) 4.8 A (6 V) and 2.4 A (12 V) [30]. VLC Receiver and Illuminance Assessment Module The main function of the VLC receiver is to convert the optical signal to the electrical signal. In the receiver, the PD detects the optical signal, and at the same time, converts it into a current signal. After the amplifier processing and demodulation, the control signal can be restored. As the core part of the receiver, PD can convert optical wave energy into a current signal. Considering visible light wavelength range from 380-780 nm, in this research, we select SI-PIN PD S12915-33R which is provided by Hamamatsu Co. The maximum photoelectric conversion efficiency of such a product can reach almost 0.64 A/W. The photosensitive area is about 5.76 mm 2 , the spectral response range is from 340 nm to 1100 nm, and the response frequency can reach maximum of 60 MHz [31]. Similar to wireless transmission, the optical signal attenuation exists in the open air and also there are many interferences and noise issues during the channel transmission, so in the receiver part, amplifier and filter circuit for further pre-processing of the optical signal are important. We thus choose OPA2846 and OPA657 as an amplifier and high-frequency filter processor, respectively. OPA2846 is a low-noise, wide-band, typical dual, and voltage-operational amplifier. When setting up OPA2846 as the broadband PD amplifier, the important elements in the design is the diode capacitance (C1) with the reverse bias voltage (−5 V) and the desired transimpedance gain (R1). The schematic design is shown in Figure 7. It illustrates a design using a 10 pF source capacitance diode and a 10 kΩ transimpedance gain. 
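To make the 10 pF / 10 kΩ design point above a little more concrete, the short sketch below evaluates the standard design relations for a compensated transimpedance stage. The gain-bandwidth product used here is only an assumed placeholder and should be replaced by the figure in the OPA2846 datasheet.

```python
import math

# Standard design relations for a compensated transimpedance amplifier, applied
# to the values quoted above (R_F = 10 kOhm, C_D = 10 pF). The gain-bandwidth
# product GBP is an assumed placeholder, not a figure taken from this article.

def tia_bandwidth_hz(gbp_hz, r_f_ohm, c_d_f):
    """Approximate -3 dB bandwidth of the transimpedance stage."""
    return math.sqrt(gbp_hz / (2 * math.pi * r_f_ohm * c_d_f))

def feedback_cap_f(gbp_hz, r_f_ohm, c_d_f):
    """Feedback capacitance giving a maximally flat (Butterworth) response."""
    return math.sqrt(c_d_f / (math.pi * gbp_hz * r_f_ohm))

GBP = 1.7e9                                   # assumed gain-bandwidth product, Hz
print(tia_bandwidth_hz(GBP, 10e3, 10e-12))    # a few tens of MHz
print(feedback_cap_f(GBP, 10e3, 10e-12))      # a fraction of a picofarad
```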
It is necessary to adopt the low pass filter in the receiver due to ambient light such as sunlight in the open air. The frequency of light is much higher than the carrier signal we adopt for light communication, so these high-frequency components have to be filtered out in the process of filtering. As the filter circuit described in Figure 7, the OPA657 chip is used to build a second-order low pass filter circuit. The digital illuminance sensor BH1750FVI is installed around the photodiode module. BH1750FVI is a digital ambient light sensor for I2C bus interface [32]. This module can capture from weak ambient light (1 lx) to strong light (65528 lx). It fits to be used as the illuminance assessment for light relay communication. In the later experiments, we determine the precise illuminance parameter for the RCS. Using BH1750FVI, we have also divided the VLC into three regions: good communication (1165 lx < illuminance < 1495 lx), poor communication (974 lx < illuminance < 1165 lx), and lost communication (illuminance < 974 lx). In the first region, each robot is in good communication with each other. In the second region, each robot's function is abnormal because of poor VLC quality. In the last region, each robot may lose communication with each other. In the coordinated movement test, the robots function based on this illuminance assessment. Software Development of System The software developed in this study contains modulation/demodulation between two robots and relay light communication of the whole system. Modulation and Demodulation For the VLC system, a good selection of modulation and demodulation approach can obviously reduce multipath interference, and finally improve communication efficiency. The most important parts of this research are the technology of optical signal intensity modulation and demodulation. In the design of VLC system, the coding technology of communication is generally based on On Off Key (OOK), Pulse Position Modulation (PPM), Binary Phase Shift Keying (BPSK), and Orthogonal Frequency Division Multiplexing (OFDM). Based on the analysis of these types of modulation approaches, OOK is the simplest modulation method, however, the transmission rate is very low with huge signal transmission delay [33]. BPSK method can effectively cut down inter-symbol interference in wireless channels, whereas the transmission information ability and bandwidth utilization rate are very low [34]. As some of the important modulation methods used in VLC systems, PPM and OFDM methods have a high transmission rate. However, they have a low anti-interference ability and high DER to external light [35,36]. This means the VLC system can be applied to the environment without ambient light. In this article, we adopt a PWM method for the VLC system because it is easier to realize by hardware and has a relatively low DER at high transmission rate. This method can transform binary information code into pulse width waves through micro-controller, and then the LED drive module transmits a pulse width optical signal to the transmission channel. The advantages of this modulation approach lie in its strong anti-noise capability and high transmission rate. By adjusting the pulse width, information can be loaded into the output waveform. These pulses have the same amplitude and different widths. The digital signals '1' and '0' are defined based on the pulse width in one period. The carrier frequency used is set to 2~2.68 KHz. 
When transmitting a signal '1', it takes 500 µs, including 200 µs low level and 300 µs high level. When sending '0', it also requires 350 µs, including 200 µs low level and 150 µs high level. In the receiver, the signal can be demodulated by determining the pulse width of the received pulse signal. For example, in our VLC control system, we choose the Grove-thumb Joystick controllers. There are two 0-10 kΩ potentiometers for X-axis and Y-axis respectively which controls 2D movement of the robot by analog signals. When the module is in working mode, the output will be two kinds of analog values, representing X-and Y-axis directions. The micro-controller can detect these two analog values and convert into 12 bits digital signal through two successive approximation ADCs [37]. At the receiver, the TIM1 input pin of MCU is directly connected to the PD module. As an advanced control timer, TIM1 can be easily set to the input capture mode. The special capture/compare register (TIM1_CCRx) is used to detect and measure the square wave pulse width. Then, we demodulate the detected signal according to the transmission protocol used by the signal modulation module. When entering the ID detection interruption program, the system receives and detects the pulse signal [38]. After passing the ID checking, the ID detection flag is set to '1', and the ID detection interruption finishes and enters into the program of data detection interrupt. Here the digital signal '1' or '0' is determined according to the pulse width calculation in one period. After completing receiving 12 bits data of control command and detecting the end mark, they will convert 12-bit binary data to a decimal control command for robot control. VLRC Mechanism In the relay transmission, the modulation/demodulation mechanism is the same as direct transmission between the controller terminal and one robot. Here we adopt a VLRC mechanism for duplex communication in the RCS. The phenomenon of scattering and diffuse reflection exists when the light is transmitted into the pipe [39][40][41]. This phenomenon has an impact on light-based communication quality. For the above reason, a mechanism based on the message identifier is presented in this study. This mechanism is capable of preventing information conflict and mutual-interference from light scattering and diffuse reflection, and thus can improve the security and accuracy of information transmission [42]. Message Identifier Structure and Frame Format The VLRC mechanism is adopted in the design of the information frame. As mentioned in the modulation and demodulation section, each information frame contains: the message identifier, data of information, and end mark. The node ID of frame identifier is set as follows: one 100 µs, 120 µs, and 130 µs pulse width square wave which represent nodes: controller, follower, and leader robot, respectively. There are two types of signals in this communication system: control command and sensor data, so after setting of 'node ID', the 'signal type ID' is set as follows: two 100 µs and two 125 µs pulse width square waves indicate sensor data and control command, respectively. Then before data transmission, there are two 100 µs square waves required for starting transmitting the data frame. The transmission of 12 bits digital data requires 7.2 ms in total [43]. After the transmission of 12 bits data, there is one square pulse with a width of 200 µs as the end mark. 
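Before turning to how this identifier mechanism filters messages, a small sketch of the framing may help. The high-level widths for '1' (300 µs) and '0' (150 µs), the node and type IDs, the two 100 µs start pulses and the 200 µs end mark are taken from the text; the exact framing order and the decoding tolerance are assumptions of this sketch.

```python
# Illustrative encoder/decoder for the pulse-width frame described above.

NODE_ID_US = {"controller": 100, "follower": 120, "leader": 130}
TYPE_ID_US = {"sensor": [100, 100], "command": [125, 125]}
START_US, END_US = [100, 100], [200]
BIT_HIGH_US = {1: 300, 0: 150}          # the low level is always 200 us

def encode_frame(node, msg_type, data_12bit):
    """Return the sequence of high-level pulse widths (us) making up one frame."""
    assert 0 <= data_12bit < 4096
    widths = [NODE_ID_US[node]] + TYPE_ID_US[msg_type] + START_US
    for i in range(11, -1, -1):          # 12 data bits, MSB first
        widths.append(BIT_HIGH_US[(data_12bit >> i) & 1])
    return widths + END_US

def decode_payload(high_widths_us, tol_us=50):
    """Recover the 12-bit payload from the measured high-level pulse widths."""
    value = 0
    for w in high_widths_us:
        if abs(w - 300) <= tol_us:
            value = (value << 1) | 1
        elif abs(w - 150) <= tol_us:
            value <<= 1
        else:
            raise ValueError(f"unclassified pulse width: {w} us")
    return value

frame = encode_frame("leader", "sensor", 0xABC)
payload = frame[5:-1]                    # node ID (1) + type ID (2) + start (2) ... end (1)
assert decode_payload(payload) == 0xABC
```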
This mechanism enables the identification of useful message and filters out the useless message the receiver receives. Software Realization of Relay Process As in Figure 8, relay frame routing follows this principle: The leader robot is the terminal receiver, which only receives data frames from the control terminal over the relay of the follower robot. Moreover, the leader robot can identify the control command ID and further process these command signals. This leader robot can also send sensor data frame with the ID identifier to the follower robot. As the function of the follower robot is 'relay node', it can receive the sensor information with corresponding ID from the leader robot and bypass this message to the control terminal. Similarly, it can bypass the control command from the control terminal to the leader robot with the corresponding ID [44]. System Performance Evaluation To verify the performance of the proposed RCS, a series of experiments were carried out. (1) The waveform tests of VLRC between the transmitter, follower robot, and leader robot were conducted in the laboratory environment. (2) The two parameters of the RCS: DER and DLR were measured at different distances and different transmission rates. (3) When performing the in-pipe experiments, the effective communication region was determined with the leader robot in a 300 mm diameter straight pipe. (4) When testing the coordinated movement of the RCS, illuminance assessment, an assessment of the light illuminance power present in a received light signal, was recorded. We also tested the coordinated movement of the RCS in the pipe with steel cover to avoid the ambient light, and illuminance was recorded during this experiment [45]. Waveform Test of System In the laboratory environment, we first conducted VLRC waveform testing on control command signals. The interval between each adjacent robot was 1 m. To avoid the ambient light, we turned off the room light. As illustrated in Figure 9a, the entire data frame of the control command was recorded. This signal indicated the signals sent by the control terminal. The yellow waveform was composed of multiple rectangular wave signals. The purple waveform represented the signals received by the 'relay node' of the follower robot. The high-frequency noise was added to the transmission channel in the purple waveform. The effective voltage of signal changed from 3.3 V to 2.4 V [46]. Figure 9b demonstrated the signal waveforms transmitted between the follower and leader robot. The yellow waveform indicated the signal transmitted by the 'relay node'. In 'relay node', this signal was re-modulated and amplified from the light signal. The purple wave denoted the actual light signal that the leader robot received after the second channel attenuation in the open air. Although some signals had a certain attenuation and some a high-frequency noise, this relay communication could still satisfy the further experimental requirements and realize the effective light transmission. Communication Quality Test of System To further investigate the reliability of the RCS, a series of VLRC quality test were carried out in the lab. Two important parameters: DER and DLR were measured and analyzed [47]. Figure 10 depicts the experiment setting for this test. The distance between each robot was about 180 cm. The maximum total relay communication distance between the leader robot and the controller can reach about 360 cm (2 × 180 cm). 
The control terminal continuously sent a standard 12-bit data frame (0xFFF) to the 'relay node' of the follower robot at a fixed transmission rate. The leader robot received this data through the 'relay node', and the statistics of the data received at the leader robot were evaluated in this communication quality test. DER is the fraction of data frames received by the leader robot that differ from the standard frame sent by the control terminal. DLR is the percentage of data frames lost during light transmission or demodulation. Measuring these parameters in a dark environment at different data rates and distances helps us estimate the maximum transmission distance and a suitable transmission rate. First, in this system, we adopted the 2.5 kbps transmission rate and tested DER and DLR at different distances from 50 to 350 cm (maximum 360 cm) [48]. As described in Figure 11a,b, the DER and DLR increased as the distance extended. When the transmission distance of the system was within 150 cm, the DER and DLR stayed below approximately 0.5% and 5%, respectively. When the distance was more than 150 cm, both parameters rose significantly. The reason is that the longer the transmission distance, the more pronounced the scattering of light and the greater the transmission attenuation in the channel. We then tested the communication quality of the RCS at a 250 cm distance using four different transmission rates: 2.5, 5, 7.5, and 10 kbit/s [49]. As shown in Figure 12a,b, from 2.5 kbit/s to 10 kbit/s the DER increased only from 0.32% to 0.74%, while the DLR rose significantly from 7.81% to 22.31% with the PWM method. From these experiments, one conclusion can be drawn: the higher the transmission rate, the higher the chance of data loss and error. In Figure 12a,b, the OOK method was evaluated as a reference to show that the PWM method had a relatively low DER at the same transmission rate. The OOK method showed a larger data error than the PWM method when the transmission rate was set below 10 kbit/s. As a special Amplitude Shift Keying (ASK) method, OOK is based on carrier amplitude modulation. It was the most energy-saving method because it transmitted energy only when the digital data "1" was sent. However, it could only demodulate the signal at a high signal-to-noise ratio, so compared with PWM it placed higher demands on the demodulation hardware; the large DLR occurred because of demodulation failures. Although OOK was much easier to realize, the transmitter using the OOK method had low transmitting power, and at the same transmission distance its DLR and DER were much larger than with the PWM method. Because PWM offered advantages such as a quick response time and a lower DER at rates below 10 kbit/s, we used the PWM method for our system. Figure 11a also showed that the DER of WRC rose from 0.35% to 1.97%, which was higher than that of VLRC (0.21-0.71%). In Figure 11b, the DLR of WRC and VLRC were almost the same except at longer distances such as 350 cm. At this distance, the VLRC system had a larger DLR (31.14%) than WRC because of the strong light scattering in transmission. In this research, DER was more important for information transmission in the pipe inspection mission.
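For clarity, the following sketch shows how DER and DLR can be computed from such a test by counting frame-level errors against the known 0xFFF pattern. The frame log format and the per-frame (rather than per-bit) error definition are assumptions of this sketch.

```python
# Sketch of the DER/DLR bookkeeping: a known 12-bit pattern (0xFFF) is sent
# repeatedly and compared with what the leader robot actually demodulates.

SENT_PATTERN = 0xFFF

def der_dlr(n_sent, received_frames):
    """received_frames: list of 12-bit values that were successfully demodulated."""
    n_received = len(received_frames)
    n_errors = sum(1 for f in received_frames if f != SENT_PATTERN)
    der = n_errors / n_received if n_received else float("nan")  # data error ratio
    dlr = (n_sent - n_received) / n_sent                         # data loss ratio
    return der, dlr

# 1000 frames sent, 923 demodulated, 3 of them corrupted -> DER ~0.33 %, DLR 7.7 %
print(der_dlr(1000, [0xFFF] * 920 + [0xFFE] * 3))
```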
Experiment on Relationship between Illuminance Assessment and Communication Region Before analyzing the comprehensive performance on the RCS based on VLRC, the 'light-control' quality and precision of a single robot should be considered. In Figure 13, the leader robot was placed in the 3.5 m length pipe. The illuminance of control terminal spotlight was about 1500 lx. Through the tests, three zones (good, poor and communication cut-off) could be divided. From Figures 13 and 14, the robot was initially in a good communication region (DER< 0.45%) [50]. When the robot entered the pipe to a depth of approximately 164 cm, the illuminance reduced to 1165 lx. Meanwhile, it was in a poor communication state. Some decode error of control command and severe data frame loss was observed. The robot might function abnormally in this region of poor visible light quality. As illustrated in Figure 14, the distance of 164-188 cm was considered as the poor communication region. Above 188 cm, the robot lost communication with the illuminance of 974 lx [51]. Through these tests, the relationship between illuminance and effective communication region could be determined. Movement and Illuminance Assessment Test in the Transparent Straight Pipe The movement tests of the RCS were conducted in the 3.6 m length transparent pipe. Figure 15 illustrated the coordinated movement of the whole system in the pipe. In this test, the leader robot was designed to conduct a mission of pipe inspection with detection modules. It could obtain pipe information such as air pressure, temperature and gas concentration. This information was processed and sent by light carrier to the follower robot. The leader robot was manually controlled through the control terminal. It first entered the pipe. When it arrived at the region where the communication was unstable, it stopped and waited for the follower robot to recover the light signal. Then, the leader robot could communicate again with the control terminal with the support from 'relay node'. In this test, an important parameter about the illuminance of both 'leader robot' and 'follower robot' was also recorded and illustrated in Figure 16. The illuminance assessment could reflect the strength of the visible light signal received by each robot of the system. Entirely based on the preliminary measurement on a single robot, we could get the conclusion that each robot could keep a good 'link' through VLC when only the illuminance was above the 1165 lx. A black dotted line was added to the figure to indicate a stable and unstable communication boundary line with illuminance = 1165 lx. Moreover, a red dotted line was added to show the threshold of losing communication with illuminance = 974 lx. The whole experimental mission took nearly 46 s. The leader robot first entered the pipe at 2 s and moved forward to the target detection region. At around 14-16 s, the leader robot reached the region of 'poor communication', and illuminance decreased significantly from about 1495 lx to 1165 lx. According to the setting of the program, the robot stopped at this moment. Then, we placed the follower robot in the pipe, and it could move to the leader robot based on the self-navigation mode. Before 18 s, the follower one was at the pipe entrance with a good 'link' with the control terminal. The follower one could keep stable communication with the controller until 27 s. After this time, it was at the boundary of poor communication mode, which might have resulted in control error or sensor data error. 
However, during the period from 16 to 27 s, the leader robot recovered the communication with the help of 'signal relay' from the follower one. At the end of this test, the system was at the boundary of good communication conditions with stable illuminance of approximately 1165 lx. Movement and Illuminance Assessment Test in the Straight Pipe with Steel Cover To further verify the comprehensive performance of such a system, the movement tests in the transparent pipe are not enough. There are two reasons: (1) Since visible light can penetrate the pipe wall and create a big energy loss. (2) The ambient light can also affect the experimental results. Thus, we built a simulated environment with a steel cover to reduce these effects. Figures 17 and 18 illustrated the coordinated movement of RCS in the pipe with steel cover. The RCS was operated to complete the whole mission including the 'signal relay' and coordinated movement as experiments before. The illuminance was measured and recorded during the movement test. As is similar with tests in the transparent pipe, the communication between leader robot and control terminal became unstable communication region at about 14 s and soon recovered the signal with the 'signal relay' from follower robot at around 18 s. When the follower robot entered the pipe at 30 s, it could be near the boundary of unstable communication with the control terminal. However, the leader robot was in a good communication region because of the 'signal support' from the follower one. In Figure 18, the leader robot was in a poor communication state at 14 s, while at 11 s in Figure 16. Before losing signal in the poor communication state, the leader robot in the steel pipe travelled a longer distance than in the transparent pipe. By further analyzing the experimental results, we could find that due to the light diffuse reflection in the pipe, the light energy attenuation was smaller than in the transparent pipe. This resulted in the longer transmission range in the pipe. Finally, we took the leader robot as an example and made a spectrum analysis of the former RSSI-based and proposed the illuminance-based evaluation method. These two signals were converted into frequency domain signals by FFT transformation. Figure 19 was obtained based on the analysis of the former wireless-based RCS [10]. From Figures 19 and 20, we could see that the main frequency distribution range was from 234 to 1417 Hz, whereas the frequency range of illuminance from illuminance-based RCS was from 21 to 52 Hz. Since RSSI was highly affected by reflections by the pipe wall, the high-frequency noise exists. These noises would affect the evaluation method and generate imprecise results. In this research, the illuminance-based assessment method for the RCS was more stable and precise than the former RSSI-based approach since it owned less high-frequency noise. Conclusions and Future Work In this study, in order to overcome the challenge of limited inspection range brought by cable friction and wireless attenuation, a preliminary study using visible light communication was conducted, then a wheeled robot chain system based on the VLRC was developed. To verify the feasibility and performance of this system, we carried out four tests. Firstly, waveform experiment of relay signal transmission was carried out in the laboratory environment. The results revealed that such a relay communication system could realize the effective light transmission with low noise. 
Secondly, the system was tested on communication quality. Although VLRC possessed bigger DLR than wireless communication, the DER was smaller. DER was more important for information transmission in the pipe inspection mission. Thirdly, an experiment on the relationship between illuminance indication and communication region was conducted. Through this test, the effective communication region was determined. Finally, a set of movement tests were carried out in the transparent pipe and the pipe with steel cover. The results revealed that the coordinated movement of this system could be realized with VLRC and illuminance assessment. The illuminance assessment method was more precise than the RSSI-based evaluation method. Although the proposed system made breakthroughs, the defects of the system can also be found. These defects are highlighted below: • In order to extend the inspection range with more effective and high-quality communication method, some VLC characteristics in the pipe should be studied further. These studies may contain: signal to noise ratio analysis, research on diffuse path gain of light in different pipelines, experiment on the effect of inter-symbol interference, study on diffuse reflection and scattering features of light transmission in gas medium, test on the influence of visible light components of different frequencies and wavelengths on light communication. • The selection of PWM method in this research has not been appropriately justified. There is still a lack of numerical analysis to verify the transmission performance improvement based on such a method. Actually, the DER indicates that the PWM is a poor choice at low lux level. The redesign of the modulation/demodulation module and the relay module will be a better solution. From Figure 12a,b, the results demonstrate that if the transmission rate is beyond 10 kbps, big DER will occur. In addition, we have done the communication experiment at a transmission rate of 15 kbit/s for both methods. The results depict that the communication cannot be realized by both the PWM and OOK methods. Through the short distance test, we discovered that the demodulation system based on embedded STM32 we developed has limitation to capture and recognize the digital pulse accurately if the rate is beyond 10 kbit/s. Besides, a new MCU such as FPGA for modulation and demodulation should also be considered. Like the prototype, although this transmission rate is enough for robot controlling, it is not enough for video transmission for pipe inspection. The selection of other advanced modulation techniques should be considered and performance comparison at a high data rate and low lux level should also be implemented. • This system can increase the risk of the system's control and communication complexity. In order to make the system more robust and stable, a new control and communication algorithm should be further investigated. • Such a system still has difficulties to adapt to a more complex pipe environment such as the elbow, T-junction, etc. The solution can be described as: (1) optimizing the mechanical structure of the robot, (2) improving communication ability in these complicated regions by analyzing and improving the property of light diffusion in elbow or T-junction. Acknowledgments: This research is supported in part by Tokyo Gas Co., Ltd. and in part by the Research Institute for Science and Engineering, Waseda University. Conflicts of Interest: The authors declare no conflict of interest. 
Abbreviations The following abbreviations are used in this manuscript:
The Laser calibration of the Atlas Tile Calorimeter during the LHC run 1 This article describes the Laser calibration system of the Atlas hadronic Tile Calorimeter that has been used during the run 1 of the LHC. First, the stability of the system associated readout electronics is studied. It is found to be stable with variations smaller than 0.6 %. Then, the method developed to compute the calibration constants, to correct for the variations of the gain of the calorimeter photomultipliers, is described. These constants were determined with a statistical uncertainty of 0.3 % and a systematic uncertainty of 0.2 % for the central part of the calorimeter and 0.5 % for the end-caps. Finally, the detection and correction of timing mis-configuration of the Tile Calorimeter using the Laser system are also presented. The ATLAS Tile Calorimeter [1,2] (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN.The TileCal is a sampling calorimeter whose operation is based on the detection of scintillation light using photomultiplier tubes (PMTs).On average, around 30 % of the total energy of jets from quark and gluon fragmentation is deposited in the TileCal.It therefore plays an important role for the precise reconstruction of the kinematics of the physics event.The control of its stability, within 1 %, and resolution is important for a correct jet and missing transverse energy reconstruction in ATLAS 1 .In order to obtain a precise and stable measurement of the energy deposited in the calorimeter, it is mandatory to precisely monitor any variation of the gain 2 of the PMTs and, if needed, correct for these variations.Several complementary hardware calibration systems have therefore been included in the TileCal design (see figure 1), one of them being the Laser system described in this article. The calorimeter is briefly described in section 1, and the Laser system as it was operational during run 1 of the LHC is detailed in section 2. A major upgrade of this system was performed in 2014, but falls outside the scope of this article.The study of the stability of the Laser electronics is discussed in section 3 before the description of the calibration procedure in section 4. Timing misconfiguration detection and correction are detailed in section 5 while monitoring of pathological calorimeter channels is briefly described in section 6.Finally, conclusions are drawn in section 7. The ATLAS Tile Calorimeter The ATLAS detector is made of a central part, called the barrel, and two endcaps.In the barrel, the TileCal is the only hadronic calorimeter of ATLAS.In the endcaps, TileCal constitutes the external part of the hadronic calorimeter, the internal part using the liquid argon technology.The part of TileCal that is in the ATLAS barrel is called the long barrel (LB), while the parts that are in the endcaps are called the extended barrels (EB). The Tile Calorimeter is the result of a long process of R&D and construction.The Technical Design Report [3] has been completed in 1995 and the construction of the mechanical part ended in 2006.After a period of operation using cosmic muons, the TileCal was ready in 2009 to record the first LHC proton-proton collisions. 
Mechanical aspects TileCal is a non-compensating sampling calorimeter made of steel plates that act as absorber and provide mechanical structure into which the active scintillating tiles are inserted.Charged particles going through the tiles produce scintillation light that is collected by two wavelength-shifting (WLS) fibres, on each side of a tile.These fibres are then grouped in bundles to form cells, organised in three radial layers, as depicted in figure 2, thus achieving a granularity of about 0.1 in η3 for the layers A and BC (the closest to the collision point) and around 0.2 in the outermost layer.Azimuthally, the detector is segmented in 64 wedge-shaped modules (see figure 3), thus achieving a granularity of ∆φ = 0.1.The E cells are non-standard calorimeter cells made of single large scintillators.Their aim is to measure the energy of particles lost in the inactive material located in front of the TileCal.In total, there are 5182 cells. In addition, 16 large scintillator plates are located between the barrel and the endcaps, the Minimum Bias Trigger Scintillators (MBTS).They are mainly used in the ATLAS low luminosity trigger.Although they are not TileCal cells, their readout is performed by dedicated PMTs of the TileCal system, that were originally planned to be connected to 8 pairs of E3 and E4 cells. Detector readout Each fibre bundle, usually corresponding to one side of a cell, is read out by a photomultiplier tube: each standard cell is thus read out by two PMTs, the E cells being read out by a single PMT.Therefore, there are 9852 PMTs in total.The electric pulses generated by the PMTs are shaped [4] and digitised [5] at 40 MHz with two different gains, with a ratio of 64, in order to achieve a good precision in a wide energy range.These samples are then stored in a pipeline memory until the level-1 trigger decision is taken (ATLAS has a three-level trigger system, the first level giving a decision in 2.5 µs during which the data are kept in the front-end electronics).If the decision is positive, seven samples, in time with the signal and with appropriate gain giving the best precision on the pulse amplitude, are sent to the off-detector electronics (Read Out Drivers or RODs [6]).The amplitude of the signal is reconstructed as the weighted linear combination of the digitised signal samples, using an optimal filtering method [7,8]. In parallel, the output of each PMT is also integrated over approximately 14 ms using an analog integration system, during Cesium calibrations (see below and figure 1) and also to measure the PMT current induced by minimum bias proton-proton collisions. 
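As a minimal sketch of the optimal-filtering reconstruction mentioned earlier in this section, the snippet below forms the amplitude as a weighted linear combination of the seven digitised samples. The sample values and weights are placeholders; in TileCal the weights are derived from the measured pulse shape and noise properties.

```python
import numpy as np

# Sketch of the optimal-filtering amplitude: A = sum_i a_i * (S_i - pedestal),
# applied to the 7 samples kept per trigger. Weights and samples are placeholders.

def of_amplitude(samples_adc, weights, pedestal_adc=0.0):
    s = np.asarray(samples_adc, dtype=float) - pedestal_adc
    return float(np.dot(weights, s))

samples = [51, 60, 142, 388, 305, 130, 70]                 # illustrative ADC samples
weights = [-0.37, -0.32, 0.17, 0.81, 0.31, -0.17, -0.43]   # placeholder a_i coefficients
print(of_amplitude(samples, weights, pedestal_adc=50.0))
```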
Hardware calibration systems Three different and complementary hardware calibration systems have been integrated in the TileCal design: the Cesium system [9], the Laser system and the Charge Injection System (CIS) [4] (see figure 1). About once or twice a month, a radioactive 137 Cs source scans most of the tiles, measuring the response of the full chain (tiles, WLS fibres and PMTs) to a known amount of deposited energy. For this Cesium calibration, the analog integrator is used instead of the digital readout: it is therefore not sensitive to potential variations of the gain of the readout electronics used for physics. The Laser calibration is performed about twice a week and allows monitoring the stability of the PMT gain between two Cesium calibrations by illuminating all PMTs with a Laser pulse of a known intensity. For this calibration, the same readout electronics as for physics is used and is therefore also monitored. This calibration is described in detail in the next sections. During the CIS calibrations, performed about twice a week, a known electric charge is injected into the readout electronics chain, simulating a PMT output pulse. It is used to measure the conversion factor from ADC counts to pico-Coulomb (pC) and also to monitor the linearity of the ADCs. Energy reconstruction In order to measure the energy correctly, several effects must be taken into account, using the calibration systems described previously as well as results from tests using special beams of particles (see figure 4). Hence, for each PMT, the energy is reconstructed as E = A_opt × C_CIS × C_e × f_Cs × f_Las (1.1), where A_opt is the amplitude in ADC counts computed by the optimal filtering method, C_CIS is the ADC→pC conversion factor measured by the CIS, C_e is the pC→GeV conversion factor defining the energy scale as measured with electron beams in past beam tests [10], and f_Cs and f_Las are correction factors extracted from the Cesium and Laser calibrations. The Cesium calibration is able to correct the residual non-uniformities after the gain equalisation of all channels and thus to preserve the energy scale of the calorimeter that was determined during the beam tests. The Laser calibration allows keeping this energy scale stable between two Cesium scans and is therefore relative to the Laser calibration immediately consecutive to the last Cesium calibration. The cell energy is the sum of the energies from the two PMTs connected to this cell. The computation of the Laser calibration constants f_Las will be described in the next sections. [Figure 5 (partial caption): element (3) distributes the light to the photodiodes box (4) and the two PMTs (5); also shown are the filter wheel (6), the shutter (7) and the output liquid light guide (8).] The Laser calibration system The first prototype of the Laser system was built in 1993 and the first tests with a prototype module of the calorimeter were performed in October 1994 [11]. A second prototype was then built and tested in 1997 [12]. After several improvements, a first version of the system was used during the qualification of the TileCal modules using particle beams [10]. Finally, the system described in the following paragraphs was installed in its definitive location in 2008, in time for the calibration of TileCal before the first LHC collisions. After several years of running, the current Laser system was upgraded in October 2014 for the LHC run 2, starting in 2015. This new system includes a new Laser source, new optical components and new electronic boards, and is not described in this article.
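A hedged numerical illustration of the reconstruction chain of equation (1.1) described earlier is given below; the conversion factors are invented round numbers of plausible magnitude, not the calibrated TileCal constants.

```python
# Illustration of E = A_opt * C_CIS * C_e * f_Cs * f_Las for one PMT channel,
# and of the cell energy as the sum of its two PMTs. All values are made up.

def channel_energy_gev(a_opt_adc, c_cis_pc_per_adc, c_e_gev_per_pc, f_cs, f_las):
    return a_opt_adc * c_cis_pc_per_adc * c_e_gev_per_pc * f_cs * f_las

def cell_energy_gev(e_pmt_1, e_pmt_2):
    return e_pmt_1 + e_pmt_2

e1 = channel_energy_gev(820.0, 1 / 82.0, 1 / 1.05, f_cs=1.012, f_las=0.997)
e2 = channel_energy_gev(790.0, 1 / 82.0, 1 / 1.05, f_cs=1.008, f_las=1.003)
print(cell_energy_gev(e1, e2))
```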
The Laser system contains of course a Laser source but also other components.Because the actual intensity of the pulses has variations of approximately 5 %, the system includes dedicated photodiodes to precisely measure the intensity of each pulse.It also includes a tunable attenuation system to cover the whole TileCal PMTs dynamic range (100 MeV to 1.5 TeV per PMT). The Laser system consists of a Laser pump, the so-called Laser box, a light distribution system and several electronic components in a VME crate to control and monitor the system.The Laser box (see figure 5) contains the Laser head, two photomultipliers, the so-called photodiodes box, a semireflecting mirror, a filter wheel and a shutter.The photodiodes box contains four PIN photodiodes from Hamamatsu [13] with their amplifiers and shapers.These components are located in the USA15 cavern that hosts the back-end electronics, and that is close to the main ATLAS cavern. The Laser system can be operated in four different modes, divided in two categories: three internal calibration modes and one Laser mode. Internal calibration modes: the aim of these modes is to calibrate and monitor the Laser system electronics, without any Laser light emission.There are three different calibration modes: • two to measure the characteristics of the electronics: linearity (Linearity mode), noise and pedestal level (Pedestals mode); • one to calibrate the photodiodes, with an embedded radioactive source (Alpha mode). Laser mode: this mode represents the main operation mode, in order to calibrate the TileCal PMTs using Laser light. Choice of the light source In order to synchronise all the TileCal channels, it is mandatory to be able to send simultaneously a single light pulse to the 9852 PMTs, which requires a powerful light source such as a Laser. Pulse shape requirements (∼10 ns pulse width) has driven our choice toward a commercial Qswitched DPSS (Diode-Pumped Solid State) Laser manufactured by SPECTRA-PHYSICS.This is a frequency-doubled infrared Laser emitting a 532 nm green light beam, this wavelength being close to the one of the light coming out of the WLS fibres (480 nm).It delivers 10 to 15 ns pulses of a few µJ maximum energy, which is sufficient to saturate all TileCal channels, and thus to test their full dynamic range.Moreover, the pulse shape is sufficiently similar to the shape of physiscs signals, so that the optimal filtering method does not have to be adapted.The delay between triggering and actual light emission is of the order of 1.2 µs, which depends on the pulse energy in a window of about 60 ns; and a jitter of 25 ns due to the internal 40 MHz clock of the Laser source electronics. 
Laser light path The light emitted by the Laser head passes first through a semi-reflecting mirror (commercial metallic neutral density filter).About 10 % of the light is redirected towards a specific calibration system in order to measure precisely the amount of light that has been emitted, with a photodiode, and also to precisely measure the time when the pulse was emitted with two PMTs.The light that passes through the semi-reflecting mirror then goes through a rotating filter wheel, which is used to attenuate the outgoing light.This wheel contains seven metallic neutral density filters, each one applying a specific attenuation factor (from 3 to 1000).The filter wheel contains also an empty slot in order to reach the maximum intensity.Downstream of the filter wheel, a shutter allows controlling whether the light is sent to TileCal or not.When this shutter is open, the light enters a 1 m long light guide with a 5 mm diameter liquid core that acts as a first beam expander (see figure 6).The liquid light guide has been chosen for its large acceptance cone, high transmittance and very good resistance to powerfull light pulses.At the output of the liquid light guide, the light enters in the beam expander and light splitter [14] (see figure 7), where the Laser beam is first enlarged by means of a divergent lens followed by a convergent one in order to obtain a parallel light beam.In order to avoid any speckle effects, the beam then goes into a coherence-killing light-mixer, made of a parallelepipedic PMMA block.At the output of this block, the light is transmitted to a bundle of 400 clear optical fibres (Mitsubishi ESKA GH4001).In order to further improve the splitting uniformity, the amount of light transmitted to the 400 secondary fibres can be varied with adjustable distance optical connectors [15].From these 400 outputs, 384 of them are connected to 80 m long clear fibres of similar model, that bring the light from the USA15 cavern to the TileCal modules.Inside a module, each clear fibre is connected to 17 (EB) or 48 (LB) fibres of the same type but with lengths varying between 0.5 and 6 m; each of these fibres reaching one PMT.The light splitting is performed by the mean of an empty anodized aluminium tube: the incoming beam expands and reflects on the inner wall of the tube before reaching the bundle of output fibres.These tubes are 26 mm (respectively 40 mm) long with an internal diameter of 6 mm (10 mm) in the EB (LB).In total, each endcap module is fed by two 80 m fibres (16 PMTs per fibre) and each barrel module is fed by two fibres (45 PMTs per fibre).It should be noted that the light splitters located in the modules can not be tuned. Connected to the adjustable connectors, three additional fibres go back inside the Laser box in order to measure the intensity of the light pulses downstream of the filter wheel.Due to the large range of available attenuations (from 1 to 1000), the light sent to these three photodiodes has been tuned in order to always have a measurable signal in at least one of them. 
Radioactive source The four photodiodes are located in a special box in which temperature and humidity levels are monitored and controlled by Peltier elements and a dry air flow.This box also contains a 16 mm diameter radioactive source of 241 Am, releasing mostly α particles of 5.6 MeV with an activity of 3.7 kBq.This source provides an absolute calibration (independent from the Laser source) of each photodiode.In order to calibrate all the diodes, the source can be moved along them into the box. Electronics The VME 6U crate contains a VME processor, on which the software interface between the system and the ATLAS data acquisition system (DAQ) is running, and several custom-made electronic boards.The main boards that are needed for the data acquisition are described in the following paragraphs and can be seen on figure 8. The LASTQDC board is an eight channels charge Analog to Digital Converter, for the digitisation of the signal coming from the four photodiodes and the two photomultipliers.It has a resolution of 11 bits with a maximum range of 200 pC. The LILASII board provides two distinct functions.The first function is a charge injection system used in the Linearity mode, with a 16-bit Digital to Analog Converter (DAC) producing a maximum output of 10 V that is injected in a 1 pF capacitor.The second function is an interface between the TileCal calibration requests system (SHAFT 5 ) and SLAMA, containing mostly counters to know precisely how many requests were sent by SHAFT and by SLAMA. The SLAMA board is the main component of the system, producing signals to control other components like the LASTQDC, LILASII or the Laser pump.It contains three FPGAs (Altera ACEX EP1K300QC208): one is devoted to the VME communication, the second one to the communication with LASTROD via a dedicated bus, and the last one to the management of the Laser pump and the production of the various electronic signals that are described in the next sections.This board contains also two 15-bit Time to Digital Converters (TDC) with a resolution of 250 ps.The internal clock of SLAMA has a frequency of 80 MHz, either from an internal oscillator or provided by LASTROD (see hereafter). The LASTROD board is the dedicated Read Out Driver (ROD) of the Laser system, included in the ATLAS trigger and data acquisition system.It receives timing and trigger information from the standard ATLAS TTC systems, via an optical fibre connected to a TTCrq mezzanine board [16].This mezzanine includes a QPLL that provides 40 MHz and 80 MHz clocks synchronised to the LHC clock.LASTROD also sends the data to the ATLAS data acquisition computers (ROS), using another mezzanine board named HOLA [17], that contains a 2.5 Gbps optical link.The various functions of LASTROD are implemented in four FPGAs (Altera Cyclone I EP1C12Q240C6N): the first one manages the communication with the TTCrq mezzanine, the second one the communication with the HOLA board, the third one is the VME decoder and the last one is devoted to the communication with the SLAMA and LASTQDC boards via the dedicated bus. Two other custom VME boards are needed to control the movements of the filter wheel and the radioactive source, and to monitor the low and high voltages, temperatures and humidity levels. Operating in internal calibration modes Operating the Laser system in one of the three internal calibration modes requires only a sub-set of the Laser system hardware. 
Pedestals mode The aim of this mode is to record a high number of events without any energy in the photodiodes and the photomultipliers (neither from the Laser nor from the radioactive source).SLAMA generates a random gate for the LASTQDC.Alpha mode In this mode, the α radioactive source is moved in the photodiodes box, in front of each photodiode.The signals coming out of the four photodiodes are fed into SLAMA, where they generate an internal trigger if at least one of them is above a given threshold.This internal trigger generates the gate needed by the LASTQDC to digitise the delayed photodiodes signals: for each event, only one photodiode contains energy from the α radiation (see figure 9).In very rare cases, two α particles hit the photodiode within the same digitisation gate (as can be seen above ADC 1000 in figure 9) but the effect is negligible.The large dispersion of the α spectrum is due to the absence of collimation of the α particles, resulting in a wide range of incidence angles 6 for the particles hitting the photodiode. Linearity mode This mode allows measuring the linearity of the photodiodes electronics (amplifier, shaper and ADC): a tunable electric charge is injected in this electronics (downstream of the photodiodes), the response of the photodiodes electronics and the injected charge are recorded (see figure 10). Operating in Laser mode The Laser system is operated in the Laser mode for two data-taking situations: • dedicated calibration runs, when TileCal is operated independently of the rest of ATLAS and all events are Laser calibration events; • standard physics runs, when TileCal is synchronised with the other ATLAS sub-detectors.In this configuration, the Laser pulses are emitted only during a dedicated period of the LHC orbit, in which it is ensured there are no collisions (see figure 12.2 of reference [18]), in order to separate physics events from calibration events. The main difficulty in this mode is the timing.First, the Laser system must be synchronised with the other TileCal components, so that the events recorded in TileCal correspond to the events when the Laser pulse is emitted.Second, during standard physics runs, the Laser pulses must not be emitted during proton-proton collisions.In order to achieve these two goals, the Laser system is operated as a slave of the SHAFT board.The frequency at which the Laser pulses are emitted is about 1 kHz in the calibration runs and 1 Hz in the physics runs. Operating in Laser mode requires most of the components, in particular the interfaces with ATLAS.The various boards located in the Laser VME crate that are needed in this mode can be seen in figure 8, together with their interactions. The SHAFT board is programmed to send a request to the Laser system at a fixed time with respect to the beginning of the LHC orbit.Depending on the required frequency, this signal is only sent during selected orbits.When SLAMA receives this signal, it triggers a Laser pulse by the mean of a signal sent to the Laser pump together with a DAC level to set the required Laser intensity.However, a small delay occurs between the pump triggering and the emission of the Laser pulse, and this delay is dependent on the pulse intensity.In order to compensate for this variable delay, SLAMA sends the trigger signal to the Laser pump only after a programmable time, by steps of 12.5 ns, as explained in figure 11. 
The gates needed by the LASTQDC to digitise the photodiodes response are then produced by SLAMA from the response of one of the Laser box photomultipliers.Using its TDC and the signal from these photomultipliers, SLAMA also measures the time needed by the Laser pulse to be emitted.In order to get relevant time values, the internal 80 MHz SLAMA clock is the one that is produced by LASTROD, which is synchronised with the LHC clock. Once SLAMA has detected that the pulse is correctly emitted, it generates a signal back to SHAFT that then sends a Laser calibration request to the ATLAS central trigger processor.This request is sent to SHAFT when the pulse is emitted in order to synchronise the pulse emission and the readout of TileCal.Then, a level-1 trigger (L1A) signal associated to this Laser event is distributed to the whole ATLAS detector, together with the TileCal Laser calibration trigger type (TT). Finally, when a L1A is received by LASTROD, associated with the Laser TT, it sends to the ATLAS DAQ the amplitude of the Laser light, measured by the photodiodes, the time measured by the TDC, the average and RMS of the reponses measured during the last Pedestals and Alpha runs, as well as the state of the system (filter wheel position, shutter state, temperature and humidity in the Laser box, status of the power supplies).All this information is then available for offline analysis. Stability of the electronics Before any use of the Laser system to calibrate the TileCal, it is necessary to monitor the behaviour of its readout electronics and in particular the photodiodes.This is done in two steps: the first step is a measurement of the electronics characteristics when no energy is deposited in the photodiodes (no Laser light pulse nor α particle), the second step is a calibration of the photodiodes response using the radioactive source.These two steps are performed using the Pedestals and Alpha internal calibration modes, that are taken regularly. Characteristics of the electronics Each readout channel can be characterised by two numbers: its noise and its pedestal.The noise can be measured as the RMS of the distribution of the responses when no energy is recorded in the channel.The pedestal is the average response in the absence of deposited energy: its value is very important because it must be subtracted from the response of each channel, in order to obtain a response that is directly proportional to the intensity of the Laser pulses. Both the pedestal and noise can be measured in the Pedestals runs and in the Alpha runs.Indeed, in the Alpha runs, for each event, only one photodiode is hit by an α particle, thus the three other photodiodes contain no energy; in this case, the pedestal and noise levels can be measured.It is also very important for the photodiodes calibration to make sure that the pedestal and noise are the same in the Pedestals and Alpha runs, in particular that there is no cross-talk between the channels when some energy is deposited in a photodiode. 
The stability of the electronics has been studied during the LHC data taking periods of 2011, 2012 and 2013, from February 2011 to February 2013.Figure 12 shows the evolution of the pedestal measured in the Pedestals runs (P Ped ) and in the Alpha runs (P α ), together with the ratio R Ped α/Ped =P α /P Ped , for photodiode 1 (similar results are obtained for the three other photodiodes).The vertical dashed lines separate the two years in four periods, named A to D, that correspond to stable conditions separated by hardware interventions on the system.In particular, in March 31 st 2011 (between periods A and B), a new version of the photodiodes amplifiers was installed, allowing to switch off photodiodes collecting an amount of light sufficient to saturate the electronics and thus generating cross-talk with the other channels.The second important intervention, in July 6 th 2011 (between periods B and C), was a change in the photodiodes amplifiers grounding scheme in order to further reduce the cross-talk between channels.As can be seen in figure 12, the pedestals measured in Pedestals runs and in Alpha runs are similar (the ratio R Ped α/Ped is very close to 1 and is stable) during periods C and D, thus showing that the cross-talk has been reduced to a negligible value.This means that the pedestal measured in Pedestals runs can be safely subtracted from the measured response of the photodiodes in Alpha runs. Figure 13 shows the evolution of the noise measured in the Pedestals runs (RMS Ped ) and in the Alpha runs (RMS α ), together with the ratio R RMS α/Ped =RMS α /RMS Ped .The change of grounding scheme in July 2011 significantly improved the electronic noise.Moreover, the noise is similar in the Pedestals and in the Alpha runs: R RMS α/Ped ∼ 1 with variations well within ±5 %.The linearity of the readout electronics can be measured by injecting a known charge in the photodiodes amplifiers and varying this charge over the full dynamic range (see figure 10).Using dedicated Linearity runs, the difference between the electronics response and a straight line varies within ±1 %, with an RMS of 0.2 %. Finally, using standalone Laser runs during which the shutter was closed, it was also possible to measure the pedestal and noise of the channels connected to the three photodiodes that see the light after the light splitter, but in a condition similar to a standard Laser run (Laser pump on and light pulses seen by the photodiode 1).The pedestal variation with respect to standard Pedestals runs is less than 1.5 % and the noise differs7 by less than 5 %.This ensures that no significant cross-talk between photodiodes is observed during Laser runs. In conclusion, the readout electronics is found to be stable since April 2011.Evolutions of the pedestal, probably due to environmental effects like temperature and humidity, is not a problem since the value measured regularly in the Pedestals runs can be used for the response renormalisation in the Alpha and Laser runs.Moreover the electronic noise is perfectly stable and similar in different types of runs. For the rest of this article, the response of the photodiodes will always be the value after pedestal subtraction. 
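A minimal sketch of the bookkeeping used in this section is given below, assuming the pedestal and noise are simply the mean and RMS of the empty-channel responses and that the Pedestals-run and Alpha-run values are compared as ratios; the toy numbers are invented.

```python
import numpy as np

# Sketch of the stability monitoring: pedestal and noise of one photodiode
# channel with no deposited energy, and the Alpha/Pedestals ratios R_alpha/Ped.

def pedestal_and_noise(responses_adc):
    r = np.asarray(responses_adc, dtype=float)
    return r.mean(), r.std()

def stability_ratios(ped_run_empty, alpha_run_empty):
    """Ratios for pedestal and noise, expected to stay close to 1."""
    p_ped, rms_ped = pedestal_and_noise(ped_run_empty)
    p_a, rms_a = pedestal_and_noise(alpha_run_empty)
    return p_a / p_ped, rms_a / rms_ped

rng = np.random.default_rng(0)
ped = rng.normal(100.0, 2.0, 10_000)    # Pedestals run, channel with no signal
alp = rng.normal(100.3, 2.1, 10_000)    # same channel in an Alpha run (another diode hit)
print(stability_ratios(ped, alp))
```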
Calibration of the photodiodes

In the Laser calibration of the TileCal, the photodiodes are used to measure the intensity of each Laser pulse sent to the calorimeter PMTs; this requires that the response of the photodiodes to a given energy does not vary with time. This potential evolution is monitored by periodically depositing a known amount of energy in the photodiodes, namely α particles of 5.6 MeV, using the Alpha runs.

Two methods have been used to study the evolution of the response of the photodiodes to α particles. In both methods, a high-statistics Alpha run is used as a reference and each Alpha run is compared to this reference. Four reference runs have been defined, each one at the beginning of a period as defined in the previous section. The reference run for period C contains 3.8 × 10^5 α events and the one for period D contains 1.1 × 10^6 events.

The first method uses the normalised mean value, i.e. the ratio of the mean value of the α spectrum in the Alpha run under study to the mean value of the α spectrum in the reference run.

The second method has been developed specifically for this study. It is called the scale factor, and it is based on the assumption that any variation in the measured α spectra is due to a variation of the gain of the photodiodes and/or their readout electronics, and therefore only implies a rescaling of the spectra and not a distortion. In this method, the spectrum under study is compared to the reference spectrum. A test spectrum is first built by rescaling the reference one, i.e. by multiplying the photodiode response for each recorded event by a constant number, called the scale factor. This test scale factor is then varied until the resulting distribution fits the α spectrum under study as well as possible, using the Kolmogorov-Smirnov test [19,20]. The resulting scale factor is then a measurement of the gain variation, being equal to one if no rescaling is needed to fit the reference distribution to the studied one.

To compare the two methods and their sensitivity to pedestal and noise variations, studies with simulated α spectra were performed. These simulated α spectra were generated by randomly drawing photodiode responses according to a measured high-statistics α distribution. They were then shifted (to simulate a pedestal variation), rescaled (to simulate a gain variation) or smeared (to simulate an increase of the noise), before being used as input to the photodiode calibration procedure. These studies have shown that the scale factor method has a smaller uncertainty (0.05 %) than the normalised mean value (0.1 %). They have also shown that the bias introduced by a wrong measurement of the pedestal is the same for the two methods: if the true pedestal differs from the subtracted one by x %, the bias is 0.2 × x %. Finally, as shown in the previous section, the noise in the Alpha runs may be up to 5 % larger or smaller than in the Pedestals runs. The simulation showed that an error of 5 % on the noise would bias the scale factor by a negligible amount (0.0003 %) and would have no measurable effect on the normalised mean value. The noise measured in the reference runs (Ref RMS_α) is shown in figure 13 as the horizontal lines and agrees well with the noise measured in the studied Alpha runs. Therefore, the scale factor is chosen as the main method to study the evolution of the response of the photodiodes, with the normalised mean value as a cross-check.
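A minimal sketch of the scale factor extraction is given below, assuming the spectra are available as arrays of per-event photodiode responses. The scan range and granularity are illustrative choices, and scipy's two-sample Kolmogorov-Smirnov statistic stands in for the exact implementation used in the analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

def scale_factor(reference_spectrum, studied_spectrum,
                 scan=np.linspace(0.95, 1.05, 201)):
    """Return the factor s minimising the KS distance between the
    rescaled reference spectrum (s * reference) and the studied one;
    s = 1 means no gain variation."""
    best_s, best_ks = 1.0, np.inf
    for s in scan:
        ks_stat, _ = ks_2samp(s * np.asarray(reference_spectrum),
                              np.asarray(studied_spectrum))
        if ks_stat < best_ks:
            best_s, best_ks = s, ks_stat
    return best_s
```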
Figure 14 shows the evolution of the normalised mean value and the scale factor measured in the Alpha runs. It appears that period D was more stable than the previous ones. Concentrating on the scale factor, the largest variation in periods C and D is 0.6 %, and it is usually even smaller, which is small enough to safely use the response of the photodiodes to Laser pulses as a reference.

Calibration of the calorimeter

As seen in equation 1.1, the reconstruction of the energy in TileCal depends on several constants, some of which are updated regularly. The main calibration of the TileCal energy scale is obtained using the Cesium system [2]. However, since a Cesium scan needs a pause in the proton-proton collisions of at least six hours, this calibration cannot be performed very often. Therefore, regular relative calibrations are performed between two Cesium scans using the Laser system. The method to compute the Laser constant f_Las introduced in equation 1.1 is based on the analysis of specific Laser calibration runs, taken about twice a week, for which both the Laser system photodiodes and the TileCal PMTs are read out. By definition, if the response of a channel to a given Laser intensity is stable (i.e. the gains of the PMT and of the associated readout electronics are stable), the Laser constant f_Las is 1.

In the next sections, the general method to determine these Laser calibration constants f_Las is first described, the uncertainty on these constants is then estimated, followed by the description of the actual procedure used to produce them.

Description of the method

A Laser calibration consists of two successive runs:

• a Low Gain run (labelled LG) with 10^4 pulses, using the filter with an attenuation factor of 3,
• a High Gain run (labelled HG) with 10^5 pulses, using the filter with an attenuation factor of 330.

For each type of run, the normalised response of channel i to pulse p is defined as R_{i,p} = PMT_{i,p}/D1_p, where D1_p is the signal measured by photodiode 1 in the Laser box and PMT_{i,p} is the signal measured by the TileCal photomultiplier connected to channel i. The analysis is performed with the mean value of this ratio over all pulses for each channel, denoted R_i = ⟨R_{i,p}⟩. The Laser calibration is a relative calibration with respect to a Laser reference run taken right after each Cesium scan. The raw relative gain variation of a channel i is defined as ∆_i = (R_i − R^ref_i)/R^ref_i, where R^ref_i is the normalised response of the TileCal channel i during the reference Laser run. However, due to inhomogeneities in the light mixing of the light splitter, or to radiation damage to the long clear optical fibres, the light intensity may vary with time and from fibre to fibre (one fibre being linked to half of the PMTs of the same TileCal module). Therefore, ∆_i is corrected by the term ∆^fibre_f(i) (f(i) being the number of the fibre coming from the Laser and connected to the given channel i), which takes into account the gain variation due to a light variation of this specific fibre, as explained in the next paragraph.
To compute ∆^fibre_f(i), an iterative method is used. First, the distribution of ∆_i for the channels fed by the same fibre is considered (see figure 15). In this distribution, only PMTs connected to the D cells for the long barrel modules and to the B13, B14, B15, D5 and D6 cells for the extended barrel modules are used, assuming that these selected channels are stable between two Cesium scans. The mean and the RMS of the ∆_i distribution are then calculated from these selected channels. To deal with single drifting channels that would bias the measurement of the shift due to the fibre itself, an iterative procedure is applied: all the channels whose gain variation differs from the mean by more than twice the RMS are excluded from the distribution for the next iteration. The mean gain variation after five such iterations is the correction ∆^fibre_f(i). The aim of this method is to correct the drifting channels of TileCal, not the variations of the fibres or any global variation. The correction factor f^i_Las for channel i is therefore built from the fibre-corrected gain variation ∆_i − ∆^fibre_f(i), and is the Laser calibration constant that enters in equation 1.1.

Evolution of the fibre correction

Figure 16 illustrates the inhomogeneity of the light distribution during the year 2012, estimated as the RMS of the distribution of all the ∆^fibre_f(i) with respect to the latest Laser reference run. At the beginning of each period, i.e. after each Cesium scan, the spread of the fibre correction factors is 0 by definition; it then increases, reaching up to 0.4 % over a one-month period.

Uncertainty on the correction factor

The method assumes that the reference D and B cells are stable. Systematic effects on the correction factor may arise, in particular from global variations of the PMT gains of these cells, due to two effects. The first effect, independent of the presence or absence of collisions, is a constant increase of the gains; it is negligible over a period of one month (less than 0.1 %) and was first observed in 2009 by the Cesium system, prior to collisions. The second effect, observed during collisions, is a decrease of the gains. Its origin is not completely understood, but the effect is less than 0.5 % between two Cesium scans.

Assuming that the ageing of the scintillators is negligible between two Cesium scans (typically one month), and taking into account the fact that the precision of the Cesium calibration constants is at the per-mil level, the Cesium system is a good tool to determine the precision and the systematic uncertainty associated with the Laser calibration constants. The two systems are expected to provide compatible measurements. To compare the Laser and the Cesium calibration constants, pairs of the closest Laser run and Cesium scan are selected.

For each pair of Laser run and Cesium scan, the ratio of the Laser calibration constants f_Las to the Cesium calibration constants f_Cs is considered. Only channels for which the quality of both calibration constants is good are included in this comparison. The distribution of the ratio is fitted with a Gaussian function. The mean obtained from the fit quantifies the compatibility of the two calibration systems, and its difference from unity (called δ) is interpreted as the systematic uncertainty on the Laser calibration constants. The σ obtained from the fit can be interpreted as the statistical precision of the Laser system, assuming that the uncertainty on the Cesium calibration constants is negligible with respect to the Laser one (it is therefore a conservative estimate of the Laser system precision).
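The iterative fibre correction lends itself to a compact sketch. The version below is illustrative (the real analysis also applies the reference-cell selection described above): it takes the raw gain variations ∆_i of the selected channels fed by one fibre and returns ∆^fibre after five iterations of the 2-RMS exclusion.

```python
import numpy as np

def fibre_correction(delta_i, n_iter=5, n_rms=2.0):
    """Mean gain variation of the channels on one fibre, with channels
    farther than n_rms * RMS from the current mean excluded at each
    iteration, so that single drifting channels do not bias the result."""
    d = np.asarray(delta_i, dtype=float)
    keep = np.ones(d.size, dtype=bool)
    for _ in range(n_iter):
        mean, rms = d[keep].mean(), d[keep].std()
        keep = np.abs(d - mean) < n_rms * rms
    return d[keep].mean()
```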
Figure 17 shows a comparison between Laser and Cesium calibration constants, i.e. the ratio f_Las/f_Cs, for the long and extended barrels, for a one-month period in 2012. Fitting the same distributions for 11 periods in 2012 (see results in table 1), the systematic uncertainty on f_Las is evaluated to be at most 0.21 % for the long barrel and 0.56 % for the extended barrels, while the statistical uncertainty of the Laser system varies between 0.22 % and 0.46 %. The second period corresponds to a period of high LHC luminosity. As anticipated, this high luminosity has a more significant effect on the PMTs of the extended barrels than on those of the long barrel, as can be seen from table 1. The statistical and systematic uncertainties of the correction factors obtained using the Laser system are thus estimated as:

• statistical uncertainty: 0.3 %
• systematic uncertainty for the long barrel: about 0.2 %
• systematic uncertainty for the extended barrels: about 0.5 %

Combining these results, the overall precision is estimated to be 0.4 % and 0.6 % for the long and the extended barrels respectively. However, the precision of the Cesium calibration constants (of the order of 0.2 %) is not completely negligible with respect to that of the Laser, so the above values must be considered as upper limits rather than exact values.

Table 1. Difference between the mean and unity (δ) and spread (σ) of a Gaussian fit to the distribution of the ratio f_Las/f_Cs during 2012. The value of δ is always positive because the reference cells, assumed to be stable, nonetheless show a small decrease of the gain, yielding a systematic effect.

Determination of the calibration constants

In equation 1.1, f_Las represents the correction of the PMT gain variation computed with the Laser system. However, only channels that have undergone a significant deviation are corrected, i.e.
the value of f_Las is set to 1 if the measured gain variation is smaller than a predetermined threshold of about three times the overall precision of the Laser system, i.e. a threshold of 1.5 % for the long barrel and 2 % for the extended barrels. The low gain (LG) and the high gain (HG) Laser runs are used to determine the Laser calibration constants. However, the LG data are more precise than the HG data, because the HG signal amplitude is much smaller than the LG one (a factor 100 between them for the same channel). Therefore, the LG Laser calibration constants are used for both gains, while the HG Laser calibration constants are used for cross-checks. A readout electronics issue can be revealed by an incompatibility between the LG and the HG Laser calibration constants. The production of the Laser calibration constants f_Las therefore follows this procedure:

• for each pair of LG and HG runs, a Laser calibration constant is computed,
• a channel is corrected if its LG gain variation is larger than 1.5 % (2 %) in the LB (EB),
• the compatibility of the LG and HG Laser calibration constants is required (both constants with the same sign and above the thresholds), otherwise the Laser calibration constants are set to 1; this ensures that the calibration constant corrects only drifts due to the PMTs and not problems in the readout electronics itself, which are covered by the CIS,
• a limit is applied to the values of the constants, corresponding to a deviation between −60 % and +60 %, since a healthy channel is not supposed to drift to such values; however, a variation of up to −90 % is allowed for channels in a module in reduced-HV mode,
• if a channel is flagged as bad, its Laser calibration constant is set to 1, whatever the nature of the problem; this flag is very often due to readout electronics, to a gain variation or drift faster than the Laser run frequency, or to corrupted data, and the Laser system is not meant to correct these cases.

Based on these criteria, only 60 channels among ∼ 10^4 could not be corrected by the Laser system during the LHC run 1.

A few additional conditions are needed for three different types of channels: those linked to the E3 and E4 cells, the channels in reduced-HV modules, and some channels that have an erratic behaviour, likely due to readout electronics problems. These latter channels are not calibrated.

The channels linked to the E3 and E4 cells are not calibrated by the Cesium system, while the Laser system monitors all the TileCal channels, including those connected to these special cells. The absence of a Cesium calibration implies that there are no references for these channels. The chosen solution is to monitor these channels and provide constants with respect to a reference date, set to March 18th 2012, the date of the first Laser reference run in 2012. Even though the precision of the Laser system over a period of one year has been estimated to be less than 2 %, it is sufficient to calibrate these highly drifting cells.

Some reduced-HV modules cannot be calibrated by the Cesium system. As for the E3 and E4 cells, these channels are monitored, and Laser constants are computed starting from the Laser reference run taken just before the beginning of the reduced-HV mode period.

Finally, the channels linked to the E1 and E2 cells and to the MBTS (see section 1.1) are not corrected by the Laser system.
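The channel-by-channel selection rules above can be summarised by the following sketch. The mapping from the fibre-corrected gain variation to the stored constant (here simply 1 + ∆) is a simplifying assumption for illustration; only the decision logic reflects the text.

```python
def laser_constant(delta_lg, delta_hg, barrel, is_bad, reduced_hv=False):
    """Return the f_Las value to store for one channel, given its LG
    and HG fibre-corrected gain variations."""
    if is_bad:
        return 1.0                                   # bad channels: never corrected
    threshold = 0.015 if barrel == "LB" else 0.020   # 1.5 % (LB), 2 % (EB)
    if abs(delta_lg) < threshold:
        return 1.0                                   # deviation not significant
    if delta_lg * delta_hg <= 0 or abs(delta_hg) < threshold:
        return 1.0                                   # LG/HG incompatible: electronics issue
    lower = -0.90 if reduced_hv else -0.60           # allowed drift window
    if not (lower <= delta_lg <= 0.60):
        return 1.0                                   # unphysical drift for a healthy channel
    return 1.0 + delta_lg                            # LG constant used for both gains
```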
Over the large number of runs recorded and analysed in 2012, only 32 pairs of Laser runs were needed to calibrate the data. Indeed, if several consecutive runs lead to compatible constants, only the constants from the first pair are implemented for the full period. The main periods during which a large number of channels had to be calibrated (at most 432 in the long barrel and 148 in the extended barrels) are the end of April and the beginning of June 2012. During these periods, a high luminosity was delivered by the LHC, producing high currents in the PMTs and thus modifying their gains. The other channels that needed calibration in 2012 were the 224 channels reading the E3 and E4 cells, as well as the 167 channels of the reduced-HV modules.

Calibration with pulses during physics runs

Up to now, the Laser calibration constants have been computed using dedicated Laser calibration runs, recorded on a regular basis between collision runs. However, as explained in section 2.6, the Laser pulses are also emitted during collision runs, and are so far only used to monitor the calorimeter timing (see section 5). With the increase in instantaneous luminosity expected from the LHC, resulting in possible fast variations of the gains of the TileCal PMTs during collisions, the ability to monitor these gains during the physics runs will become important. Technically, the main difference with respect to standard Laser runs is that the number of Laser events is not fixed, since it depends on the length of the physics run and therefore on the beam conditions. An additional study showed that, applying the method described in the previous paragraphs, Laser calibration constants can be derived with good precision using these runs: the statistical precision with 5000 pulses, corresponding to a rather short physics run (85 minutes), is similar to that of a standard calibration run (0.3 %). Therefore, physics runs will probably be used in the future to derive Laser calibration constants.

Timing monitoring and correction

The TileCal provides not only a measurement of the energy deposited in the calorimeter, but also the time at which this energy was deposited. This information is exploited in particular for the removal of signals that do not originate from proton-proton collisions, as well as for time-of-flight measurements of hypothetical heavy slow particles entering the calorimeter. Therefore, the time synchronisation of all calorimeter channels is one of the important issues in the calibration chain.

Time reconstruction

As described in section 1.2, the signal coming from the PMTs is digitised every 25 ns, and seven samples are read out after a positive level-1 trigger decision. The optimal filtering (OF) method applied to these samples determines the total amplitude of the signal, but also the signal phase, defined as the phase of the analog signal peak with respect to the TTC clock signal corresponding to the fourth sample. The precision of this measurement is better than 1 ns [10,2]. The optimal filtering method is applied online to all channels and later re-computed offline in channels that have sufficiently large signals.
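The OF estimate itself reduces to linear combinations of the samples. The sketch below shows this generic form, assuming precomputed weight sets a and b; the actual weights, and their dependence on the assumed phase, are part of the TileCal reconstruction and are not reproduced here.

```python
import numpy as np

def optimal_filter(samples, a_weights, b_weights):
    """Amplitude and phase estimates from the seven 25 ns samples as
    weighted sums; the phase is measured with respect to the clock
    edge associated with the fourth sample."""
    s = np.asarray(samples, dtype=float)
    amplitude = np.dot(a_weights, s)
    phase = np.dot(b_weights, s) / amplitude
    return amplitude, phase
```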
However, the parameters of the OF method depend on this signal phase, so a bad setting of the timing of a channel produces a wrong measurement of the signal amplitude. Studies have shown that a difference of 25 ns between the actual and assumed phases degrades the reconstructed energy by 35 %. For timing differences of a few nanoseconds, the effect on the measured energy is small, but the corresponding cells could be excluded from the reconstruction, being falsely tagged as not coming from proton-proton collisions.

The PMT signal digitisation is performed by an electronic board called the digitiser, which processes the signals from six PMTs. Only the timing of each digitiser can be adjusted, independently of the others; inside a digitiser, the six channels have the same relative timing.

The measurement of the pulse timing is made in two steps. First, the phases of the digitisers are set so that the physics signals from particles originating at the interaction point and travelling at the speed of light peak close to the fourth sample. Second, the residual offsets t_phase are measured, using dedicated runs, as the average of the reconstructed OF time t_OF. The channel time is then defined as t^phys_channel = t_OF − t_phase. This value should of course be close to zero for standard particles coming from the interaction point at nearly the speed of light.

Timing jumps

During data taking, it was discovered that digitisers can suddenly change their timing settings due to a mis-configuration, either at the beginning of a run or after an automatic power cycling of the low voltage power supplies (LVPS) feeding the TileCal front-end electronics. This mis-configuration of the time settings is referred to in the following as a timing jump. An example of such behaviour can be seen in figure 18, where the reconstructed time of a channel is shown as a function of the luminosity block, the elementary unit of time in a run (one or two minutes, depending on the data-taking period) for which parameters such as calibration constants can be modified if necessary. This kind of feature has a negative impact on the calorimeter performance, since the reconstructed energy and time are mis-measured during the period affected by the timing jump.

As stated previously, the Laser system also emits light pulses during physics runs. Since all calorimeter channels are exposed to the Laser light at the same time, the statistics is sufficient to determine the reconstructed time to a precision of about 1 ns. Because the Laser is not synchronised with the LHC clock, the reconstructed time in each channel must be corrected for the Laser phase using the Laser TDC information t_TDC. In addition, the time of arrival of the Laser pulses at the TileCal PMTs differs from the signal time of particles from collisions, due to the path followed by the Laser light. Therefore, in a channel perfectly timed-in for physics, the Laser pulse arrives at a different, channel-dependent time, denoted t^laser_ref. The channel time for Laser events is then defined as t^laser_channel = t_OF − t_TDC − t^laser_ref. If the time setting of the digitiser is correct, t^laser_channel should be close to zero. The Laser events are recorded in parallel with the physics data taking, and this stream is reconstructed immediately once the run finishes. The reconstructed Laser times t^laser_channel are histogrammed for each channel as a function of the luminosity block. Since this time is supposed to be close to zero, the monitoring program searches for differences from this baseline. Identified cases, i.e. potential timing jumps, are automatically reported to the data quality team for manual inspection.
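A sketch of such a monitoring scan is shown below; the 5 ns threshold is an illustrative value, not the one used by the actual monitoring program.

```python
import numpy as np

def candidate_timing_jumps(t_laser_channel, threshold_ns=5.0):
    """Given the per-luminosity-block Laser times of one channel
    (expected to be close to zero), return the block indices where the
    time departs from the channel baseline by more than threshold_ns."""
    t = np.asarray(t_laser_channel, dtype=float)
    baseline = np.median(t)   # robust against the jump itself
    return np.flatnonzero(np.abs(t - baseline) > threshold_ns)
```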
If the timing jump is confirmed, the t_phase value of the corresponding channel is modified for the affected period, which recovers a correct reconstructed time, as demonstrated in figure 19. These Laser results are already available during the calibration loop and thus allow the time constants to be corrected before the full data processing starts.

Almost all timing jumps detected with Laser events can be corrected in the subsequent data processing. However, a few digitisers turned out to be very unstable, exhibiting a very large number of timing jumps. Such channels are flagged as having bad timing; this flag prevents the time from these channels from being used in the subsequent reconstruction of physics objects.

Impact of the timing jumps on the calorimeter time performance

The overall impact of the timing jump corrections on the reconstructed time was studied with jet collision data, using a subset of the 2012 data corresponding to 1.3 fb−1 of collected physics data. To reduce the dependence of the reconstructed energy on the time, channels were required to have an energy E_channel > 4 GeV while still being read out in high gain. The results are displayed in figure 20, where the reconstructed time is shown for all calorimeter channels with and without the timing jump corrections. While the Gaussian core, corresponding to channels (and events) not affected by timing jumps, remains essentially unchanged, the timing jump corrections significantly reduce the number of events in the tails. The overall RMS improves by 9 % (from 0.90 ns to 0.82 ns after the corrections are applied).

Pathological channels monitoring

For a few channels, the monitoring shows that the PMT input high voltage is unstable. This instability may be due to a genuinely unstable input voltage or to a failure of the monitoring system. In order to disentangle the two possibilities, the Laser system can be used to measure the real gain variation (as described in the previous sections). Figure 21(a) presents an example of such pathological channel behaviour. In this case, the Laser system does not observe any variation of the PMT gain, meaning that the actual input high voltage must be stable and that the drift observed in the monitoring is not real.

In the case of figure 21(b), the Laser system confirms the instability of the high voltage distribution system, the very large gain variations measured by the Laser system being in agreement with the variations computed from the high voltage monitoring.

Conclusions

The Laser system is one of the three calibration systems of the ATLAS Tile Calorimeter. It is a key component for monitoring and calibrating the 9852 TileCal photomultipliers and their associated readout electronics. The Laser calibration constant f_Las applied to each channel has been measured regularly, with a statistical uncertainty at the level of 0.3 %, and is one ingredient of the TileCal cell energy reconstruction. Comparing these correction factors with the ones derived with the Cesium calibration system, a good compatibility between them was found, and the systematic uncertainty on the Laser calibration constants is estimated to be 0.2 % for the long barrel modules and 0.5 % for the extended barrel modules. The Laser system has also been used to monitor the timing of the TileCal front-end electronics during physics runs, making it possible to correct for timing jumps caused by mis-configurations following power supply trips. It has also been used to monitor the behaviour of pathological calorimeter channels.
In order to improve the stability and the reliability of the system for the LHC run 2, a major upgrade was performed in October 2014, including completely new electronics, ten photodiodes to measure the light at various stages in the Laser box, and a new light splitter.

Figure 2. Schematic of the cell layout in a plane parallel to the beam axis, showing only the positive η side (the detector being symmetric with respect to η = 0). The three radial layers (A, BC and D) are also visible. Special scintillators, called gap (E1 and E2) and crack (E3 and E4) cells, are located between the barrel and the endcap.

Figure 4. Diagram of the contributions of the hardware calibration systems to the energy reconstruction.

Figure 5. Sketch of the Laser box, showing the Laser head (1), the semi-reflecting mirror (2), the light mixer (3) that distributes the light to the photodiodes box (4) and the two PMTs (5), the filter wheel (6), the shutter (7) and the output liquid light guide (8).

Figure 6. Sketch of the light splitting scheme. The main light splitter distributes the light to 400 optical fibres, 256 for the two extended barrels, 128 for the long barrel and 16 spares. In each module, the secondary light splitter distributes the light to 17 (EB) or 48 (LB) fibres, connected to 16 (EB) or 45 (LB) photomultipliers.

Figure 11. Sequence of signals used to trigger a Laser pulse. The time (tl) between the reception of the trigger signal by the Laser pump and the pulse emission depends on the pulse intensity. The programmable delay (d2) is chosen as a function of the intensity in order to keep d2+tl constant.

Figure 13. Evolution of the noise value measured in the Pedestals runs and in the Alpha runs, and their ratio, for the first photodiode, from February 2011 to February 2013. The error bars represent the statistical uncertainties on the averages and the ratios.

Figure 16. Inhomogeneity due to light distribution instabilities as a function of time.

Figure 17. Example of the distribution of the ratio f_Las/f_Cs during one month, for the long barrel (a) and the extended barrels (b). The runs were taken on July 9th and August 5th, 2012.

Figure 20. Impact of the timing jump corrections on the reconstructed channel time in jet collision data. Shown are all TileCal high gain channels with E_channel > 4 GeV that are part of reconstructed jets. The data correspond to 1.3 fb−1 of physics data collected in 2012.

Figure 21. Example of pathological channels monitoring in 2012. These plots compare the gain variation expected from the high voltage monitoring (blue dots) with the one measured by the Laser (red squares) and the Cesium (green dots) systems. The vertical structures are due to ON/OFF switchings and are expected; between two switchings, and for normal channels, the gain should be constant.
Figure 12. Evolution of the pedestal value measured in the Pedestals runs and in the Alpha runs, and their ratio, for the first photodiode, from February 2011 to February 2013. The error bars represent the statistical uncertainties on the averages and the ratios. The large variation in early 2013 is due to the stabilisation of the readout electronics after the end-of-year shutdown, during which all systems were off.

Figure 14. Evolution of the scale factor and the normalised mean value of the α spectra from February 2011 to February 2013. The error bars represent the statistical uncertainties.

Figure 15. Example of the distribution of ∆_i for all channels connected to one fibre. The average of this distribution is ∆^fibre_f(i) (see text).
Quantum computation in optical lattices via global laser addressing

A scheme for globally addressing a quantum computer is presented, along with its realisation in an optical lattice setup of one, two or three dimensions. The required resources are mainly those necessary for performing quantum simulations of spin systems with optical lattices, circumventing the necessity for single qubit addressing. We present the control procedures, in terms of laser manipulations, required to realise universal quantum computation. Error avoidance with the help of the quantum Zeno effect is presented, and a scheme for globally addressed error correction is given. The latter does not require measurements during the computation, facilitating its experimental implementation. As an illustrative example, the pulse sequence for the factorisation of the number fifteen is given.

I. INTRODUCTION

Over the past few years, Benjamin [1,2,3,4], in particular, has followed up an initial proposal by Lloyd [5] on the concept of global control schemes for quantum computation. The motivation for such schemes is simple: instead of needing individual elements for the manipulation of every single qubit in a system, which is technologically very difficult, we control a very limited set of fields that are applied to all the qubits in the system. It is in this sense that we use the term 'global addressing': we will use lasers with a beam width that addresses the whole ensemble of atoms. While every qubit is given the same commands, we shall demonstrate techniques that localise the effects so as to carry out operations on single qubits only. Recently [6], a practical scheme has been proposed to implement such ideas with optical lattices (see also [7]). Here we present another possible scheme, which has a number of advantages in terms of ease of implementation and especially scalability, since we can perform computation in two, or even three, dimensions of an optical lattice. As we shall see in the following, the present scheme makes it possible to incorporate error correction and fault tolerance in a straightforward manner.

The way in which we implement global control is relatively simple. We have a register array of qubits on which we wish to perform computation, and an auxiliary array for which we retain single qubit control over a certain lattice site [8]. This enables us to initialise a pointer (referred to as a control unit or marker atom in previous works), which is essentially a unique component that we can use to localise operations while applying global ones. Initially, the pointer is just the same atom as all other qubits, trapped in the lattice in the state |0⟩, except that we individually rotate it to the state |1⟩. By exclusively using global addressing, we can move this pointer atom relative to the computational qubits and apply controlled-U operations, which then effect U solely on the qubit adjacent to the pointer. Two-qubit gates can be implemented in a similar way, using a three qubit gate such that, in the presence of the pointer, the desired gate is enacted on two neighbouring qubits. This control set is sufficient for universal quantum computation. In comparison with schemes requiring individual addressing, the only additional resources are those required to move the pointer around the lattice structure. For a d-dimensional structure of N qubits, this increases the total number of steps by a factor of order N^(1/d).
Significantly, error avoiding and error correcting techniques can be implemented that respect the global addressing requirement and render our computational scheme favourable for scalable quantum computation.

The evolution of the system can be described by the two-species Bose-Hubbard Hamiltonian

H = −∑_{σ=a,b} J_σ ∑_{⟨i,j⟩} â†_{σ,i} â_{σ,j} + (1/2) ∑_{σ,σ′} U_{σσ′} ∑_i â†_{σ,i} â†_{σ′,i} â_{σ′,i} â_{σ,i},

which comprises tunnelling transitions of atoms between neighbouring sites of the lattice (with couplings J_σ) and collisional interactions between atoms in the same site (with couplings U_{σσ′}). Here E_R = ℏ²k²/(2m) is the recoil energy, k = 2π/λ, m is the mass of the atoms, V_0 is the potential barrier between neighbouring lattice sites, and a_s is the s-wave scattering length of the colliding atoms. In the standard tight-binding approximation, the tunnelling coupling decreases exponentially with the barrier height, J ≈ (4/√π) E_R (V_0/E_R)^(3/4) exp(−2√(V_0/E_R)), while the collisional coupling scales as U ≈ √(8/π) k a_s E_R (V_0/E_R)^(3/4). The collisional couplings can be arranged to take significantly large values via Feshbach resonances [9,10]. Small tunnelling couplings, with respect to the collisional ones, can be produced by increasing the amplitude of the laser fields comprising the optical lattice. In this way, the system can be brought into the Mott insulator phase with only one atom per lattice site [11,12,13,14]. We encode the logical |0⟩ and |1⟩ of the computation in the |a⟩ and |b⟩ ground states of the atom, which correspond to the populations of the different modes of the optical lattice. As seen in Figure 1(a), the state of an atom can be transferred between modes a and b by performing Raman transitions.

B. Simulation of spin Hamiltonian

We initially consider that the optical lattice system is brought into the Mott insulator regime with only one atom per lattice site. By manipulating the tunnelling couplings, it is possible to obtain a nontrivial evolution suitable for performing quantum computation. The evolution due to the Bose-Hubbard Hamiltonian can be expanded in the small parameters J_σ/U_{σσ′}. Up to second order in perturbation theory, this expansion is given, in terms of the Pauli matrices [15,16,17], by an effective spin Hamiltonian of the form

H_eff = ∑_i [ B σ^z_i + λ^z σ^z_i σ^z_{i+1} + λ^⊥ (σ^x_i σ^x_{i+1} + σ^y_i σ^y_{i+1}) ].

The local field B can then be arbitrarily tuned by applying appropriately detuned laser fields. The effective couplings λ^(i) can be tuned by manipulating the amplitudes of the laser fields that generate the optical lattices. In particular, by activating only one of the two tunnelling couplings, e.g. J_a, we can obtain the diagonal interaction σ^z_i σ^z_{i+1} along all the qubits of the lattice. Up to local qubit rotations, this is equivalent to a series of control phase gates (CP). If instead we activate both of the tunnelling couplings with appropriate magnitudes, it is possible to activate the exchange interaction σ^x_i σ^x_{i+1} + σ^y_i σ^y_{i+1}. When applied for a suitable time interval, this results in a SWAP gate, exchanging the atoms at neighbouring lattice sites.

We have used an effective Hamiltonian to create the gates that we are interested in. The creation of these gates is studied in more detail in [18], where the error introduced by the real Hamiltonian is examined for experimentally sensible parameters. Currently achievable errors are shown to be of the order of 10^−3, which is small enough for an in-principle demonstration, even if it is not small compared to thresholds for fault tolerance [19]. Ref. [18] also shows how to create such gates without resorting to an effective Hamiltonian in the adiabatic regime, giving much shorter time scales for gate implementation.

C. Superlattices

As we have seen in the previous section, it is possible to control the tunnelling coupling constants by modifying the amplitude of the standing laser fields.
The way to avoid single atom addressing is to employ "superlattices" (see Figure 1(c)), i.e. superpositions of optical lattices with different wavelengths. This will eventually be sufficient for performing universal quantum computation. With superlattices we can manipulate the tunnelling couplings J^σ_i, and consequently the effective couplings λ, by varying the potential barrier V_0, through the exponential dependence of the tunnelling coupling on the barrier height. In particular, we shall employ two independent lattices whose spatial periods differ by a factor of two. This can be achieved, for example, with two pairs of laser beams, each one creating a lattice with period d_i = λ/[2 sin(θ_i/2)] that depends on the angle θ_i between them [6,20]. Hence, by choosing the θ_i appropriately, one can achieve a light-shift potential of the form

V(x) = U_1 cos²(kx) + U_2 cos²(kx/2 + φ),

where k = 2π sin(θ_1/2)/λ = 4π sin(θ_2/2)/λ and φ is the phase difference between the second pair of lasers, the first pair being taken to be in phase. By changing the amplitudes U_i and the phase φ, it is possible to obtain the control procedures necessary for the realisation of the presented scheme. In the same way, one can create the desired three dimensional lattice structures.

Raman transitions can be performed on every other qubit in a desired direction by employing similar structures of standing waves that are properly tuned, creating a two-photon transition between the atomic ground states a and b. If we denote the laser Rabi frequencies that couple the states a and b to the excited state by Ω_a and Ω_b respectively, then the coupling of the two states is given by Ω_a Ω*_b/∆, where ∆ is the detuning of the excited state, as shown in Figure 1(a). By positioning the lasers such that the Ω_σ have a sinusoidal spatial profile, we can activate the Raman transition on alternate rows only. We will commonly tune this to activate the transition on the register arrays without affecting the auxiliary ones.

III. QUANTUM COMPUTATION WITH SUPERLATTICES

In what follows, we show how to perform one and two qubit gates between any qubits. To do this, we need to transport the pointer qubit to any desired location. In particular, for a single qubit gate, we first transport the pointer next to the targeted qubit, q_1. A conditional unitary transformation is then applied between the auxiliary array and the register one. This acts only if the auxiliary qubit is in the state |1⟩, resulting in a gate solely on the qubit conditioned by the pointer. In order to perform a two qubit gate, we need to perform interactions between all three qubits: the two targeted ones, q_1 and q_2, and the pointer. In principle, there is much freedom in choosing the geometry for performing such steps. The minimal one dimensional case is rather cumbersome. Alternatively, one can use semi-one dimensional models, e.g. ladders, consisting of two parallel interacting arrays of qubits, one being the auxiliary array and the other the register. In terms of control procedures, the square configuration adopted here (see Figure 2) requires the least resources. The lattice comprises standing laser fields of wavelength λ, which we assume are perpendicularly oriented. We prepare the qubits such that one is placed at each site of the lattice, all in the state |0⟩. The pointer is then created by performing σ^x on one of these qubits. We consider that the register qubits form arrays along the x direction and occupy every other lattice array along the y direction. The auxiliary arrays between them are used for the transportation of the pointer qubit.
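Before describing these manipulations, the superlattice potential given above can be illustrated numerically. In the sketch below, distances are measured in units of the short lattice period (so k is normalised to 2π); the parameter values are illustrative only.

```python
import numpy as np

def superlattice_potential(x, U1, U2, phi, k=2 * np.pi):
    """Two superimposed standing waves whose periods differ by a
    factor of two: V(x) = U1 cos^2(kx) + U2 cos^2(kx/2 + phi)."""
    return U1 * np.cos(k * x) ** 2 + U2 * np.cos(k * x / 2 + phi) ** 2

# With phi = 0 the long-period lattice raises every other barrier of
# the short-period one (a negative U2 lowers them instead), so
# tunnelling is modified only across alternate links.
x = np.linspace(0.0, 2.0, 401)
V = superlattice_potential(x, U1=10.0, U2=4.0, phi=0.0)
```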
The idea is to use global laser addressing to perform all the manipulations of the lattice necessary to realise quantum computation on the register qubits. These manipulations, together with their physical implementation, are described in the following.

A. Transport of pointer qubit along the same or different auxiliary arrays

The transport of the pointer qubit along an array can be performed by activating a lattice as given in Figure 3(a). The superimposed lattices create minima between the pointer qubit and its appropriate neighbour on its left or right, resulting in the SWAP interaction between them. By exchanging the minima between neighbouring links, it is possible to move the pointer to any lattice site along the x direction. To transport the pointer qubit to a different auxiliary array, we superimpose the optical lattice given in Figure 3(b). This exchanges the auxiliary and the register arrays, resulting in the transport of the pointer along the y direction. These techniques can be combined to transport the pointer to any desired location in order to perform a one qubit gate on the desired qubit. This manipulation also plays a significant role when performing two-qubit gates. If we implement this interaction on the register arrays, then qubits that are separated by an even number of lattice sites can be moved next to each other. Qubits that are separated by an odd number of sites do not move relative to each other under this swapping mechanism.

B. One-qubit gates with common addressing

To perform a one-qubit gate on a certain qubit, q_1, we first move the pointer next to it by successive SWAP operations, as presented in the previous subsection. We then perform, by a Raman transition, the rotation U on all the register qubits without affecting the auxiliary qubits. This is possible if we activate the Raman transition between the ground states |0⟩ and |1⟩ of the atom by two standing laser fields along the y direction with periodicity 2λ, as illustrated in Figure 4. We then perform a control phase gate (CP) between the auxiliary and the register arrays. This is realised by standing laser fields similar to those presented in Figure 3(b). In contrast to the fields used to SWAP the qubits, where we lowered the intensities for both modes, we now aim to activate the tunnelling of only one of the atomic species [18], giving, effectively, a CP gate. This applies σ^z only to q_1, as the rest of the register qubits, coupled to the |0⟩ states of the auxiliary qubits, are not affected. Next, we apply the inverse rotation, U†, to all the register qubits using the same laser configuration as for the initial Raman transition. The overall effect is that all the register qubits return to their original state, while the targeted one is rotated by U σ^z U†, a one-qubit gate. Note that for the most general one-qubit gate we should apply CP twice, giving the evolution A σ^z B σ^z C with ABC = 1 [21]. However, a universal set of gates can be achieved without needing this more general form.

C. Two-qubit gates with common addressing

In order to perform a two-qubit gate between two particular qubits, q_1 and q_2, of the register, we move the qubits such that they neighbour the pointer. We have already specified how to do this if the qubits are separated by an even number of qubits.
If not, then we have to move the pointer next to q_1 (or q_2, whichever is more convenient) and perform a controlled-SWAP, reducing the separation between q_1 and q_2 by one site, after which they can be moved together. The need to move the two qubits together is common to all quantum computing schemes that rely on short range interactions, so, in comparison, this procedure does not introduce any additional overhead. By performing a three-qubit gate between q_1, q_2 and the pointer, such as the controlled-controlled-NOT gate (C²NOT), we effectively obtain a two-qubit gate between the targeted qubits. This three qubit gate can be constructed out of globally applied single qubit rotations and the two qubit gates that we have already used to create a localised single qubit gate. A suitable algorithm is given in Figure 6. We also use this construction to implement the controlled-SWAP. This is sufficient to create a two-qubit gate between q_1 and q_2 provided they are in the same row (i.e. on the same register). To perform such a gate between two qubits on different rows, we first have to move the qubits so that they are within one column of each other. This may require a controlled-SWAP gate using the pointer. The pointer then performs a controlled-SWAP to bring q_1 onto the auxiliary array. We then move the auxiliary array such that it is adjacent to the register containing q_2. We are then in a position to perform our gate, after which q_1 should be returned to its original position. The combination of all these procedures results in universal quantum computation for the two dimensional register, and can be trivially extended to three dimensions.

D. Qubit Measurement

The final step in performing quantum computation is the measurement stage. This can also be accomplished with the help of the pointer. We move the pointer so that it is on the same square as, but diagonally opposite to, the qubit we wish to measure, |q_1⟩ = α|0⟩ + β|1⟩. We then perform the C²NOT procedure presented in the previous subsection, but rotated by 90°. The effect of this is to entangle q_1 with the auxiliary qubit adjacent to the pointer, giving the state α|00⟩ + β|11⟩. If we then measure the auxiliary array using a standing wave with double the lattice period, this correctly measures q_1 without affecting the register array (except for the one qubit that we measure).

FIG. 6: The algorithm to perform a controlled-controlled-NOT around a square. The notation for the program is defined in Figure 8(b); in this notation, the programmed gate of (b) is written as C^{1,4}_1. This circuit is completely general in that it does not assume any of the bits to be classical. The employed operations are U = e^{−iσ_x π/8} and W = U† σ_z U. Specifically, in (a) the algorithm is presented to apply a two-qubit gate between q_1 and q_2 when q_1, q_2 and the pointer, P, are all on the same square; P′ is the other auxiliary qubit on the square, and starts in the state |0⟩. In (b) the commands required for performing the cc-NOT are presented, where the notation is given in Figure 8(b). The commands should be applied from top to bottom within a column and then from left to right.

Practically, there is a potential problem in implementing this idea with global addressing. When we perform a measurement as specified in [22], we choose which state to measure, either |0⟩ or |1⟩. If this state is occupied, we are at risk of losing the atom from the optical lattice.
Hence, we cannot measure the whole line of auxiliary qubits, because we would lose the pointer. This problem can be avoided if we use a technique similar to the one used for the single-qubit gates, and only measure every other qubit on the auxiliary array.

E. Pointer Initialisation

Up to now, we have assumed that we can create a pointer on a single lattice site. Technically, this is not a straightforward procedure, hence the desire for global addressing. However, as a one-off operation, we can achieve a single qubit rotation. Recall that, before we create the pointer, the system consists of a lattice of qubits all in the |0⟩ state. The way in which we intend to create a pointer is to use a tightly focused laser to create a σ^x rotation on a single qubit. In general, this laser will not have a sufficiently narrow Gaussian profile and will create small rotations on neighbouring qubits. To circumvent this problem, one can impose a second lattice of double wavelength along both dimensions, as seen in Figure 7(a). This lattice should address the eight nearest neighbours of the qubit we wish to rotate, causing continuous measurements of their state |1⟩ and thus preventing population of this state through the quantum Zeno effect [22,23]. This is accomplished by applying an additional laser with amplitude Ω that couples the state |1⟩ to an excited state |e′⟩, which spontaneously emits photons at a rate Γ. This is illustrated in Figure 7(b), where the unwanted Raman transition between a and b is also depicted. The efficiency of this scheme has been studied by simulating the evolution of the state of the atoms that neighbour the pointer. Figure 7(c) shows the results of this simulation, plotting the fidelity F of keeping the population of the neighbours fixed in the |0⟩ state, and the success rate P. The success rate is the probability that no photon emission occurs during the process. We have assumed that the laser has a width of 0.8λ. This proves that significant suppression takes place for physically sensible laser profiles, since the fidelity can be brought close to 1. If photons are emitted, they can be detected and the initialisation process repeated. In principle, further lattices can be applied, if necessary, to suppress the effect on the next-nearest neighbours.

Vollbrecht et al., in [7], introduce a potentially very interesting concept which generates pointer qubits from imperfections in the lattice (the globally addressed computation component of their paper is only of secondary nature). This idea not only creates pointers, but also removes any imperfections in the loading of atoms into the lattice. We can, in fact, incorporate the idea into our scheme as an alternative procedure for preparing the pointer. We would do this by following the scheme presented in [7] until ready to perform the computation. At this point, we add two additional steps: firstly, performing a controlled-NOT operation as defined in their scheme, so that the qubit adjacent to the pointer is placed in the |1⟩ state; secondly, ejecting the pointer qubit, so that there is only one qubit in each lattice site, thereby converting their pointer into ours. However, the authors of that paper accept that the number of steps required for this preparation is currently very demanding from an experimental perspective. It also requires a different set of controls from those which we use in the rest of the computation.
Both of these factors make it preferable to use the quasi-single qubit addressing outlined above.

IV. PHYSICAL IMPLEMENTATION OF SUPERLATTICES

In order to perform the previously presented quantum gates, we need to enable couplings on alternate rows and between alternate pairs. The required arrangement of lasers can be calculated by specifying the potential offset from the base trapping potential that we need to create across the 2D structure. Specifically, we need to ensure that zero offsets appear exactly where we want no interactions to occur. For example, let us assume that we want to create interactions as shown in Figure 8(b), and that the vertices where the qubits are located are separated by a distance of λ/2. The required potential offset V_off can be expanded as a sum of sine terms. Each term has a period and a direction; the period d specifies the angle between the pair of lasers required to create that term, through the relation d = λ/[2 sin(θ/2)]. This potential, illustrated in Figure 8(a), can be implemented by the combination of two independent pairs of lasers, each one producing a potential offset V^i_off. For example, for the term V^1_off, the required laser field has a wave vector along the direction of (2i − j)/√5 and a period d = 2π/(√5 π/λ) = 2λ/√5. As a result, we can employ lasers of the same wavelength as those creating the trapping potential, the doubling of the wavelength being produced by taking θ ≈ 68.0°. Similar setups can be used for generating the other control procedures of the previous section.

FIG. 8: The arrows indicate which qubits are interacting. (b) Definition of the initial labelling of qubits and labels for the different interactions. V c-NOT means apply a c-NOT operation vertically, with the auxiliary as control. C^{q_2,P′}_γ means perform a cc-NOT operation with qubit q_2 the target, and P′ the unused qubit on the square γ.

As a final point, we consider the influence of the superlattices on the harmonicity of the trapping potentials of the atoms. Around the potential minima, the superposition effect can be expanded in sine or cosine functions. The qubits that remain uncoupled only acquire even powers in the expansion, including quadratic terms, and hence their locations remain unchanged. The qubits that become coupled, however, acquire a linear term, and hence the trapping minima of the coupled qubits actually move together. This is not a problem provided the offset potential remains small compared to the trapping potential and the superlattices are turned on adiabatically, so that the atoms remain in the ground states of the trapping potential. This also shows that the trapping frequencies of the interacting qubits remain unchanged when the additional lattice field is introduced.

V. PROPOSAL FOR THE EXPERIMENTAL REALISATION OF FACTORING 15

The standard experimental demonstration of a quantum computation is to factor 15 [24], the smallest meaningful factorisation, since the method fails for even numbers and powers of primes. The employed algorithms include a significant element of simplification (for a full description of such methods, see [24,25]). Since we can already factor 15 by classically calculating all the steps in the algorithm, we can quickly realise that most of the work is redundant, enabling us to reduce the required control steps. This will not be possible when we consider factoring much larger numbers.
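The redundancy exploited here is easy to check classically. The sketch below computes the multiplicative order of a modulo N, confirming that the order of 11 modulo 15 is 2 (only the least significant bit of x matters) while the order of 7 is 4 (the 'hard' case discussed below, acting on two bits of x).

```python
def order(a, N):
    """Smallest r > 0 with a**r % N == 1: the period that
    Shor's algorithm extracts."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

assert order(11, 15) == 2   # 11^2 = 121 = 1 (mod 15)
assert order(7, 15) == 4    # powers of 7 mod 15: 7, 4, 13, 1
```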
The general factoring scheme for a number N that is K bits long consists of taking 2K qubits (the first register) and applying a Hadamard to each of them. This creates an equally weighted superposition of all numbers x from 0 to 2^{2K} − 1. We then take another K qubits (the second register) and calculate a^x mod N, thus entangling them with the first register. Here a is a number randomly selected from the numbers smaller than N that satisfy gcd(a, N) = 1; for N = 15, a ∈ {1, 2, 4, 7, 8, 11, 13, 14}. This calculation will, in general, also require some ancillas to act as scratch space. The next step is to apply the inverse Fourier transform on the first register and then measure these qubits. The result is used as the input to a classical continued fractions algorithm, which finally yields one of the factors of N.

We start the simplification process by noting that, in the case of N = 15, a^4 mod 15 = 1 for all a. This means that only the two least significant bits of x affect the calculation on the second register. Furthermore, if we 'randomly' select a = 11 (say), we find that a² mod 15 = 1, and hence only the least significant bit of the first register matters. The computation that we have to perform then becomes very simple, while still creating the required entangled state. The more standard case to demonstrate is the choice of a = 7, where we have to act on two of the bits of x. The circuit diagram for this is given in Figure 9 [24]. In Table 9(b) we give the set of commands required to perform this factorisation, with the notation defined in Figure 8(b). The minimum device size is a grid of 18 × 2 qubits. In general, the size of the optical lattice for computing on N qubits needs to be 3N × 2 qubits. The need for this can be seen in Figure 8(b): if we were to apply H SWAP 2 with a device size of just N × 2, qubits r_4 and x_7 would try to move into empty lattice sites, and the interaction has different effects if those lattice sites are empty.

FIG. 9: The algorithm required for the 'hard' case of factoring 15. (a) shows the circuit diagram and (b) provides the required set of commands. The commands should be applied from top to bottom within a column and then from left to right. The notation is defined in Figure 8(b). Here U σ_x U† = √σ_z.

A. The Auxiliary Qubits

As with any proposal for quantum computation, decoherence is a significant issue that cannot be neglected. In a global control scheme, its significance is only amplified: such a scheme necessarily introduces more computational steps, so the algorithm takes longer to run, increasing the build-up of errors. Even more demanding is the requirement that our pointer qubit and, in fact, all the other qubits in the auxiliary array, are error free. If an error occurs in the auxiliary array, it will affect every gate throughout the rest of the computation. Such an error would be catastrophic for the computation and needs to be prevented. This can be achieved by noting that all the qubits of the auxiliary array are in classical states. These classical states are eigenstates of σ^z, and are thus unaffected by phase-flip errors. The effect of bit-flip errors can be reduced by using the quantum Zeno effect [22]. In principle, by continuous measurement, the probability of a bit-flip can be reduced to zero. Since all errors can be described in terms of phase flips, bit flips or a combination of the two, this renders the auxiliary array error free.
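The degree of suppression achievable by repeated measurement can be illustrated with a one-line estimate. In the idealised sketch below (instantaneous projective measurements, no photon loss), a qubit subject to an unwanted Rabi coupling of angular frequency Ω survives in |0⟩ with probability cos²(Ω∆t/2) per measurement interval, so frequent measurement pins it to |0⟩.

```python
import numpy as np

def zeno_survival(omega, t_total, n_meas):
    """Probability of still finding the qubit in |0> after n_meas
    equally spaced projective measurements during a drive that would
    otherwise rotate it; tends to 1 as n_meas grows."""
    dt = t_total / n_meas
    return np.cos(omega * dt / 2.0) ** (2 * n_meas)

# A drive with omega * t_total = pi flips the qubit with certainty if
# unwatched; with 100 measurements the survival probability is ~0.98.
print(zeno_survival(np.pi, 1.0, 100))
```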
While applying the quantum Zeno effect we do not wish to lose the pointer during the measuring procedure. Consequently, we have to employ the same optical superlattice arrangement so that only every other qubit of the auxiliary array is measured. This also means that we can never measure the state of the pointer itself. Instead, we use the measurement results of the other qubits to indicate how likely it is that an error has occurred on the pointer. If some of the auxiliary qubits are found in the |1⟩ state, then we have to consider it likely that the pointer has also been affected and stop the computation. In the following subsection, we present how error correction can be performed on the register qubits. In principle, similar concepts can be applied to the pointer. However, our current method requires significant modification of the physical models and is in need of optimisation. This is an avenue for future study.

B. The Register Qubits

Recent work [4] has shown that error correction can be implemented on globally controlled quantum computers, and that the architecture even supports fault tolerance. The basic idea is that the qubits are divided up into blocks of m qubits. These m qubits constitute one encoded qubit for error correction. A typical error-correction phase of a computation involves extracting an error syndrome from the encoded qubits, thus finding out what errors have occurred and, depending on the results held on ancillas, correcting for the error. The way one can implement this in a globally controlled structure is to add extra quantum gates that feed back from the ancillas to correct for the errors, without ever actually measuring them. We also require a 'switching station', which is a classical pattern of qubits, for every encoded block of qubits. This switching station allows us to switch pointers on and off with the correct sequence of global pulses. The patterning required for error correction is simply a |1⟩ for one of the blocks and a |0⟩ for all the others, as in Figure 10. If we start with a pointer every m qubits (and we select m to be even for the sake of the quantum Zeno effect, as described above), then the pointers start off with the correct parallelism for error correction. To switch to a phase where we want to apply operations to a single qubit, we just perform a controlled operation using the switching station as the control.

There are several drawbacks with this implementation of error correction. The first is that such algorithms are currently outside experimental feasibility because they require several thousand steps ([4] gives specifications for two common codes), which would take longer to perform than current decoherence times allow. The second is that these switching stations are themselves susceptible to errors; we have similar problems applying the quantum Zeno effect to them as we do to the auxiliary qubits. Finally, we have to take care of the ancillas. At the start of each error-correcting phase, we require our ancillas to be in the |0⟩ state. At the end of this phase, they will contain an unknown error syndrome. These qubits either have to be reset or replaced by fresh ancillas. In the discussion of 3D structures in the following section, we will present how the third dimension could be used to provide a significant supply of fresh ancillas. This greatly simplifies the experimental implementation of quantum error correction algorithms.
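The Zeno suppression invoked repeatedly above is easy to quantify in the simplest two-level model. A minimal numerical sketch, assuming the bit-flip process is modelled as Rabi rotation at rate w interrupted by n projective measurements in total time T:

    import numpy as np

    def zeno_survival(w, T, n):
        # Between measurements the qubit rotates by angle w*T/n; each
        # projective measurement returns it to |0> with probability
        # cos^2(w*T/(2n)), so after n measurements:
        return np.cos(w * T / (2 * n)) ** (2 * n)

    for n in (1, 10, 100, 1000):
        print(f"n = {n:4d}: survival = {zeno_survival(1.0, np.pi, n):.4f}")

As n grows the survival probability approaches 1, which is the sense in which continuous measurement freezes the auxiliary array in its classical pattern.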
VII. PROPOSED COMPUTATIONAL SCHEMES AND CONCLUSIONS

There are many different ways that our scheme can be used for performing quantum computation with three-dimensional optical lattices. The choice between them can be made by setting different priorities, such as efficiency of scaling, signal size, etc. We outline some of the possible uses below.

The conceptually simplest model is to arrange the computation on a one-dimensional ladder, consisting of N qubits on a single register array and a single auxiliary array, also consisting of N qubits. This could be repeated across the whole three-dimensional structure, so we would have approximately N² identical copies, all running in parallel. This is the least scalable architecture, since qubits are separated by O(N) steps. However, it is very useful in terms of signal strength when we make a measurement at the end of the computation: using the computer with many copies running in parallel gives an ensemble computation, where expectation values of the computational result can be given at the end. Alternatively, we can perform computation on a two-dimensional grid by moving the registers relative to the auxiliaries with a series of SWAPs in the y direction. All qubits are then within 2√N steps of each other, so the structure scales more easily, and we could still get a reasonable signal strength from the parallel planes in the third dimension. As a final alternative, we could perform computation on a single plane, leaving all the other planes in the |0⟩ state, kept there possibly by the quantum Zeno effect. These other planes can then be used for ancillas in the error-correction phase. They can be accessed easily because the computational plane can be moved through the other planes in the same way that the auxiliary array can be moved past the register arrays. Naturally, other possible arrangements exist, but these three illustrate some of the simple ideas that can be used.

It is also important to remember that we are free to rearrange the labelling of our qubits so as to minimise the path of the pointer. Some operations can also be reorganised to minimise its path. These procedures can have a significant effect on the number of steps required to implement an algorithm, and hence also on the errors and the practicality. Such an example was given in a previous section, where a convenient arrangement was given for the implementation of factoring 15. The significant remaining issue is the number of gates that can be implemented within the system's decoherence time.

Our scheme has significant differences in comparison to those already proposed [6,7]. At the most fundamental level, we interact qubits in a different way, making use of the tunnelling interaction [18] instead of collisional couplings [26]. These are just two different experimental techniques, and the current state of the art provides little to pick between them for performing global-addressing quantum computation. The collisional schemes, however, have significant complications and/or drawbacks. Both of these schemes use an additional qubit whose presence, or lack thereof, is crucial in performing quantum gates. This gives the pointer a very special position. In the present proposal, the pointer is not that special: it is just the same as every other qubit, experiencing the same fields, etc. This makes experimental realisation easier, but also contributes to the complexity of the ideas required for the scheme.
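For concreteness, the O(N) versus 2√N step counts quoted above for the ladder and grid layouts compare as follows (a trivial sketch; the constants are taken at face value from the text):

    import math

    # Worst-case pointer separation: O(N) steps on the 1D ladder versus
    # about 2*sqrt(N) steps on the 2D grid, as quoted above.
    for N in (16, 64, 256, 1024):
        print(f"N = {N:5d}: ladder ~ {N:5d} steps, "
              f"grid ~ {math.ceil(2 * math.sqrt(N)):3d} steps")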
The work of [6] has significant issues associated with the initialisation procedure. It requires a filling of one qubit on every other lattice site, which has been demonstrated experimentally [20]; however, the pointer (referred to there as a marker atom) is an extra atom that has to be inserted into one of the gaps. Our proposal extends the work presented in these papers by including control sequences that are particularly suitable for global addressing, and it incorporates the concepts of error correction and avoidance [4]. It is unclear whether the scheme in [7] could be generalised to give the regular patterns required for error correction and fault tolerance, whereas the pointers in our proposal occupy no special place and can easily be prepared once a single pointer has been initialised.
8,583.4
2004-06-11T00:00:00.000
[ "Physics" ]
Synthesis, Characterization and Antimicrobial Activity of Imidazole Derivatives Based on 2-chloro-7-methyl-3-formylquinoline A series of oxazole and, from these, imidazole derivatives was prepared from 2-chloro-7-methyl-3-formylquinoline. The structures of all synthesized compounds were elucidated by elemental analysis and IR, ¹H NMR, and ¹³C NMR spectra. In addition, the compounds were assayed in vitro for their antimicrobial activity; it was revealed that some of the synthesized derivatives exhibit competent biological activity against both Gram-negative and Gram-positive bacterial species and against fungal microorganisms.

Introduction Nitrogen-containing five-membered heterocyclic ring systems have been described for their biological activity against various microorganisms [1,2]. Besides this, the chemistry of quinolines and imidazoles has also been reviewed in the literature [2-5]. Considerable interest has been created in the chemistry of quinoline derivatives due to their versatile therapeutic activities, such as bactericidal, antihistaminic, antimalarial, antidepressant, analgesic, anti-ulcer, antiviral, herbicidal, antitumor, anti-allergic, anticonvulsant, and anti-inflammatory activities [6-9]. Almost every class of imidazole derivatives has been used in different reactions to produce an enormous number of heterocycles [11-15]. The emergence of powerful and elegant imidazole chemistry has stimulated major advances in chemotherapeutic agents of remarkable significance in medicine, biology, and pharmacy. Besides this, it is also reported [15,16] that imidazole compounds are among the effective antifungal agents. Considering the importance of both the quinoline and imidazole moieties, and extending our previous work [17,18], we planned to synthesize imidazole derivatives from 2-chloro-7-methyl-3-formylquinoline. The whole synthesis route is shown in Scheme 1.

Material and Methods Acetanilide and its derivatives were purified by crystallization from R-spirit. The DMF and phosphorus oxychloride used were of analytical reagent grade. All of the organic solvents and the hippuric acid, acetic anhydride, and sodium acetate used were of analytical reagent grade. Eight diamines were used after recrystallization. 2-Chloro-7-methyl-3-formylquinoline was synthesized by the Vilsmeier-Haack reaction following the procedure reported in the literature [16,19]. Melting points were measured in an open capillary tube and are uncorrected. Elemental analysis was obtained using a Perkin Elmer (USA) 2400 series II CHN analyser. In addition, the nitrogen content of all the imidazoles was estimated by Kjeldahl's method [20]. IR spectra were recorded on a NICOLET-400 D spectrophotometer, and ¹H NMR spectra were recorded in CDCl₃/DMSO-d₆ at 400 MHz on an FT-NMR R-1500 spectrometer (chemical shifts in δ ppm relative to TMS as an internal standard). Reactions were monitored by TLC, using silica gel as the adsorbent and ethyl acetate-hexane in different ratios as the eluent.

Dimethylformamide (9.6 ml, 0.125 mol) at 0 °C was taken in a three-necked flask equipped with a drying tube, and phosphorus oxychloride (32.2 ml, 0.35 mol) was added dropwise under continuous stirring. To this solution, 3-methylacetanilide (0.05 mol) was slowly added with continuous stirring. After five minutes, the solution was heated under reflux for 1 hour at 80-90 °C. The reaction mixture was poured into ice water (300 ml) and stirred for a further 30 min at 0-10 °C. The 2-chloro-7-methyl-3-formylquinoline so obtained was filtered and washed with water. It was crystallized from R-spirit. The yield was 82%.
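As a worked example of the quoted 82% yield (a sketch; the product's molecular formula C₁₁H₈ClNO and the 1:1 stoichiometry are our own bookkeeping assumptions, not stated in the text):

    # Worked yield estimate for the Vilsmeier-Haack step above.
    # Assumption (ours, not from the text): product formula C11H8ClNO.
    MW = 11 * 12.011 + 8 * 1.008 + 35.453 + 14.007 + 15.999  # ~205.6 g/mol
    moles_limiting = 0.05      # mol of 3-methylacetanilide, 1:1 assumed
    theoretical_g = moles_limiting * MW
    actual_g = 0.82 * theoretical_g                          # 82% yield
    print(f"theoretical {theoretical_g:.1f} g, isolated {actual_g:.1f} g")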
Scheme 1 (labels recovered from the graphic): hippuric acid; 2-chloro-7-methyl-3-formylquinoline; IMDT.

Antimicrobial assay: The antimicrobial activities were determined using the agar-cup method by measuring the zone of inhibition in mm. All newly synthesized compounds were screened in vitro for their antibacterial activity against Gram-positive species (Bacillus subtilis, Bacillus megaterium) and Gram-negative species (Escherichia coli, Pseudomonas aeruginosa), while antifungal activity was tested against Aspergillus niger and C. albicans at a concentration of 75 μg/ml. Streptomycin was used as the standard drug for antibacterial screening, Imidil was used as the standard drug for antifungal screening, and the solvent DMSO was used as a control. Each experiment was performed in triplicate and the average reading was taken. The results are summarized in Table-3.

The test was performed using the agar cup-borer method, with some modifications, using Streptomycin and Imidil as references for the bacterial and fungal cultures, respectively [27]. A test tube containing sterile melted top agar (1.5%), previously cooled to room temperature, with 0.2 ml of suspension of the test culture, was mixed thoroughly and poured into a petri dish containing sterile base agar medium (autoclaved at 121 °C for 15 min), then allowed to solidify. The cup borer was sterilized by dipping it into absolute ethanol and flaming it, then allowing it to cool. With the help of the sterile cup borer, three cups were cut in the agar plate and injected with 0.1 ml of test solution, 0.1 ml of standard solution, and 0.1 ml of DMSO solvent, respectively. The plates were then allowed to diffuse for 20 min in a refrigerator at 4-5 °C. The plates were then incubated in an upright position at 37 °C for 24 hrs. After incubation, the relative susceptibility of the microorganisms to the potential antimicrobial agent is demonstrated by a clear zone of growth inhibition around the cup. The inhibition zone caused by the various compounds on the microorganisms was measured, and the activity was rated on the basis of the size of the inhibition zone.
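The final rating step could be scripted as below; this is a hypothetical sketch, and the zone-size cut-offs are illustrative placeholders, since the text does not give the rating scale:

    def rate_activity(zone_mm):
        # Cut-offs are illustrative only (not from the study).
        if zone_mm >= 20:
            return "high"
        if zone_mm >= 12:
            return "moderate"
        if zone_mm > 6:
            return "low"
        return "inactive"

    # Averaging a (made-up) triplicate reading, as described above.
    readings = [14, 15, 13]
    print(rate_activity(sum(readings) / len(readings)))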
Results and Discussion The elemental analysis of the prepared compounds is given in Table-1. In all the imidazole derivatives the vinylic proton is seen around δ 6 ppm. The aromatic protons are assigned to resonances in the range δ 7.00 to 8.2 ppm. The resonance due to the -NH₂ moiety is attributed to the peak in the range 6.5 to 6.8 ppm. The resonance due to -CH₃ is observed at 2-2.2 ppm. The compounds containing 4,4'-diaminodiphenylmethane have a >CH₂ moiety attached to the benzene rings, and this >CH₂ is highly deshielded; this is reflected in its proton NMR signal at 3.69 ppm. The ¹³C NMR peaks are quite interesting in all these imidazole derivatives: the peak around 165 ppm is attributed to the carbon of >C=O (Table-2). In all the compounds the peak at 158 ppm is assigned to the Cl-C=N moiety, and the peak at 148 ppm is likely due to the >C=N moiety. The peaks in the region 110-130 ppm are attributed to the aromatic rings. The compounds containing 4,4'-diaminodiphenylmethane show a peak at 40 ppm, which is due to the >CH₂ group attached to both rings.

Practically in all the compounds the -NH₂ asymmetric stretching vibration is assigned to a peak around 3400 cm⁻¹, while a peak around 3250 cm⁻¹ is attributed to the -NH₂ symmetric stretching vibration. The =CH stretching vibration of the vinyl moiety is attributed to the absorption at ~3040 cm⁻¹. The aromatic C-H stretching frequency, as expected, is observed at around ~3010 cm⁻¹. A strong absorption at ~1700 cm⁻¹ is found in the majority of the compounds; this absorption has contributions from stretching of >C=O and >C=N. The strong absorption at 1650 cm⁻¹ has contributions from ν(C=N), ν(C=C), and bending of -NH₂. In most of the compounds the C-C stretching of the aromatic ring is around 1540 cm⁻¹. A fairly strong absorption at ~1300 cm⁻¹ is assigned to C-N stretching. The strong absorption in the region 810-840 cm⁻¹ is due to C-H out-of-plane bending of the aromatic ring. The C-Cl stretching is attributed to the strong absorption in the region 740-720 cm⁻¹. Compounds containing the O=S=O moiety show a strong absorption in the region 1050-1200 cm⁻¹ due to O=S=O stretching. The C-H bending of the vinyl moiety is seen as a strong band around 800 cm⁻¹ in all the compounds. The compounds containing a -CH₃ group show peaks due to asymmetric and symmetric bending of the -CH₃ group at 1475 and 1375 cm⁻¹, respectively, and the absorption at ~550 cm⁻¹ in the bromo compounds is assigned to C-Br stretching.

The synthesized compounds were screened in vitro for antimicrobial activity. From the data presented in Table-3, it is clear that, out of the 8 imidazole compounds, IMMD, IMBD, and IMDM exhibited moderate inhibition against Gram-negative bacterial species, especially against Escherichia coli, while IMBD, IMDM, and IMOTD showed maximum activity against most Gram-negative organisms. Against Gram-positive organisms almost all compounds of the series exhibited maximum inhibition; in particular, IMPD and IMBD showed the highest inhibition against B. megaterium, while IMMD and IMDT showed good inhibition against the fungal organisms, especially C. albicans. The other compounds exhibited moderate to low inhibition against the fungal species, but IMMD showed good inhibition.
1,877.4
2012-01-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Making Every Dollar Count: Local Government Expenditures and Welfare The paper aims at examining the relationship between the allocation of funds at the local government level and the economic well-being of citizens. The results of this study help shed light on ways to get the best out of every dollar spent by local governments. Three empirical proxy measures of citizen well-being were used in the estimation of three different panel data models. Results suggest that some types of government expenditure can have a positive influence on citizen well-being. The analysis provides insights into how economic development policies may be conceived by local governments to ensure the sustained economic prosperity of their citizens.

Introduction Government spending and taxation can have significant effects on the economy and on the lives of individuals. Government carries out a number of important economic functions, including correcting inefficiencies in the allocation of goods and services by levying taxes or providing subsidies to correct externalities. Governments also provide public goods such as national defense, police protection, and infrastructure. Governments can also have an economic stabilizing function to reduce unemployment or inflation [1]. Many politicians advocate increases in government spending, particularly during recessions, to stimulate economic growth. Other politicians oppose increases in government spending because they contribute to the government deficit in the long run. Empirical evidence is mixed; studies suggest that some types of public social welfare expenditures can improve economic well-being. Hungerford [1] finds that countries with higher public social welfare expenditures relative to GDP have lower relative poverty rates. Gupta, Verhoeven, and Tiongson [2] use cross-country data for 44 countries to assess the relationship between public spending on health care and the health status of the poor. Their results suggest that increased public spending on health alone will not be sufficient to significantly improve health status. Kenworthy [3] studies the effects of social-welfare policy extensiveness on poverty rates across fifteen industrialized nations over the period 1960-91, using both absolute and relative measures of poverty. The results strongly support the conventional view that social-welfare programs reduce poverty. Fan, Hazell, and Thorat [4] develop a simultaneous-equation model using state-level data for 1970-1993 to estimate the direct and indirect effects of different types of government expenditure on rural poverty and productivity growth in India. They find that investment in rural roads, agricultural research, and education reduces poverty, but that health spending has only a modest impact on growth and poverty.
This paper examines the consequences of local government spending, especially public social welfare expenditures, on the well-being of citizens in the US. Local government is defined to encompass counties, municipalities, towns, and townships, as well as special-purpose governments such as water, fire, and library district governments and independent school district governments. Using the US Census Bureau's Census of Governments data [5], it was found that there were a total of 39,044 county and sub-county governments in the United States and 50,432 special-purpose governments, giving a total of 89,476. This gives a sense of the number of government entities under consideration in the rest of the paper. For the purposes of this paper, however, the local governments have been aggregated within states and assessed over time, from 1990 to 2005.

Measures of Citizens' Well-Being Three empirical proxy measures of citizens' well-being are used in this paper: poverty rate, median income, and disposable income. These measures are typical in similar studies [6-8]. The poverty rate within a region (e.g., state, county, or municipality) is a common measure used to define citizen well-being. To determine whether a family is living in poverty, the US Census Bureau compares the family's total income against a set of money-income thresholds that vary by family size and composition. These thresholds are based on the ones designed by Mollie Orshansky in the 1960s. If the total income is less than the appropriate threshold, then the family and all of the individuals within it are considered to be in poverty. Although the money-income thresholds do not vary across the nation, they are modified each year to account for inflation using the Consumer Price Index for All Urban Consumers (CPI-U) [9].

In the US, money income, market income, post-social-insurance income, and disposable income are four measures of income that can be used to estimate economic well-being and the impact of taxes and governmental transfers. Money income is used in the official definition of poverty, and the other income measures differ from each other based on the inclusion or exclusion of certain monetary components. As a result, the income distribution changes depending on the income measure used [10]. Money income includes all money income earned or received by individuals who are 15 years or older, before tax deductions or other expenses. This measure does not include capital gains, lump-sum payments, or non-cash benefits (e.g., payments from insurance companies, worker's compensation, or pension plans). Market income consists of all resources available to families based on market activities. It is similar to money income, but government cash transfers are excluded and imputed work expenses are deducted; however, imputed net realized capital gains and imputed rental income are included in this definition. This measure can be used as a reference point when investigating the effect of government activity on income and poverty estimates [10].
Post-social-insurance income reflects governmental programs that affect everyone, not only those created solely for people with low income. This measure is similar to market income except that non-means-tested government transfers are included (e.g., social security, unemployment compensation, and worker's compensation). Therefore, households that receive income from at least one of the non-means-tested government transfers have a higher median income under the post-social-insurance income measure than under market income [10].

The final income measure is disposable income, which represents the net income households have available to meet living expenses. According to Smeeding and Sandstorm [7], the best income definition for determining poverty and poverty rates is disposable cash or near-cash income. This measure includes money income, imputed net realized capital gains, imputed rental income, and the value of non-cash transfers (e.g., food stamps, subsidized housing, and school lunch programs). Excluded from this measure are imputed work expenses, federal payroll taxes, federal and state income taxes, and property taxes for owner-occupied homes. Of the four income measures, disposable income has the lowest median income. A comparison between post-social-insurance income and disposable income highlights the net impact of means-tested government transfers and taxes. By comparing market income and disposable income, the net impact of government transfers and taxes on income and poverty estimates can be determined [10].

The National Academy of Sciences (NAS) advocates a revision of the methods used by the US Census Bureau to measure poverty. One criticism of the official poverty definition is that pre-tax income (i.e., the money-income measure) is used to determine who is in poverty; therefore, the effects of taxes, non-cash benefits, and work-related and medical expenses on people's well-being are not taken into account. Additionally, the effect of policy changes on people who are considered to be in poverty cannot be observed. Another criticism is that the official poverty definition does not reflect variation in costs across the nation. The NAS believes that the official thresholds do not accurately represent the increase in expenses, or the economies of scale, that occur with increases in family size [9]. This criticism surrounding how poverty rates are calculated in the US provides the rationale for examining alternative measures of citizen well-being in addition to the poverty rate. That is, we investigate how local government expenditure affects three different measures of citizen well-being: poverty rate, median income, and disposable income.
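The bookkeeping behind the disposable-income definition above can be written out explicitly (a minimal sketch; the component names are ours and the amounts in the example are hypothetical):

    def disposable_income(money_income, capital_gains, rental_income,
                          noncash_transfers, work_expenses, payroll_taxes,
                          income_taxes, property_taxes):
        # Included and excluded components per the definition above.
        included = money_income + capital_gains + rental_income + noncash_transfers
        excluded = work_expenses + payroll_taxes + income_taxes + property_taxes
        return included - excluded

    # Hypothetical household (annual dollars):
    print(disposable_income(52_000, 1_200, 0, 3_400, 2_000, 3_900, 4_800, 2_500))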
The foregoing discussion focuses on objective metrics of well-being. However, there is increasing consensus among sociologists and economists that the well-being of individuals cannot be described by the objective social situation alone [11]. Thus, there are calls for a more nuanced view of well-being that meshes with broader social trends placing higher value on quality of life rather than on economic success [12], and that shifts the focus within the social sciences toward recognizing the limits of revealed preferences, upon which most of consumer economics rests. These developments have caused subjective well-being to be adopted from its home domain of psychology and incorporated into economics and sociology, in attempts to understand how individuals within a community assess their own well-being. While this paper makes no attempt to incorporate these broader and richer subjective measures of well-being, they are discussed here to anchor our observations about the limitations of a macro-level analysis such as this one, as we search for the influence of local government expenditure on citizen well-being and local economic development.

Data An unbalanced panel of data collected from the US Census Bureau's annual survey of local government finances, together with state population sizes, from 1991 to 2005 for the fifty states plus the District of Columbia was used. However, local government finance data for the years 2000 and 2002 were not available, so these data points were treated as missing. The expenditure categories include education, health, transportation, public safety, environment and housing, and government administration. The well-being measures (poverty rate, median income, and disposable income) were collected directly from US Census Bureau annual data for these communities. The absolute magnitude of local government expenditures varied significantly from state to state, as did the population sizes of the states. In order to avoid scaling problems associated with the different expenditure and population sizes of the respective local governments, the expenditure categories were expressed in per capita terms: the local government expenditures in each state were divided by the population size of that state. The population data for the respective states were obtained from the Census Bureau database. Similarly, the poverty rate was expressed in percentage terms, and both the median income and the disposable income for each state were expressed as a proportion of the average values for the United States. All variables are expressed in natural logarithms. (A sketch of these transformations is given after the next paragraph.)

Government Spending Measures Education spending includes spending on colleges and other institutions but does not include agricultural extension services. Spending on health includes immunization clinics, research and education, and public health administration. Public safety spending includes police and fire protection. Transportation includes spending on highways and air transportation. Environment and housing includes spending on parks and recreation and solid waste management. Government administration includes spending on planning and zoning. For complete definitions of the spending categories, please see the US Census Bureau. All expenditure categories are assumed to have a positive effect on economic well-being.
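The per-capita, relative-to-US, and log transformations described in the Data subsection could look as follows (a hypothetical sketch; the CSV file and column names are placeholders, not from the paper, and the US average is approximated here by the yearly cross-state mean):

    import numpy as np
    import pandas as pd

    # Hypothetical panel: one row per (state, year).
    df = pd.read_csv("local_gov_panel.csv")
    spend_cols = ["education", "health", "transportation",
                  "public_safety", "env_housing", "gov_admin"]

    # Per capita expenditures.
    for c in spend_cols:
        df[c + "_pc"] = df[c] / df["population"]

    # Incomes relative to the US average for each year.
    us_avg = df.groupby("year")[["median_income", "disposable_income"]].transform("mean")
    df["median_income_rel"] = df["median_income"] / us_avg["median_income"]
    df["disp_income_rel"] = df["disposable_income"] / us_avg["disposable_income"]

    # All variables in natural logarithms, as stated above.
    log_cols = [c + "_pc" for c in spend_cols] + ["poverty_rate",
                "median_income_rel", "disp_income_rel"]
    df[log_cols] = np.log(df[log_cols])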
Methods Following the work of Wald [13], Hildreth and Houck [14], and Swamy [15,16], we use three panel-data models to assess the relationship between the measures of well-being and local government expenditure: pooled ordinary least squares (POLS), a fixed effects (FE) model, and a random effects (RE) model. The pooled ordinary least squares model is used under the assumption that there is no individual heterogeneity between the states. The fixed effects and random effects models, by contrast, account for heterogeneity among the states, as explained under each model below. The study followed a series of tests to choose the appropriate models, as shown in the results section.

Under the pooled-OLS model we have

y_it = α + β X_it + u_it,

where y_it is the dependent variable (poverty rate, median income, disposable income) of the i-th state at time t, X_it represents the independent variables, the βs are the estimated coefficients, α is the intercept, and u_it is the unobserved error term. The POLS model increases the probability of a bias occurring due to unobserved heterogeneity (u_it and X_it may be correlated). If it is believed that the error terms and the independent variables are not correlated, then using POLS can give unbiased estimates. Otherwise the bias can be addressed by decomposing the error term into two components,

u_it = μ_i + v_it,

where μ_i is a state-specific error and v_it represents an idiosyncratic error. Since the state-specific error does not vary over time, every state has a fixed value of this latent variable. Unlike the state-specific error, the idiosyncratic error v_it varies over states and time, and it should satisfy the assumptions for standard OLS error terms. In the fixed effects (FE) estimation, after some manipulation we obtain the within-transformed equation

(y_it − ȳ_i) = β (X_it − X̄_i) + (v_it − v̄_i),

which can be estimated by POLS or the FE estimator. Time-constant unobserved heterogeneity is then not a problem, allowing the FE estimator to identify the true causal effect. In the random effects (RE) estimation, we assume that the μ_i are random (independent and identically distributed random effects) and that Cov(X_it, μ_i) = 0. We run a set of regressions using each of the above models. The regressions use expenditures on education, health, transportation, public safety, environment and housing, and government administration as the explanatory variables. The dependent variables are poverty rate, median income, and disposable income.

Results Results from the POLS model, the fixed effects (FE) model, and the random effects (RE) model are displayed in Table 1. Each model was analyzed with each measure of economic well-being. Fixed effects are tested by the (incremental) F test, while random effects are examined by the Lagrange Multiplier (LM) test [17]. If the null hypothesis (of the F test or LM test) is not rejected, the POLS regression is favored over the FE or RE models, respectively. When poverty rate and disposable income are the dependent variables, the results of the F test and LM test indicate that the FE and RE models are preferred to POLS estimation. However, when median income was used as the dependent variable, the tests failed to reject the null hypotheses, implying that for this model POLS is preferred to both the FE and RE models.
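The three estimators could be run as follows (a minimal sketch using the linearmodels package; the data frame and column names continue the hypothetical example above and are not from the paper):

    import statsmodels.api as sm
    from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

    # Entity = state, time = year.
    panel = df.set_index(["state", "year"])
    y = panel["poverty_rate"]                       # already in logs
    X = panel[[c + "_pc" for c in spend_cols]]      # log per-capita spending

    pols = PooledOLS(y, sm.add_constant(X)).fit()
    fe = PanelOLS(y, X, entity_effects=True).fit()  # state fixed effects
    re = RandomEffects(y, sm.add_constant(X)).fit()
    print(pols.params, fe.params, re.params, sep="\n\n")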
When either the FE or the RE model is preferred to the POLS model, the Hausman specification test [18] is applied to compare the FE and RE models. The Hausman specification test compares FE versus RE under the null hypothesis that the individual effects are uncorrelated with the other regressors in the model [18]. If they are correlated (H₀ is rejected), the random effects model produces biased estimators, violating one of the Gauss-Markov assumptions, so the FE model is preferred. Examining whether the dependent variable (i.e., poverty rate, median income, disposable income) is location dependent is crucial to identifying the true causal effect. The FE model controls for state heterogeneity, which is assumed to be time constant, and helps identify the relationship between the dependent and explanatory variables. When the poverty rate and disposable income were used as dependent variables, we used the Hausman test to determine whether there was a systematic difference in coefficients between the FE and RE models. The results indicate that there is a significant difference between the two models, suggesting that there is some covariance between the μ_i and the X_it, implying that FE is preferred to RE. Some of the results, especially the signs of the coefficients, were not consistent across the different models. For example, in the upper panel of Table 2, the sign of the transportation variable is positive for the POLS and RE models but negative for the FE model. So, in the following discussion, we use only the variables whose signs were consistent across all the models.

In the POLS model where median income is used as the dependent variable, most of the variables are statistically significant; public safety and government administration are not. Using the POLS model to predict the effects of the government expenditure measures on median income, a 1% increase in expenditure on education would cause median income to increase by 0.08%. Expenditures on environment and housing (e.g., parks and recreation, and housing and community development) and on public safety cause the largest increases in median income as a proportion of the average US median income, compared to the other expenditure categories. Using the FE model to analyze the relationship between local government expenditure and disposable income, expenditures on education and on environment and housing have positive effects on disposable income, while government administration expenditures negatively affect disposable income. Using the fixed effects model to predict disposable income, a 1% increase in expenditure on education would cause disposable income to increase by 0.03%.

Conclusions In this study, we focused on the relationship between local government expenditure and citizen well-being. We hypothesized that the allocation of local government expenditure influences the wealth status of citizens. Three panel data models (pooled-OLS, FE, and RE), each with three different dependent variables (poverty rate, median income, and disposable income), were estimated to determine the impact of government expenditure on citizen well-being. Most of the variables in the regressions are statistically significant, implying that government expenditures do affect citizen well-being. The signs of the education coefficients unambiguously matched expectations: expenditure on education has a statistically significant impact on well-being regardless of the model.
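The Hausman statistic can also be computed by hand from the two fitted models (a sketch continuing the example above, restricted to the slope coefficients the FE and RE fits share; in practice the difference of covariance matrices is not guaranteed to be positive definite):

    import numpy as np
    from scipy import stats

    # Restrict to the coefficients common to both fits.
    common = fe.params.index.intersection(re.params.index)
    diff = (fe.params[common] - re.params[common]).to_numpy()
    V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()

    H = diff @ np.linalg.inv(V) @ diff        # Hausman statistic
    p = stats.chi2.sf(H, df=len(common))      # chi-squared, k degrees of freedom
    print(f"H = {H:.2f}, p = {p:.4f}")        # small p: reject H0, prefer FE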
Depending on the objectives of the local government, policy instruments can be designed to target specific citizen well-being measures. For example, based on the results of the current study, if the priority of local governments is to reduce the poverty level, more per capita expenditure may be allocated to government administration. If local governments want to target median income, the results suggest more expenditure on environment and housing (e.g., parks and recreation, and housing and community development), which causes the largest increase in median income as a proportion of the average US median income compared to the other expenditure categories.
3,964.8
2014-01-06T00:00:00.000
[ "Economics" ]
Scanning Quantum Dot Microscopy Interactions between atomic and molecular objects are to a large extent defined by the nanoscale electrostatic potentials which these objects produce. We introduce a scanning probe technique that enables three-dimensional imaging of local electrostatic potential fields with sub-nanometer resolution. Registering single-electron charging events of a molecular quantum dot attached to the tip of a (qPlus tuning fork) atomic force microscope operated at 5 K, we quantitatively measure the quadrupole field of a single molecule and the dipole field of a single metal adatom, both adsorbed on a clean metal surface. Because of its high sensitivity, the technique can record electrostatic potentials at large distances from their sources, which above all will help to image complex samples with increased surface roughness.

The atomic structure of matter inevitably leads to local electrostatic fields in the vicinity of nanoscale objects, even if they are neutral [1]. For this reason electrostatic forces often dominate the interactions between nanostructures. In spite of their omnipresence, experimental access to such local electrostatic fields is a formidable challenge, Kelvin probe force microscopy (KPFM) being the most promising attempt to address it so far [2-4]. However, since KPFM measures the contact potential difference between surfaces, which by definition are extended objects, it inevitably involves considerable lateral averaging, especially for larger probe-to-surface distances. True three-dimensional imaging of local electrostatic fields in a broad distance range is therefore difficult with KPFM [5]. Here we introduce a scanning probe technique that provides a contact-free measurement of the electrostatic potential in all three spatial dimensions, without the drawback of distance-dependent averaging. This is possible because the method, unlike KPFM, directly probes the local electrostatic potential at a well-defined, sub-nanometer-sized spot in the junction. Besides its high spatial resolution, our technique benefits from a remarkable sensitivity that allows, e.g., the detection and quantitative evaluation of the electrostatic potential 7 nm above a single adatom on a metal surface. We image the electrostatic potential using a nanometer-sized quantum dot (QD) attached to the apex of the scanning probe tip (Fig. 1a). The tunneling barrier between the QD and the tip is sufficiently large that the electronic levels of the QD experience only weak hybridization [6]. In the experiment, the electronic levels of the QD are gated with respect to the Fermi level E_F of the tip by applying a bias voltage to the tip-surface junction (Fig. 1b) [7-11].
In this way, the charge state of the QD can be changed, e.g., if the bias voltage V applied to the junction reaches a critical value V⁻ that aligns one of the QD's occupied electronic levels with E_F, thus inducing its depopulation (Fig. 1b). With this device the measurement of a local electrostatic potential field Φ(x, y, z), caused e.g. by a surface adsorbate, is possible because the electronic levels of the QD shift in response to any perturbation of the potential at the position (x, y, z) of the QD. Although small, these shifts can be detected by their effect on the charge state, if an occupied or empty level, gated by the bias voltage, lies in the close vicinity of E_F (Fig. 1c). In essence, detecting charging events of the QD while scanning the three-dimensional half-space above the surface is the core working principle of our method, which we refer to as scanning quantum dot microscopy (SQDM). Figs. 1e-f illustrate the outcome of SQDM imaging by visualizing the effect which monolayer-thick islands of perylene tetracarboxylic dianhydride (PTCDA) adsorbed on the Ag(111) surface have on the charge state of the QD when the latter is scanned at a distance of ≈3 nm across the surface. The red contours in Figs. 1e-f mark locations where the QD changes its charge state. Note that these contours follow the shape of the standing-wave pattern (Fig. 1d) which is formed by the surface state as it is scattered by the perturbed electrostatic potential in the surface [12]. This is an initial indication that the gated QD is indeed sensitive to the electrostatic potential created by the sample in the half-space above it. In the remainder of the paper we present experimental results that unambiguously confirm this conjecture.

First we describe the structure of the junction and details of the measurement protocol. In our realization of SQDM, the role of the QD is played by a single PTCDA molecule which is connected to the tip of a commercial qPlus tuning fork [13] non-contact atomic force / scanning tunneling microscope (NC-AFM/STM) from CREATEC, operated at 5 K and in ultra-high vacuum (Fig. 2a). The molecule is attached to the tip through a single chemical bond between the outermost atom of the tip and one of the corner oxygen atoms of the PTCDA, using a well-described manipulation routine [14,15] that in brief proceeds as follows. First, an isolated PTCDA molecule adsorbed on the Ag(111) surface is approached by the silver-covered AFM/STM tip directly above one of its corner oxygen atoms. These atoms are known to show reactivity towards silver. At a tip-surface distance of about z_tip = 6.5 Å the chosen oxygen atom jumps up to establish a chemical bond to the tip. By this tip-oxygen bond the entire PTCDA molecule can be lifted off the surface. As the final bond between the molecule and the surface is broken, the attractive interaction with the surface aligns the molecule along the axis of the tip [14,15], leaving it in a configuration that is suitable for SQDM (Fig. 2a). Scanning the tip at sufficiently large distances from the surface ensures that the PTCDA QD does not change its configuration on the tip during the SQDM experiment. For a given distance between tip and sample (z_tip) the quantum dot is therefore always located at coordinate z, where d = z_tip − z is the distance of the quantum dot from the tip apex (Fig. 2a). Since electrostatic potential measurements in SQDM are based on changes of the QD's electron occupation, a sensitive detection of charging events is crucial.
In the present realization of SQDM this is accomplished by registering abrupt steps in the tip-sample force which always accompany a change of the QD's charge state [7-9,11]. In the qPlus NC-AFM, these steps show up as sharp dips in the frequency-shift curve Δf(V) [13,16] (Fig. 2b). The positions of the sharp Δf features on the bias-voltage axis are the principal signal which is evaluated in SQDM. A slight complication in the measurement of local electrostatic potential fields by SQDM arises from the fact that topographic features of the sample surface can also lead to changes in the QD's charge state. Since nanostructures which produce local electrostatic potential fields usually have topographic signatures, both influences have to be disentangled from one another. As it turns out, the simultaneous analysis of two levels of the QD offers a straightforward possibility to achieve this. Therefore, in the experiments to be discussed below, we access two electronic levels of the molecular QD: one empty, the other occupied by one electron. Correspondingly, the data in Fig. 2c, measured above the bare Ag(111) surface, exhibit two charging events: gating the occupied (empty) level to E_F reduces (increases) the charge by one electron at V⁻ (V⁺). We note in passing that Fig. 2c also reveals that both charging events appear on top of the well-known parabola which originates from the attractive interaction between the opposing electrodes of the biased tip-surface junction [3].

The fact that topographic signatures can charge or discharge the QD if the tip is scanned across the surface in constant-height mode, i.e. at a fixed z (see Fig. 2a), is naturally explained by changes of the junction capacitance with the distance between tip and sample [8]. The effect is illustrated in Fig. 2c by the observation that the absolute values |V⁺| and |V⁻| increase with the distance z between the tip and the bare Ag(111) surface. Here we describe this behaviour in terms of a quantity α that we call the gating efficiency. A smaller value of α implies that a larger bias is needed to align any given PTCDA level with E_F. A larger z thus goes along with a smaller gating efficiency. In fact, at a fixed z, the quantity ΔV ≡ V⁺ − V⁻ is inversely proportional to the gating efficiency α: α = C/ΔV, where C is a constant [19]. In contrast to α, a local electrostatic potential Φ at the position of the QD shifts V⁻ and V⁺ rigidly on the voltage axis. For a fixed z, the separation of Φ* from topography, dielectric contrast, and all other factors that influence the gating efficiency can therefore be achieved straightforwardly by evaluating V⁻/ΔV, since Φ* = −C V⁻/ΔV − Φ*₀, where Φ*₀ is a constant. Note that Φ* is the local electrostatic potential created by the nanostructure in the presence of the metallic tip. Its relation to the electrostatic potential Φ of the nanostructure in the absence of the tip will be discussed below.

We are now in the position to demonstrate the power of SQDM by mapping out the local electrostatic potential field of a nanostructure. As the latter we choose an individual PTCDA molecule adsorbed on the Ag(111) surface. Its field is expected to contain two major contributions: a quadrupolar field that is produced by the internal charge distribution of the molecule (negative partial charges at the oxygen atoms, see Fig. 3a) and a dipolar field due to the well-known electron transfer from Ag(111) to PTCDA upon adsorption [17].
The experimental quantities ΔV(x, y), inversely proportional to the gating efficiency, and V⁻/ΔV(x, y), proportional to the electrostatic potential (up to a constant offset), are displayed in Figs. 3a-f for z_tip = 24 Å, 28 Å, 36 Å. A visual inspection of the images in Figs. 3d-f immediately indicates their close resemblance to the molecular quadrupole field. This is reinforced by a comparison to the results of a microelectrostatic simulation, in which the internal charge distribution of PTCDA, as calculated by density functional theory [18], its screening by the metal, and a homogeneous charge transfer from the metal to the molecule have been taken into account. Since the precise amount of transferred charge is not known, it was treated as a fit parameter (q = −0.09 e). The result is shown in Figs. 3g-i, which show excellent qualitative agreement with the corresponding experimental images in Figs. 3d-f. Remarkably, if we compare the distances from the surface at which the model potential had to be calculated (z = 16 Å, 22 Å, 28 Å) in order to reproduce the experimental images at z_tip = 24 Å, 28 Å, 36 Å, we obtain a systematic difference of d = (7 ± 1) Å (see Fig. 2a). This shows that the electrostatic potential is probed at a point approximately 7 Å below the tip apex, hence precisely at the position of the QD, as expected from the proposed mechanism of SQDM.

We then choose the well-known Smoluchowski dipole [1], created here by a single metal adatom on a metal surface (Figs. 4a-c), to demonstrate fully quantitative three-dimensional electrostatic potential imaging. For the latter it must be taken into account that the constants C and Φ*₀ are z-dependent. This can be taken care of by performing V₀⁻(z) and V₀⁺(z) reference measurements for a fixed set of heights z at a location where the local electrostatic potential Φ is taken to be zero, e.g. above the bare Ag(111) surface. In this way Φ* can be evaluated at all locations (x, y, z) from

Φ*(x, y, z) = −α₀(z) [ (V⁻(x, y, z)/ΔV(x, y, z)) ΔV₀(z) − V₀⁻(z) ] [19],

where α₀(z) is the z-dependent gating efficiency when the QD-tip is above the bare Ag(111) surface. In the simplest case, α₀(z) = d/(z + d), if a plate-capacitor geometry is assumed. Fig. 4d shows the experimental electrostatic potential vertically above the adatom, evaluated by the above formula, in comparison to the result of a DFT calculation [20].

FIG. 4 (caption, in part): For the experimental data, z is the distance between the point inside the QD at which the electrostatic potential is measured and the surface. For DFT, z is the distance from the surface at which the potential of the adatom was calculated. Inset: constant-height raw Δf image recorded at z = 6.3 nm with an applied bias of V = 9.6 V, close to V⁺ for this z.

Before making the comparison one should note that DFT yields the electrostatic potential Φ in the absence of the tip. It is also clear that the grounded tip screens the local electrostatic potential Φ to a smaller value Φ*. Taking this screening into account [23], we obtain an experimental Φ that comes to about 70% of the DFT values. We consider the observed agreement in magnitude a remarkable verification of the SQDM performance in quantitative mapping of the electrostatic potential. The remaining small discrepancy between theory and experiment can be explained by an effective increase of α (in comparison to the plate-capacitor case) caused by the strong curvature of the sharp metal tip used in the experiment.
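In code, the evaluation formula above amounts to a few array operations (a minimal sketch; the maps are hypothetical NumPy arrays, and d = 7 Å is taken from the tip-QD distance quoted above):

    import numpy as np

    def phi_star(V_minus, V_plus, V0_minus, V0_plus, z, d=7e-10):
        # V_minus/V_plus: charging-voltage maps at height z (metres);
        # V0_minus/V0_plus: reference values over bare Ag(111) at the
        # same z; plate-capacitor gating alpha0 = d/(z + d) assumed.
        alpha0 = d / (z + d)
        dV = V_plus - V_minus
        dV0 = V0_plus - V0_minus
        return -alpha0 * ((V_minus / dV) * dV0 - V0_minus)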
We note that this influence can be quantified by measuring a structure whose electrostatic potential is known, and then transferred to any other experiment with the same tip. Finally, we comment on the sensitivity of our electrostatic potential field measurement, again using the adatom as an example. Using the fact that the experimentally determined Φ(z) closely follows the 1/z² behavior that is expected for a point dipole (see Fig. 4d), we find that Φ(z) reaches the sensitivity limit defined by our bias-voltage measurement resolution of ∼1 mV at a distance of about z = 7 nm from the surface. To confirm this estimate, the inset in Fig. 4d shows that the Smoluchowski dipole field of the adatom is indeed still detectable at a distance of z = 6.3 nm from the surface.

In conclusion, we have reported a scanning probe technique that is able to provide truly three-dimensional, so far elusive, maps of the electrostatic potential field with nanometer resolution. The current realization of scanning quantum dot microscopy (SQDM) is based on a single molecular quantum dot attached to the tip of a scanning probe microscope. We demonstrate the power of SQDM by measuring the electrostatic potential fields of the Smoluchowski dipole created by a single silver adatom and of the quadrupole moment of a single PTCDA molecule. Since the quantum dot serves as a sensor of the electrostatic potential which at the same time transduces this signal into a charging event, the technique is a particularly fascinating variant of the general sensor/transducer concept for scanning probe microscopy introduced earlier [24-27]. Here, however, the transduction involves electronic rather than the mechanical degrees of freedom that were utilized in previous work. As a consequence of its high sensitivity, SQDM can be applied to rough and high-aspect-ratio samples, opening the possibility to study, e.g., semiconductor devices and biological samples. Moreover, the combination of high sensitivity and spatial resolution suggests the possibility of reading nanoscale electric memory cells entirely contact- and current-free. We note in passing that besides AFM, other detection schemes for the charging events are conceivable. Finally, we stress that a molecular quantum dot, although particularly attractive, does not exhaust all possibilities. SQDM probes with nano-fabricated quantum dots on standard silicon AFM cantilevers can be envisioned, extending the applicability beyond ultra-high vacuum and cryogenic temperatures.

[20] The DFT calculation was carried out using the SIESTA package [21,22]. The exchange-correlation part was treated within the local-density approximation, which gives realistic results for strongly bound adsorbates like the Ag adatom considered here. The adatom adsorbs at the hollow site with a height of 2.11 Å, which is 0.25 Å closer to the surface than its ideal position would be. In order to calculate the electrostatic potential in real space, we decompose the charge density into two parts, one due to the clean surface and one due to the adatom. The main contribution of the Ag adatom originates from a region of about 4 Å around the adatom. Since the potential of the clean neutral surface decays exponentially with increasing height, only the charge-density contribution due to the adatom causes a measurable electrostatic potential at the position of the QD. This potential is evaluated from the charge density by solving Poisson's equation in real space, without any supercell periodicity.
In this concept the quantum dot also acts as a filter which lets only those interaction potentials pass which lead to a gating of the quantum dot (importantly, this rules out interference from non-electrostatic forces which hamper attempts to measure electrostatic forces directly with AFM, as well as interference from the electrostatic interaction with distant parts of the tip, thus guaranteeing a superior spatial resolution), and as an amplifier which, as long as these potential gradients lead to a 'charging event contrast', allows the detection of electrostatic potential gradients whose corresponding forces may be undetectable by direct AFM. The energies of the two charge states relative to the neutral dot can be written as

E(N − 1) = E_hole + αeV + eΦ*,   (S1)
E(N + 1) = E_el − αeV − eΦ*,   (S2)

where E_hole (E_el) is the energy needed to create a hole (electron) in the quantum dot when no bias voltage V is applied to the junction. ±αeV is the energy associated with the position of the hole or electron in the electrostatic potential created by the bias voltage V. α is the gating efficiency, which determines what fraction of the bias voltage V drops between the tip and the quantum dot. Finally, Φ* is an additional electrostatic potential present at the position of the quantum dot, created, e.g., by a nanostructure in the vicinity. In our experiment Φ* is the measured quantity. The charging conditions can be written as E(N − 1) = 0 and E(N + 1) = 0. This leads to the pair of equations

E_hole + αeV⁻ + eΦ* = 0,   (S3)
E_el − αeV⁺ − eΦ* = 0,   (S4)

in which V⁺ and V⁻ are the charging voltages measured in the experiment (cf. main text). From eqs. S3 and S4 we obtain

α = (E_hole + E_el)/(eΔV),   (S5)
Φ* = −[(E_hole + E_el)/e] V⁻/ΔV − E_hole/e.   (S6)

Assuming that neither E_hole nor E_el changes when the tip is scanned at constant height (i.e. fixed z) across the surface, eqs. S5 and S6 show that from the measured V⁺(x, y), V⁻(x, y) at a given z we can obtain maps of the gating efficiency α(x, y) and the potential Φ*(x, y) at this z, up to a scaling factor and an offset. In Figs. 3a-c of the main paper we plot the measured ΔV, related to α⁻¹, and in Figs. 3d-f of the main paper we plot the measured dimensionless quantity V⁻/ΔV, related to Φ*.

REFERENCE MEASUREMENT AT CONSTANT z

The unknown scaling factor (E_hole + E_el)/e (appearing in the main text as C) and offset E_hole/e (appearing in the main text as Φ*₀) in eqs. S5 and S6 can be eliminated by a reference measurement at a point (x₀, y₀) at which the local electrostatic potential Φ* is zero, e.g. above the bare Ag(111) surface. It is important that this reference measurement is carried out at the same z at which α(x, y) and Φ*(x, y) are further evaluated. If the local electrostatic potential Φ* is zero, eqs. S3 and S4 become

E_hole + α₀eV₀⁻ = 0,   (S7)
E_el − α₀eV₀⁺ = 0,   (S8)

where α₀ = α(x₀, y₀), V₀⁻ = V⁻(x₀, y₀), and V₀⁺ = V⁺(x₀, y₀), determined at the chosen fixed z. Using eqs. S7 and S8, eqs. S5 and S6 become

α(x, y) = α₀ ΔV₀/ΔV(x, y)   (S9)

and

Φ*(x, y) = −α₀ [ (V⁻(x, y)/ΔV(x, y)) ΔV₀ − V₀⁻ ].   (S10)

According to eqs. S9 and S10, both the gating efficiency α and the potential Φ* can be fully expressed in terms of measurable quantities, up to a common scaling factor α₀.
4,855.2
2015-03-26T00:00:00.000
[ "Physics" ]
Pairing state at an interface of Sr$_2$RuO$_4$: parity-mixing, restored time-reversal symmetry, and topological superconductivity We investigate pairing states realized at the (001) interface of a spin-triplet superconductor Sr$_2$RuO$_4$ on the basis of microscopic calculations. Because of a Rashba-type spin-orbit interaction induced at the interface, strong parity-mixing of Cooper pairs between a spin-singlet state and a spin-triplet state occurs in this system. There are also strong inter-band pair correlations between the spin-orbit split bands, in spite of the considerably large spin-orbit splitting. This is due to frustration between the spin-orbit interaction and pairing interactions. In this pairing state, time-reversal symmetry is restored, in contrast to the bulk Sr$_2$RuO$_4$ which is believed to be a chiral $p+ip$ superconductor with broken time-reversal symmetry. It is demonstrated that, because of these features, the pairing state at the interface is a promising candidate for the recently proposed time-reversal invariant topological superconductor.

Introduction In unconventional superconductors, the BCS order parameter possesses internal degrees of freedom, which give rise to rich physics, as explored for helium-3 [1], Sr$_2$RuO$_4$ [2,3], and heavy-fermion superconductors [4,5]. For Sr$_2$RuO$_4$, the possible realization of a chiral $p+ip$ pairing state is suggested by several experimental and theoretical studies [3,6,7]. In this p-wave pairing state, time-reversal symmetry is broken by orbital degrees of freedom. Because of this feature, the chiral $p+ip$ superconductor bears some similarities to the quantum-Hall-effect (QHE) state [8]. For instance, in a chiral $p+ip$ superconductor with open boundaries, a gapless edge mode propagating in only one direction appears at the boundary edges, in analogy with a chiral edge state in the QHE state. This similarity ultimately stems from the realization of a topological state in both of these quantum condensed phases. A topological state is a novel class of quantum ground state which is characterized not by conventional long-range order, but by a topologically nontrivial structure of the Hilbert space. In a topological state, there is a bulk excitation energy gap which ensures the stability of this state, and, as mentioned above, there are also gapless edge states which play an important role in transport phenomena. In the case of broken time-reversal symmetry, such as the QHE state and the chiral $p+ip$ superconductors, the topological structure is associated with the existence of a nonzero topological number, i.e. the Chern number [9]. Recently, another class of topological state was theoretically proposed for band insulators [10,11,12] and experimentally observed [13]. This topological state, which is called the Z$_2$ topological insulator, possesses time-reversal symmetry, in contrast to the above-mentioned topological states without time-reversal symmetry, and is characterized by the existence of two counter-propagating gapless edge modes, which are associated with a Kramers doublet. These gapless edge modes give rise to the quantum spin Hall effect, which has recently attracted much interest in connection with possible applications to spintronics. As there is similarity between the chiral $p+ip$ superconductors and the QHE state, there is parallelism between the Z$_2$ topological insulator and noncentrosymmetric p-wave superconductors [14,15,16,17,18,19].
In the past few years, many classes of noncentrosymmetric superconductors (NCSC), the crystal structures of which lack inversion symmetry, have been discovered [20,21,22,23,24,25]. Some experimental and theoretical studies suggest that p-wave pairing states may be realized in certain NCSC systems such as CePt3Si and Li2Pt3B [26,27,28]. However, unfortunately, these NCSC are not suitable for the realization of the Z_2 topological phase, because their superconducting gaps possess nodes [29,27], from which gapless quasiparticles in the bulk appear and destabilize the topological state. In this paper, we investigate a possible realization of Z_2 topological superconductivity at an interface of Sr2RuO4. We consider the (001) interface, at which a Rashba-type spin-orbit (SO) interaction breaking inversion symmetry may be induced [30]. We can also consider a setup in which a thin film of Sr2RuO4 is fabricated on a substrate with a bias potential applied perpendicular to the (001) interface, which controls the strength of the Rashba SO interaction. Apart from the exploration of Z_2 topological superconductivity, such a system is interesting in that it is suitable for a systematic investigation of the effect of parity-mixing of pairing states raised by broken inversion symmetry [31,32]. For the realization of substantial parity-mixing of Cooper pairs, the existence of attractive interactions in both spin-singlet and spin-triplet channels is crucially important. Sr2RuO4 is a good candidate for the realization of such a situation, because, according to microscopic analyses of the mechanism of superconductivity in this system, both the p-wave channel and the d-wave channel enjoy substantially strong attractive interactions [33,34]. It is expected that the addition of the asymmetric SO interaction to this system gives rise to a strong admixture of spin-singlet pairs and spin-triplet pairs. In general, the structure of the parity-mixed Cooper pairs is determined by competition between the asymmetric SO interaction and the pairing interactions in each channel [31,35,36,37]. When the asymmetric SO interaction is dominant, the structure of the d-vector for the spin-triplet component is mainly constrained by the SO interaction so as to suppress pairings between the two SO split bands, which are unfavorable when the SO splitting is much larger than the superconducting gap. However, when the dominant pairing interaction is not compatible with the symmetry of the asymmetric SO interaction, the direction of the d-vector does not minimize the energy cost due to the asymmetric SO interaction, yielding inter-band pairings between the SO split bands. For the case of Sr2RuO4, the dominant pairing interaction exists in the p-wave channel, and the d-vector is perpendicular to the xy-plane because of the bulk SO interaction of the d-electron orbitals [3,33,38,39]. In this paper, we consider the case in which the asymmetric SO interaction at the interface is stronger than the bulk SO interaction, and examine how the chiral p + ip state of the bulk is affected by the asymmetric SO interaction and which pairing symmetry is most stabilized at the interface. This is another purpose of the current paper. Our main results are as follows. As the asymmetric SO interaction becomes stronger, the d-vector of the p-wave pairing is reoriented toward directions parallel to the xy-plane.
However, there exists strong frustration between the asymmetric SO interaction and the p-wave pairing interaction because of an additional anisotropic structure of the pairing interaction. Thus, the direction of the d-vector does not fully optimize the asymmetric SO interaction, inducing a substantial amount of Cooper pairs between the two SO split bands, in spite of the SO splitting being much larger than the superconducting gap. As a result, the stable pairing state possesses p + d wave symmetry, rather than the s + p wave or d + f wave symmetry which is expected to be stabilized for Rashba superconductors when the inter-band pairs are suppressed [31,35,36]. A notable feature of the p + d wave state is that the single-particle energy has a full gap, and there are no nodal excitations for small strengths of the asymmetric SO interaction. Furthermore, since the d-vector is parallel to the xy-plane, time-reversal symmetry is restored, which is in sharp contrast to the chiral superconductivity realized in the bulk of Sr2RuO4. These two features are quite important for the realization of the Z_2 topological superconductivity mentioned above. We can show that the pairing state realized at the interface is topologically equivalent to the combined state of a p + ip state and a p − ip state, which is time-reversal invariant and supports the existence of counter-propagating gapless edge modes, which carry spin currents. The organization of this paper is as follows. In sections 2 and 3, the pairing state at the interface of Sr2RuO4 is microscopically investigated on the basis of the scenario that the pairing interaction is caused by electron correlation effects. We analyse the structure of the parity-mixed pairing gap. In section 4, exploiting the results obtained in section 3, we present a possible scenario for the realization of Z_2 topological superconductivity in this system. Discussion and summary are given in the last section. Model and formulation In this section, we introduce a low-energy effective model for superconductivity realized at an interface of Sr2RuO4, and present the theoretical framework used for the study of pairing states. Low-energy effective model In Sr2RuO4, there exist three quasi-two-dimensional orbitals, Ru 4d_{xy,xz,yz}, in the RuO2 plane, whose electrons play the most significant role in the low-energy properties. There are several theoretical proposals for the microscopic origin of the pairing interaction in Sr2RuO4 [2,3]. One promising scenario is that an effective pairing interaction in the p-wave channel is produced by higher-order processes of the electron-electron interaction through a Kohn-Luttinger-type mechanism [33]. In this scenario, which was first proposed by Nomura and Yamada, among the three bands (α, β and γ) formed by the t_2g orbitals, the γ-band originating from the d_xy orbital is considered to be the most important for the realization of the superconductivity. In this paper, we employ this scenario, because this approach enables us to calculate a transition temperature which is quantitatively in good agreement with experimental observations. Therefore, to discuss the appearance of superconductivity, it is sufficient to focus only on the γ-band.
Although there exists an SO interaction between the t_2g orbitals (we call this SO interaction the bulk SO interaction) which tends to direct the d-vector parallel to the z-axis, its effective energy scale for the pinning of the d-vector is small and negligible for a discussion of the transition temperature T_c. However, for electrons near the surface or an interface parallel to the RuO2 plane, there exists another kind of spin-orbit interaction, called the asymmetric SO interaction, which breaks both reflection symmetry in momentum space, k → −k, and spin rotation symmetry. This SO interaction originates from the spin-flip hopping processes between the Ru t_2g orbitals, whose wave functions are modulated by a potential gradient in the vicinity of an interface. For the superconductivity, it tends to make the direction of the d-vector perpendicular to the z-axis, as shown in section 3. In the following analysis, we assume that the asymmetric SO interaction is sufficiently stronger than the bulk SO interaction, and neglect effects of the bulk SO interaction. We then simply describe the electrons of the γ-band near an interface by the single-band Hubbard model

H = Σ_k c†_k [ε_k + αL_0(k) · σ] c_k + U Σ_i n_i↑ n_i↓, (1)

where c_k = (c_k↑, c_k↓)^t is the annihilation operator and n_iσ = c†_iσ c_iσ. The asymmetric-type SO interaction induced near the (001)-interface is incorporated into the first term of the Hamiltonian. The strength of the asymmetric SO interaction is denoted by α. We assume the Rashba form of the asymmetric SO interaction [30]. For Sr2RuO4, the dispersion relation ε_k and the Rashba-type SO interaction are approximated by

ε_k = −2t_1(cos k_x + cos k_y) − 4t_2 cos k_x cos k_y − µ, L_0(k) = (−sin k_y, sin k_x, 0),

where µ is the chemical potential. The parameters are fixed as (t_1, t_2) = (1.0, −0.375), taking t_1 as the energy unit, and the filling is n = 1.32. The hopping integrals and the electron density are chosen so that the Fermi surface of our model is consistent with the experiments. In the real system, the form of L_0(k) may be more complicated. However, as will be shown in section 3, this simplified model captures the important physics raised by broken inversion symmetry. Perturbation theory for the Kohn-Luttinger mechanism of superconductivity There are some theoretical proposals for the mechanism of p-wave superconductivity realized in Sr2RuO4. One promising scenario is that the pairing interaction in this system is caused by the Kohn-Luttinger mechanism; higher-order interaction processes due to the Coulomb interaction U give rise to effective pairing interactions in channels with nonzero angular momentum [40]. Actually, Nomura and Yamada demonstrated that interaction processes up to the third order in U yield a strong pairing interaction in the p-wave channel for the microscopic model of Sr2RuO4. This scenario successfully explains the origin of the p-wave superconductivity realized in this system. We here apply this perturbation theory for the pairing interaction to the model (1). For this purpose, we introduce the noninteracting Green's function

G^0(k) = [iω_n − ε_k − αL_0(k) · σ]^{−1},

where ω_n is the fermionic Matsubara frequency. As seen from the form of G^0_{αβ}, the Fermi surface splits into two bands whose dispersions are ε_± = ε_k ± α|L_0(k)|, with a splitting ∼ α/v_F in momentum space (v_F is the averaged Fermi velocity of the two bands). Note that L_0 = 0 at the van Hove points (0, ±π), (±π, 0), and the Fermi surface is changed little around them since the Rashba SO interaction is small there.
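As an illustration, a minimal Python sketch of this single-band model (assuming the tight-binding and Rashba forms written above; the chemical potential is set to zero for simplicity rather than fixed by n = 1.32) evaluates the SO-split dispersions ε_± = ε_k ± α|L_0(k)| and checks that the splitting vanishes at a van Hove point:

```python
import numpy as np

t1, t2, alpha = 1.0, -0.375, 0.1  # hoppings (t1 = energy unit) and Rashba strength

def eps(kx, ky, mu=0.0):
    """Tight-binding dispersion of the gamma band (mu would be fixed by the filling)."""
    return -2 * t1 * (np.cos(kx) + np.cos(ky)) - 4 * t2 * np.cos(kx) * np.cos(ky) - mu

def L0(kx, ky):
    """Rashba vector L0(k) = (-sin ky, sin kx, 0)."""
    return np.array([-np.sin(ky), np.sin(kx), 0.0])

def bands(kx, ky):
    """SO-split dispersions eps_pm(k) = eps(k) -/+ alpha * |L0(k)|."""
    l = np.linalg.norm(L0(kx, ky))
    return eps(kx, ky) - alpha * l, eps(kx, ky) + alpha * l

# The Rashba vector vanishes at the van Hove points (0, +-pi), (+-pi, 0):
print(np.linalg.norm(L0(0.0, np.pi)))  # ~0 (up to floating-point error)
print(bands(np.pi / 2, np.pi / 3))     # generic k-point: two split bands
```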
The effective pairing interaction is expanded up to the third order with respect to U. It is expressed in terms of the particle-hole and particle-particle bubbles χ_0 and φ_0 constructed from G^0, where k = (ω_n, k), and T and N are, respectively, the temperature and the number of Ru sites. χ_0 and φ_0 do not depend on spins because G^0_{↑↑}(k) = G^0_{↓↓}(k) is satisfied. We here neglect many terms in V which arise from non-zero off-diagonal elements G_{σσ′}(k) (σ′ ≠ σ) of the Green's function, because the off-diagonal elements are smaller than the diagonal elements G_{σσ}(k) by a factor of α/ε_F, where ε_F is the Fermi energy. Besides, the terms in V with spin-flip processes represent the perturbative effects of the Rashba SO interaction and, in themselves, do not have crucial importance as long as α ≪ ε_F. Each V_{σ1σ2σ3σ4} consists of RPA-like terms V^RPA and vertex-correction terms V^Ver. The former are included within the random phase approximation (RPA), and the latter are not. For spin-singlet pairing, the RPA-like terms give the dominant attractive interaction and play a significant role for its stability, while the vertex-correction terms do so for triplet pairing. The transition temperatures for the superconductivity are calculated by solving the linearized Eliashberg equation, which takes the schematic form

λ∆(k) = −(T/N) Σ_{k′} V(k, k′) G(k′)∆(k′)G^t(−k′), (9)

where ∆ is the anomalous self-energy and λ is the eigenvalue. We identify the temperature at which λ(T) = 1 as the transition temperature. The normal self-energy is neglected because it is not important in the present study. In this equation, spin-flip processes are included only in the factor G(k)G(−k) and not in V within our approximation. This is because the factor G(k)G(−k) behaves like a window function which allows only electrons near the Fermi surface to participate in the superconductivity and therefore carries non-perturbative effects of the Rashba SO interaction, while the spin-flip scattering processes in V are perturbative. We note that some of the elements of G^0_{βγ}(k)G^0_{β′γ′}(−k) are strongly anisotropic in k-space, which restricts the possible symmetries of the gap functions. Furthermore, the elements with β = β′ or γ = γ′ give rise to parity-mixing between spin-singlet and spin-triplet pairs, which is one of the most remarkable features of NCSC. The anomalous self-energy is generally written as

∆(k) = [D_0(k) + D(k) · σ] iσ_2, (10)

where D_0(k) and D(k) are the singlet and triplet parts, respectively, and the k-dependence of D_µ(iω_n, k) represents the symmetry of the superconductivity. The structure of the d-vector D is determined by two factors. One is the pairing interaction V and, in the case of Rashba superconductors, the other factor is L_0. The microscopic origins of these two factors are generally different, and the d-vector which V favors does not necessarily coincide with the one which L_0 favors. For sufficiently large α with D_µ ≪ α ≪ ε_F, the most stable direction of the d-vector is D ∥ L_0 because, if this condition is satisfied, ∆(k) in matrix form can be diagonalized with respect to the τ = ± bands. Conversely, when D is not parallel to L_0, inter-band pairs between the SO split bands are induced, which generally lead to pair-breaking effects. In contrast, for sufficiently small α, the structure of D is determined by the pairing interaction. Generally, these two factors which can determine the structure of the d-vector compete with each other. In the next section, we discuss the effects of the Rashba SO interaction on the transition temperature and the structure of D_µ. To solve the Eliashberg equation (9) numerically, we divide the Brillouin zone into 64 × 64 meshes and take 1024 Matsubara frequencies.
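To illustrate how λ(T) in the linearized equation is obtained in practice, here is a schematic Python sketch using power iteration; it uses a toy separable p-wave attraction and a single Matsubara frequency rather than the third-order vertex and the 64 × 64 mesh with 1024 frequencies of the paper, so it shows only the structure of the computation:

```python
import numpy as np

# Schematic solution of a linearized gap equation
#   lambda * Delta(k) = -(T / Nk) * sum_k' V(k, k') GG(k') Delta(k')
# by power iteration, with a toy separable p-wave attraction.
N = 32
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
KX, KY = np.meshgrid(k, k)
T = 0.02
wn = np.pi * T                                  # lowest fermionic Matsubara frequency
eps = -2.0 * (np.cos(KX) + np.cos(KY)) + 1.3    # toy dispersion (chemical potential absorbed)

GG = 1.0 / (wn**2 + eps**2)                     # schematic pair propagator G(k)G(-k)

g = 2.0                                         # toy coupling strength
f = np.sin(KX)                                  # separable p-wave form factor
# V(k, k') = -g * f(k) * f(k'), so the kernel acts as a rank-1 projection.

def apply_kernel(delta):
    """One application of -(T/N^2) sum_k' V(k, k') GG(k') delta(k')."""
    return g * f * (T / N**2) * np.sum(f * GG * delta)

delta = f.copy()                                # initial guess with p-wave symmetry
for _ in range(50):
    new = apply_kernel(delta)
    lam = np.linalg.norm(new) / np.linalg.norm(delta)
    delta = new / np.linalg.norm(new)

print("lambda(T) =", lam)                       # T_c is the temperature where lambda(T) = 1
```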
Results for the pairing state In this section, we present the results for the stable pairing states and their transition temperatures calculated by using the formulation given in section 2. Structure of pairing interaction We first show the momentum profile of χ_0(q), which plays an important role in the pairing mechanism, in figure 1 for α = 0 and α = 0.1 at T = 0.01. The difference in χ_0 between α = 0 and α = 0.1 is so small that the k-dependence of V is also changed little by the Rashba SO interaction. Indeed, we have confirmed that the k-dependence of V is almost unchanged at least up to α ∼ 0.1. This means that, in view of the pairing interaction, the symmetry of the superconductivity that is most stable for α = 0 remains the most stable also for α ≠ 0. As shown in figure 2, however, the amplitude of the pairing interaction for the triplet part, V^t = (1/2)[V^RPA_{σσσσ} + V^Ver_{σσσσ}], is decreased for large α, because the main part of V^t is V^Ver, which is sensitive to the electronic structure. On the other hand, the interaction for the singlet part is mainly determined by V^RPA, which is directly related to the α-insensitive function χ_0. Therefore, it is expected that the triplet superconductivity would be suppressed, while the singlet superconductivity would be left unaffected, by the change in V due to the Rashba SO interaction. However, as mentioned before, the Rashba SO interaction has other important effects on T_c which are non-perturbative, in the sense that G(k)G(−k) strongly restricts the possible symmetries of the gap functions and gives rise to parity-mixing of Cooper pairs. 3.2. Pairing state and transition temperature in the case without parity-mixing As mentioned above, there are two important effects of the Rashba SO interaction on pairing states: one is to constrain the direction of the d-vector of spin-triplet pairings, and the other is parity-mixing. We first examine the former effect, neglecting the effect of parity-mixing for a while. Actually, the parity-mixing is not negligible in the present study, and will be discussed in the next subsection. Neglecting the terms which mix the singlet and the triplet gap functions in the Eliashberg equation (9), we calculated the transition temperatures T_c for the spin-singlet channels and for the spin-triplet channels separately. We also computed the k-dependence of the gap functions self-consistently from (9).

Figure 3. α versus T_cs and T_ct at U = 5.5. The gap functions are roughly expressed as D_0 ∼ (cos k_x − cos k_y) for T_cs, D ∼ (−cos k_x sin k_y x̂ + cos k_y sin k_x ŷ) for T_ct (i), D ∼ [−(a cos k_x − b cos k_y) sin k_y x̂ + (b cos k_x − a cos k_y) sin k_x ŷ] for T_ct (ii), and D ∼ (cos k_y sin k_x + i cos k_x sin k_y)ẑ for T_ct (iii).

Figure 3 shows the α-dependence of T_c for the singlet (T_cs) and the triplet (T_ct) superconductivity at U = 5.5. The solid line with closed squares is T_cs, and the gap function for the spin-singlet pairing is roughly given by that with d_{x²−y²} symmetry, D_0 ∼ (cos k_x − cos k_y). For the singlet pairing, this is the only stable gap function, as in the case without the Rashba SO interaction [33]. The other lines in figure 3 are for the spin-triplet states, calculated with the assumption that the d-vector belongs to (i) the A_1 representation of the point group C_4v, (ii) B_1 and (iii) E, respectively. For these representations, the d-vector is roughly of the form (i) D_{A1} ∼ (−cos k_x sin k_y x̂ + cos k_y sin k_x ŷ), (ii) D_{B1} ∼ [−(a cos k_x − b cos k_y) sin k_y x̂ + (b cos k_x − a cos k_y) sin k_x ŷ] and (iii) D_E ∼ (cos k_y sin k_x + i cos k_x sin k_y)ẑ. All of these gap functions are p-wave gap functions.
Note that they are all different from the form (cos k_x − cos k_y)L_0, for which the gap function ∆ can be diagonalized with respect to the SO split bands and no inter-band pairing is realized [31,35,36,37]. Thus, in the spin-triplet pairing states obtained in this calculation, there are always inter-band Cooper pairs. This is due to incompatibility between the symmetry of the pairing interactions and the symmetry of the Rashba SO interaction, as mentioned before. It should also be noted that the triplet states with D_{A1} and D_{B1} are time-reversal invariant, while the triplet state with D_E is not; it is a chiral p + ip state. This is easily seen as follows. For the state with D_{A1} or D_{B1}, under the time-reversal operation the gap function for ↑↑ pairs, ∆_{↑↑}(k) = −D_1(k) + iD_2(k), is transformed into −D_1(−k) − iD_2(−k) = ∆_{↓↓}(k), and also ∆_{↓↓}(k) → ∆_{↑↑}(k). Thus the D_{A1} state and the D_{B1} state are time-reversal invariant. As can be seen in figure 3, T_cs is not so strongly affected by the Rashba SO interaction. In contrast, T_ct decreases rapidly as α increases, especially for D_E. The direction of this d-vector is not compatible with the Rashba SO interaction at all. In this case, the factor G(k)G(−k) in eq.(9) can be negative over a wide area of the Fermi surface. Therefore, the direction of the d-vector strongly tends to align in the xy-plane. On the other hand, the superconductivity described by the gap function D_{A1} is the most stable for large α, because L_0 in the Rashba SO interaction, which tends to direct the d-vector so that the condition D ∥ L_0 is satisfied, also belongs to the A_1 irreducible representation. In this sense, D_{A1} is, to some extent, compatible with the Rashba SO interaction, though it does not yet fully optimize the Rashba interaction, leading to strong inter-band pair correlations. In the case of D_{B1}, the parameters a and b in the gap function change as α is increased. For α = 0, (a, b) ∝ (1, 0). When α is turned on, a decreases while b increases, so that D_{B1} becomes closer to the form compatible with L_0. This change is continuous with respect to α and the gap function D_{B1} is transformed gradually. As will be shown later in figures 5 and 6, D_{B1} has p-wave-like character for small α and is gradually changed into an f-wave-like gap function (i.e. a = b) as α is increased. As seen in figure 3, T_c for D_{B1} has a minimum around α ≃ 0.04 and a hump around α ≃ 0.06. This α-dependence is understood as follows. As mentioned before, T_c is determined by competition and interplay between the pairing interaction and the Rashba SO interaction. As α increases, because of the change of the electronic structure due to the Rashba SO interaction, the pairing interaction in the p-wave channel becomes weak in our model. This results in the overall decrease of T_c for the D_{B1} state. On the other hand, the increase of α also makes the structure of the D_{B1} gap function more compatible with the Rashba SO interaction, suppressing inter-band pairings between the SO split bands and the associated pair-breaking effects. The slight increase for 0.04 < α < 0.06 is caused by this suppression of the inter-band pairings. In our model, for α > 0.1, the decrease of T_c for D_{B1} is substantial. Thus, the pairing state with a = b, i.e. the f-wave state, cannot be realized.
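To make the a = b limit explicit, a short check (using the B_1 gap form quoted above together with the Rashba vector in the form L_0(k) = (−sin k_y, sin k_x, 0) assumed earlier) shows how the gap function collapses onto the band-diagonal form:

```latex
\begin{aligned}
\mathbf{D}_{B_1}\big|_{a=b}
&= -a(\cos k_x - \cos k_y)\sin k_y\,\hat{\mathbf{x}}
   + a(\cos k_x - \cos k_y)\sin k_x\,\hat{\mathbf{y}} \\
&= a(\cos k_x - \cos k_y)\,(-\sin k_y,\ \sin k_x,\ 0)
 = a(\cos k_x - \cos k_y)\,\mathbf{L}_0(\mathbf{k}).
\end{aligned}
```

This is precisely the genuine f-wave form (cos k_x − cos k_y)L_0 mentioned above, for which ∆ is diagonal with respect to the SO split bands and no inter-band pairing exists.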
Pairing state and transition temperature in the case with parity-mixing The pairing states obtained in the previous subsection are drastically changed once we take into account the parity mixing of the singlet and the triplet gap functions. According to the behaviors of T_cs and T_ct in figure 3, the d-wave and the p-wave pairing states may be mixed through the Rashba SO interaction. For the admixture of the gap functions, however, only gap functions which belong to the same irreducible representation of the point group are allowed to coexist. The p-wave gap function D_{A1} with the highest T_ct belongs to the A_1 representation of C_4v, while the d-wave gap function D_0 ∼ (cos k_x − cos k_y) belongs to B_1. This implies that these two gap functions cannot be mixed. The next candidate is then the admixture of the d-wave state and the p-wave state with D_{B1}. The symmetry argument allows this admixture. Indeed, we found that the only solution of eq.(9) is the d + p(B_1) wave state. Figure 4 shows the transition temperature T_c for this d + p(B_1) state. We show the k-dependence of D_0(iπT_c, k) and D_1(iπT_c, k) for the d + p(B_1) state in figure 5 for α = 0.005 and in figure 6 for α = 0.1. Note that D_2(iω_n, k_x, k_y) = D_1(iω_n, k_y, k_x) is satisfied for the B_1-symmetric superconducting state. For small α, D_1 exhibits conventional p-wave behavior, in the sense that the Fermi surfaces cross the nodal lines of D_1 only near (±π, 0) and the amplitude of D_1 is much larger than that of D_0. Meanwhile, for large α, D_1 is more like an f-wave state, in the sense that the Fermi surfaces cross the nodal lines of D_1 around (±0.4π, ±0.7π) in addition to (±π, 0), though the single-particle energy is fully gapped with no nodal lines, as will be clarified in the next section. The change from the conventional p-wave-like gap function to the f-wave-like one is continuous, and actually the f-wave-like state should be classified as a p-wave state with higher harmonics. In our model, even for large α > 0.1, D_{B1} does not change into the form D ∝ (cos k_x − cos k_y)L_0 (i.e. a genuine f-wave state), for which no inter-band pairing exists. This is because V^t does not favor such a structure of the d-vector. The two factors determining the d-vector, the pairing interaction and the Rashba SO interaction, generally have different origins and favor different types of gap functions. Therefore, our results imply that, when the pairing interaction in a spin-triplet channel is sufficiently strong, it is rather generally hard for the gap function to be compatible with the Rashba SO interaction, resulting in the existence of inter-band pairing, in contrast to previous studies on simple models in which it is assumed that a spin-triplet pairing interaction compatible with the asymmetric SO interaction always exists [31,35,36]. In the case that the pairing interaction for the triplet superconductivity is very small compared with that for the singlet one and with the asymmetric SO interaction, the triplet component is induced by the singlet component, and hence D ∥ L_0 can be satisfied. We note that the relative phase of D with respect to D_0 is determined through the Rashba SO interaction. The eigenvalue of eq.(9) for ∆ = (D_0 − D · σ)iσ_2, where (D_0, D) is the gap function illustrated in figures 5 and 6, is smaller than that for ∆ = (D_0 + D · σ)iσ_2. The resulting gap function has no degrees of freedom with respect to the relative phase between the singlet component D_0 and the triplet component D.
As mentioned in the previous subsection, the d + p(B_1) pairing state is time-reversal invariant. Thus, the asymmetric SO interaction restores the time-reversal symmetry of the superconducting state, which is broken in the bulk of Sr2RuO4 where the chiral p + ip state is realized. This restored time-reversal symmetry bears an important implication for the realization of time-reversal invariant topological superconductivity, as will be discussed in the next section. A possible realization of topological superconductivity In this section, we present a scenario for the Z_2 topological superconductivity on the basis of the results obtained in the previous section. As mentioned in the introduction, the stability of the topological superconductivity is ensured by time-reversal symmetry and the existence of a full energy gap which separates the topologically nontrivial ground state from excited states [10,12,14,15]; there should be no nodal lines of the gap, from which gapless excitations may emerge, destabilizing the gapless edge modes and destroying the topological phase. The time-reversal invariance is evident for our d + p(B_1) state, as mentioned in the previous section. Thus, we here examine whether the single-particle energy for the d + p(B_1) wave state is fully gapped, with no nodal lines of the gap. To simplify our analysis, we assume the BCS mean-field Hamiltonian, which is, in matrix form,

H = Σ_k { c†_k [ε_k + αL_0(k) · σ] c_k + (1/2)[c†_k ∆(k) (c†_{−k})^t + h.c.] }, ∆(k) = [D_0(k) + D(k) · σ] iσ_2, (11)

where D_0(k) is the spin-singlet gap and D(k) is the d-vector for the spin-triplet component. The energy eigenvalues of (11) are obtained as [41]

E_{k±}² = ε_k² + α²|L_0(k)|² + D_0(k)² + |D(k)|² ± 2 √( |ε_k αL_0(k) + D_0(k)D(k)|² + α²|L_0(k) × D(k)|² ), (13)

together with the minus branch of the eigenvalues, −E_{k±}. When D(k) ∥ L_0(k), the energy spectrum (13) reduces to E_{k±} = √(ε_{k±}² + ∆_±²(k)), with ε_{k±} = ε_k ± α|L_0(k)| and ∆_±(k) = D_0(k) ± |D(k)|. The energy spectrum is then diagonal with respect to the SO-split-band index ±, and there are no inter-band Cooper pairs. For the p + d wave state obtained in the previous section, D(k) ∥ L_0(k) does not hold over a wide range of parameters. In this situation, the energy spectrum (13) is not diagonal with respect to the band index, which implies that there exist inter-band Cooper pairs as well as intra-band pairs. The condition for the existence of gapless excitations, E_{k±} = 0, is recast into the two conditions

ε_k² + |D(k)|² − α²|L_0(k)|² − D_0(k)² = 0, (14)
ε_k D_0(k) − αL_0(k) · D(k) = 0; (15)

k-points at which the excitation energy gap closes should satisfy both eqs. (14) and (15). We examine these conditions numerically for the d + p(B_1) state. The calculations presented in the previous sections are valid only for T ≥ T_c. Thus, for the evaluation of (14), which requires the magnitude of the gap function, we assume that the maximum values of the superconducting gaps D_0 and |D(k)| at T = 0 are obtained from the BCS mean-field relation ∆/T_c = 1.764. Using this approximation, we derive the k-points satisfying eqs. (14) and (15) shown in figure 7. We found that when α is sufficiently small, the left-hand side of (14) is positive for all k in the entire Brillouin zone, and thus there are no gapless excitations. As α is increased from 0, the left-hand side of (14) decreases, and when α reaches a value α_0 ∼ 0.02, eq.(14) is fulfilled in a certain region of k where k_x ≈ ±k_y ≈ ±0.65π. The condition (15) is also satisfied exactly on the line k_x = ±k_y, because of the B_1 symmetry; i.e. D_1(−k_y, k_x) = D_2(k_x, k_y) and D_0(k_x, k_x) = 0. This implies that the gap can collapse at certain k-points on the line k_x = k_y. Our numerical data for the gap function indicate that this gap-closing does not occur for 0 < α < α_0.
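A schematic numerical scan of the full-gap condition stated above (the left-hand side of eq. (14) in the reconstructed form) can be written as follows; the gap amplitudes, chemical potential and mesh are toy stand-ins rather than the self-consistent solutions of section 3:

```python
import numpy as np

# Schematic check of the full-gap condition quoted above:
# eps_k^2 + |D(k)|^2 > alpha^2 |L0(k)|^2 + D0(k)^2 at every k in the Brillouin zone.
N = 256
k = np.linspace(-np.pi, np.pi, N)
KX, KY = np.meshgrid(k, k)

t1, t2, alpha, mu = 1.0, -0.375, 0.01, 1.3      # mu is a toy chemical potential
eps = -2 * t1 * (np.cos(KX) + np.cos(KY)) - 4 * t2 * np.cos(KX) * np.cos(KY) - mu

L0x, L0y = -np.sin(KY), np.sin(KX)              # Rashba vector, assumed form
L0sq = L0x**2 + L0y**2

d0_max, d_max = 0.02, 0.03                      # toy gap maxima (Delta/Tc = 1.764 scale)
D0 = d0_max * (np.cos(KX) - np.cos(KY)) / 2     # d-wave singlet component
D1 = -d_max * np.cos(KX) * np.sin(KY)           # B1 triplet: D2(kx, ky) = D1(ky, kx)
D2 = -d_max * np.cos(KY) * np.sin(KX)
Dsq = D1**2 + D2**2

lhs = eps**2 + Dsq - alpha**2 * L0sq - D0**2    # left-hand side of condition (14)
print("fully gapped:", bool(np.all(lhs > 0)))
print("minimum of the left-hand side:", float(lhs.min()))
```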
For this parameter region, the bulk excitations have a full energy gap in the whole Brillouin zone, which is a necessary condition for the realization of the topological superconductivity. We next consider the adiabatic deformation of the above d + p(B_1) wave state to a topologically equivalent state. This deformation is achieved by changing parameters of the Hamiltonian without closing the bulk gap [10,11]. Since gradual changes of parameters cannot change a nonzero topological number, which is discrete, the existence of the bulk gap ensures the topological stability of the state. As mentioned above, for 0 < α < α_0, ε_k² + |D(k)|² > α²|L_0(k)|² + D_0(k)² > 0 holds. Thus we can adiabatically take the magnitude of the spin-singlet gap D_0(k) and the strength of the SO interaction α to zero without closing the excitation gap. After this deformation, the system is equivalent to a combined system of a p + ip state and a p − ip state, which indeed exhibits Z_2 topological superconductivity [14,16,17,18]. As a result, the d + p(B_1) wave state obtained in section 3 is topologically equivalent to the Z_2 topological superconducting state. In this Z_2 topological phase, there are counter-propagating gapless edge states, which are Majorana fermions [14,18]. The Majorana edge states may give rise to intriguing transport phenomena associated with spin currents [42]. Since there is a diamagnetic supercurrent on the boundary surface, thermomagnetic effects may be utilized for the detection of the spin current carried by the edge quasiparticles [18]. Also, the gapless edge quasiparticles may be observed as a zero-bias peak in tunneling currents [17,18]. Discussion and summary In this paper, we have investigated pairing states realized at the (001) interface of Sr2RuO4 by using microscopic calculations based on the Kohn-Luttinger-type pairing mechanism. It is found that at the (001) interface of Sr2RuO4 a strong admixture of p-wave and d-wave pairings is realized, and thus this system is suitable for the exploration of strong parity-mixing of Cooper pairs caused by broken inversion symmetry. An important implication of our results is as follows. When there are strong spin-triplet pairing correlations in NCSC, frustration between the pairing interaction and the asymmetric SO interaction occurs quite generally, because of incompatibility between the pairing interaction and the symmetry of the asymmetric SO interaction. This yields substantial spin-triplet inter-band pairs between electrons in the two SO split bands, even when the size of the SO splitting is considerably larger than the superconducting gap. Because of this feature, the most stable parity-mixed pairing state realized at the (001) interface of Sr2RuO4 is the d + p(B_1) wave state, in which time-reversal symmetry is restored, in contrast to the bulk Sr2RuO4, which is believed to be in the chiral p + ip state with broken time-reversal symmetry. Another intriguing conclusion drawn from our results is that this d + p(B_1) wave state can be a promising candidate for the recently proposed Z_2 topological superconductivity. That is, the d + p(B_1) wave state is topologically equivalent to the state that consists of a p + ip state for ↑↑ pairs and a p − ip state for ↓↓ pairs, which supports the existence of counter-propagating gapless edge states carrying spin currents.
This feature may be observed experimentally in the transport properties of heat currents and spin currents, as discussed in the literature [17,18,42]. Some concluding remarks are in order. For the spin-triplet pairing state obtained in the above analysis, the inter-band pair correlation between the SO split bands is substantially large. This result for inter-band pairings implies that pairing states with finite center-of-mass momentum, such as the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state [43,44], may be more stable in some parameter regions than the uniform states considered in the current paper. The stability of the FFLO state depends on the competition between the pairing interaction for this state and the kinetic-energy cost due to the finite center-of-mass momentum. The issue of a possible realization of the FFLO state at an interface of Sr2RuO4 is quite intriguing and should be addressed in the near future. In the argument for the realization of the topological superconductivity presented in this paper, we did not consider effects of the bulk SO interaction due to the d-electron orbitals, but simply assumed that the asymmetric SO interaction overwhelms the bulk SO interaction in the vicinity of the interface. Actually, there should exist a domain boundary between the chiral p + ip state in the bulk, governed by the bulk SO interaction, and the time-reversal-invariant p + d state near the interface. It is highly nontrivial how the interaction between these two states affects the stability of the topological superconductivity. However, as long as the thickness of the region where the asymmetric SO interaction is dominant is sufficiently large compared to the coherence length of the Cooper pairs, the topological superconductivity is expected to be stable in the vicinity of the interface. Another requirement for the realization of the topological superconductivity is the fabrication of an interface whose electronic structure is not so different from the bulk one. At the (001) surface of Sr2RuO4, however, it is known that a structural phase transition occurs and the surface state is ferromagnetic [45]. To observe the time-reversal symmetric superconductivity, a very carefully fabricated sample without such a structural phase transition is needed.
8,948.8
2009-02-18T00:00:00.000
[ "Physics" ]
Environmental Ethics in Poland In the 1960s, western societies discovered that unlimited technological progress has a very high price, which the environment pays. This was also the beginning of the discussions on the role of ethics in the protection of the environment and the moral aspects of the exploitation of nature. Even though the state of nature was no better in Poland, it took Polish philosophers a few decades to recognize the moral problem and to address it. The prevailing communist propaganda of progress had blurred the perception of Polish people, leaving them unable to notice the environmental problem. Thus, only in the 1990s, after the fall of communism, did Polish philosophers notice that our approach to nature can have a moral aspect and that we are exhausting the resources of the Earth. Since then, Environmental Ethics in Poland has been developing. Polish Environmental Ethics (PEE) is an interesting blend of inspiration from internationally recognized thinkers as well as original approaches to the ecological problem. The first wave of PEE addresses problems discussed abroad, namely the range of ethics (whom do we include in ethical consideration?) and the problem of values, analyzing them in the context of Polish culture and philosophical traditions. It also proposed original approaches developed by Henryk Skolimowski, as well as approaches inspired by problems considered from the perspective of Catholic theology. The article will give an insight into PEE and will present how this applied ethics has been received and cultivated in Poland in the so-called first wave. It will also highlight the problems of the second wave of PEE. Introduction While Carson (1962) was awakening an awareness of ecological problems in the 1960s in the USA, in Poland ecological problems were not even discussed and there may have been just a few people who realized that there was a problem. As in the other countries behind the Iron Curtain, in Poland the problem of the destruction of nature was of minimal concern. The government of the Polish People's Republic 1 was focused on industrialization, and the priority was how to produce more steel (Olszewski 2011, p. 15), not how to protect the environment. So, any obstacle in the way of development and industrialization was ignored. In communist ideology the environment was of little concern and it often paid a high price for industrial development. However, the government realized how huge the destruction of nature was and there was an awareness of the problem of pollution. Up to the 1970s, pollution was not a consideration, and in later years it was not much more visible (Ibidem). The imperative of industrialization was so strong that nature and the lives of workers had to be sacrificed for the development of the economy. Poland was the first country in the Eastern Bloc to realize how big the destruction of nature was, but even though it was first, it was still "much too late" (Ibidem). As Radosław Gawlik claims, "the authorities knew that water from the rivers is not suitable even for industrial purposes, Silesia is suffocating, and lack of water sewage treatment plants is changing Masurian lakes into a cesspool." (Gawlik 2011, p. 17) However, this knowledge was not accompanied by actions; rather, it was hidden and censored. The book entitled "Czarna księga cenzury" 2 showed how much of the censors' attention was devoted to masking the problems of the destruction of nature and the threats to the natural environment.
Moreover, the Academy of Social Sciences of the Central Committee of the Polish United Workers' Party prepared, in the mid-1980s, a document presenting the threats to the environment. It was not published and was not presented to the public; it was exclusively available to the Party's elite. The picture drawn there was very pessimistic. Even though the government was clear about the state of the natural environment, it was not willing to deal with this issue or even to make it public. The dictatorship of progress prevailed, and any voice advocating the protection of nature was subdued and not allowed to interfere with development. Luckily, the analysis of the state of nature was adopted by the opposition, who started a movement for the protection of nature. Thus, before 1989, advocating for nature was also an act of opposition towards the communist government (Kassenberg 2014, p. 6). Moreover, there was an inner contradiction in the governmental approach to this area. On the one hand, people were obliged to realize social goals, and protecting nature was one of them, while on the other hand the maximization of production was one of the main goals of the government and was very much in contradiction with taking care of nature (Ibidem). This ambiguity made any social movement advocating for nature part of the opposition, at least until 1989.
1 The name of the country under communism/the communist regime.
2 The translation of the title is: "The Black Book of Censorship". The book presents texts and documents describing the mechanism of communist propaganda and exhibiting some of the content that had been censored. The book was first published in 1977.
Sketch of the History of the Beginning of PEE The historical context explains why environmental concern was raised in Poland almost two decades later, and in Polish academia three decades later, than in the USA and in Western countries. Even though, in the 1970s and 1980s, a group of scientists had raised the issue of the environment, the problems were made known to the governing elite and not to the public at large. Therefore, ethical reflection on the environment only emerged after the fall of communism, and it has been very much a live issue since the 1990s. In 1995, during the sixth Polish Philosophical Congress in Torun, there was even a section on Environmental Philosophy. The 1990s saw the beginning and flourishing of environmental reflection in Polish philosophy. How vivid and diverse the movement was can be seen in Dołęga's (2006, pp. 18-19) typology of Environmental Philosophy, listing 15 types of environmental philosophy cultivated in Poland. Moreover, the typology only covers the situation up to 2006, when the article was published. The Congress in Torun was an important event in the history of reflection on environmental issues in Poland. The Congress was the inspiration for the setting up of a scientific seminar that has met a few times a year to discuss environmental issues. For many years, it took the form of informal cooperation. In 2016, the participants of the seminar signed a written agreement setting out a framework for formal cooperation. The initiators of the meetings were four philosophers: Professor Józef M.
Dołęga from Cardinal Stefan Wyszyński University in Warsaw, Professor Zbigniew Hull from the University of Warmia and Mazury in Olsztyn, Professor Andrzej Papuziński from Kazimierz Wielki University and Professor Włodzimierz Tyburski from Nicolaus Copernicus University in Torun, the host city of the Congress. These four philosophers could be called the founders of environmental reflection in Polish philosophy and ethics. However, there were also other Polish philosophers working on environmental ethics and environmental philosophy. One of the areas of interest was the ethical aspect of environmental protection, and one of the most important contributors at that time was Marek Bonenberg (1992), the author of a book entitled "Etyka środowiskowa. Założenia i kierunki". 3 According to Professor Tyburski, this was the first and a very competent presentation of the variety of environmental ethics developed in Anglo-Saxon philosophical and ethical reflection (Tyburski 2006, p. 12). In this book, the author presented a wide spectrum of environmental ethics and the prevailing discussions in this area. It includes the thoughts of Tom Regan, Robin Attfield, Paul W. Taylor, Aldo Leopold, J. Baird Callicott, Holmes Rolston III, Edward Goldsmith and Henryk Skolimowski, as well as ideas like deep ecology and the Gaia hypothesis. The book was an important introduction of the phenomenon of environmental ethics to Polish readers, and it served as a tool for the popularization of the moral aspect of environmental protection. In the 1990s, a few other books were published presenting an original approach to the discipline. Professor Włodzimierz Tyburski made an important contribution to the development of PEE. He has had a significant influence on its development and he set the framework for discussions. His first books about the issues discussed were: "O idei humanizmu ekologicznego" (1990), "Pojednać się z Ziemią. W kręgu zagadnień humanizmu ekologicznego" (1993) and "Etyka i ekologia" (1995a, b). 4 These works present the main approaches to environmental ethics; they introduce and analyze the issue of values in applied ethics; they present the deontological interpretation of environmental ethics and the educational aspect of the discipline (Tyburski 2006, p. 13). These books gave the impulse for others to carry out research in environmental ethics and started the intellectual debate within this field. Later, other authors also contributed. Among the first published books, there are a few that have to be mentioned: "Ekofilozofia i bioetyka" (1996a), 5 "Ekonomia-ekologia-etyka" (1996b), 6 "Kryzys ekologiczny w świetle ekofilozofii" (1996), 7 "Etyka środowiskowa. Nowe spojrzenie na miejsce człowieka w przyrodzie" (1998), 8 "Etyka środowiskowa. Teoretyczne i praktyczne implikacje" (1998), 9 "Życie-nauka-ekologia. Prolegomena do kulturalistycznej filozofii ekologii" (1998), 10 "Próba zbudowania chrześcijańskiej etyki środowiska naturalnego" (1998) and "Rozdroża ekologii" (1999). 11 These are just a few of the books that were published in Poland during the first few years of reflection on environmental ethics; this first phase of the movement can be called its first wave. The first wave of environmental ethics is constituted by the group of thinkers who laid the foundations of PEE. Nowadays, many young philosophers address these problems; they either continue the work of the representatives of the first wave or start reflection in new areas of ethical concern for the environment.
4 The titles of the books are: "On the Idea of Ecological Humanism", "Unite with the Earth. Issues in Ecological Humanism" and "Ethics and Ecology". 5 English title: "Environmental Philosophy and Bioethics"; the book consists of the texts presented during the VI Polish Philosophy Congress in Torun by philosophers such as Zdzisława Piątek, Włodzimierz Tyburski, Józef M. Dołęga, Zbigniew Hull and Andrzej Papuziński (editor Włodzimierz Tyburski). 6 English title: "Economy-ecology-ethics"; a book edited by Włodzimierz Tyburski, consisting of articles devoted to the analysis of axiological problems connected with the development of the economy. 7 English title: "Environmental Crisis in the Light of Environmental Philosophy"; this book by Konrad Waloszczyk presents philosophical reflections on the ecological crisis. 8 English title: "Environmental Ethics. New Insights into the Position of Human Beings in Nature". The book was written by Zdzisława Piątek, who uses a biocentric perspective to try to explain the role and place of human beings in nature and their obligations towards the environment. 9 English title: "Environmental Ethics. Theoretical and Practical Implications." This book, edited by Włodzimierz Tyburski, presented the results of the first Polish conference on environmental ethics. 10 English title: "Life - science - ecology. Prolegomenon to the Culturalistic Philosophy of Ecology". This book by Andrzej Papuziński presented an original interpretation of the philosophy of ecology in the context of previous discussions by Polish philosophers. 11 These two books make an effort to interpret environmental ethics from the perspective of Catholic theology. Their English titles are: "Attempt to Build Christian Ethics for a Natural Environment" (Julisław Łukomski) and "Crossroads of Ecology" (Tadeusz Ślipko and Andrzej Zwoliński). However, it is not only books that represent the development of the discipline. There are at least two journals dealing with environmental issues from an ethical and a philosophical perspective, namely: Studia Ecologiae et Bioethicae 12 and Problems of Sustainable Development. 13 It has to be emphasized that articles presenting problems in the discipline have also been published in other journals, incorporating this area of applied ethics into wider ethical discussions. As can be seen from the number of publications, the discipline has been of interest to Polish philosophers. However, numbers and a quantitative approach never give an adequate insight into the subject matter itself. In the next section, I will look at the more qualitative aspect, namely the problems addressed by PEE. Problems Addressed in the First Wave of PEE The history of the first few years of environmental reflection in ethics has been quite dynamic, recently even more so. More philosophers have become interested in the issue. Even though the sources of inspiration for the appearance of the discipline and for the problems addressed were mostly Anglo-Saxon and German philosophers, the history and culture of the country have shaped a distinctive perception of the field. PEE is a discipline that can be divided into two historical periods: the first wave, which developed from the start of PEE and laid the foundation for further discussions, and the second wave, the contemporary philosophers and ethicists, students of the founders and their philosophical children, who either continue their work or develop their own original work.
The first wave can be divided into four main research areas; two of them are connected with the two most frequently asked questions in environmental ethics (see Brennan and Lo 2016), namely the questions about the frames of ethical consideration and about environmental values. The other two are the ecophilosophy of Henryk Skolimowski and the opposition to Skolimowski's philosophy: environmental ethics inspired by Catholic theology and teaching. Dispute Over the Frames of Ethical Consideration Setting the frames for ethical consideration is one of the crucial issues that environmental ethics undertakes. The privileged position of human beings is very flattering for us and it makes us feel that we can subdue nature in so many areas. The anthropocentric approach is the subject of doubt for many environmental ethicists, and it has been questioned in philosophy since at least the time of Nietzsche (Gunkel 2012, p. 109). Even though human beings depend entirely on nature and on the ecosystem, we do not treat nature as it should be treated, and natural goods are overexploited, making our natural debt bigger than ever before. Therefore, setting a wider context for our ethical consideration and including nature and the natural world in our ethical choices is a very challenging task. It reaches the roots of our cultural identity as a species that rules over the world and changes it according to its fancies and whims. We operate under the illusion that we can control nature. We forget that we are dependent on nature. In such an intellectual atmosphere, when someone comes along saying that we are not different from the other beings in the natural world, it creates a controversy and it disrupts the way we think about nature and our place in it. The first person in Poland to declare herself a biocentrist was Professor Zdzisława Piątek. In her book "Etyka środowiskowa. Nowe spojrzenie na miejsce człowieka w przyrodzie", she presented the theoretical aspect of reframing the ethical borders of moral consideration. Piątek tries to answer the question of the relation of the new ethics to traditional ethical approaches and the role of ethics in our relations with others, emphasizing that biocentrism holds the following:
• Not only human beings, but also nonhuman living organisms have an intrinsic value.
• Not only human beings, but also nonhuman living organisms have an awareness, proper to them, of vital values and know how to live according to their nature.
• Every living organism is the measure of those aspects of the environment with which it co-works to live.
• The biosphere should not be exploited and managed only from the perspective of human interests (Piątek 1998, pp. 11-12).
Biocentrism demands the rejection of speciesism; this new way of thinking can be achieved when philosophy is rethought in the context of new scientific discoveries, and it is a fruit of critical reflection on the place of human beings in the universe. The ontological gap between human beings and other living beings has to be removed. The theory of evolution has provided the reasons to overcome homocentrism and anthropomorphism (Piątek 1998, p. 13). According to Piątek, "The theory of evolution is a test of the neutral perception of living Nature. It rejects the illusion of anthropocentrism and it overcomes the tendency to accept teleological explanations (…) some illusions may be fully justified in the technosphere but, when they are projected onto the biosphere, they do not make sense."
(Ibidem). Unlike for Paul Taylor, for Piątek living organisms are not teleological centers of life. Teleology is reserved for artefacts, for the technological world, not for the natural one. However, Piątek recognizes Taylor's philosophy as the most appropriate in terms of enabling human beings to live harmoniously with the other living organisms on Earth. Thus the classical ladder of beings is changed. 14 It becomes the tree of life, where the human race is just one of the branches of the tree. This branch is not better than the others. It is equal to all other branches; thus human beings are dethroned from the center of the universe and become just another element of the natural ecosystem. Human beings are not different from other living organisms in the natural ecosystem, so a differentiation between human living organisms and nonhuman living organisms is not valid; both are equal. Both groups have their unique qualities and their special place in the natural ecosystem. Each has an intrinsic value, and the value of natural organisms does not depend on their utility for human beings; they are valuable in themselves. Arrogant anthropocentrism has to be rejected and we should adopt the attitude of a neutral observer of the natural world. This attitude, especially biocentric egalitarianism, has been criticized, since in a non-hierarchical world there are no norms directing preferences towards choosing the norm (Wróblewski 2002, p. 80). If there is no hierarchy in the world of values, there are no moral values. Then all values are the same. However, biocentric ethics can be accepted for pragmatic purposes. Even though its theoretical background is not perfect, it has a great capacity to stimulate environmental thought and action (Wróblewski 2002, p. 81). The declaration of Professor Piątek as a biocentrist makes her an important figure in PEE. Her declaration has been admired and followed by some, but it has been criticized by others. Therefore I present the discussion on her framework of ethical consideration based on her works. 15 The wider discussion on the context of moral consideration has been mostly focused on the following four approaches: anthropocentrism, 16 biocentrism, holism and animal ethics (Tyburski 1999, pp. 101-113). 17 The latter has recently been experiencing a revival and it is one of the most original and promising areas of research in Poland. The Axiology of PEE The other most important question of environmental ethics, namely the question about values, has also been a subject of debate in PEE. One of the main disputants in this area is Professor Włodzimierz Tyburski. His philosophical background is his education at Nicolaus Copernicus University in Torun, and most of his research work has been devoted to ethics and axiology. His work is inspired by the eminent Polish philosopher from Nicolaus Copernicus University, Henryk Elzenberg, whose main research area was axiology and who established at the university in Torun a tradition of philosophical reflection on values.
15 On the other side of the barricade is anthropocentrism, trying to protect the traditional approach. The dispute over anthropocentrism in PEE has been limited in Poland. There are some glimpses of it in the Catholic-inspired area of PEE. Catholic philosophers protected the traditional Aristotelian-Thomistic perception of the hierarchy of beings. However, they were critical of perceiving the human being as too privileged. For example, Tadeusz Ślipko has written about anthropolatria (from the Greek terms ἄνθρωπος, human being, and λατρεία, adoration). The term means that the human being is not only the center of the world but also the subject of a quasi-religious cult (see Ślipko and Zwoliński 1999, p. 105).
16 In many forms, from strong to weak or, as the Stanford Encyclopaedia of Philosophy calls it, prudential or even cynical (Brennan and Lo 2016). 17 However, it is not the only typology of environmental ethics. The other popular one was inspired by Teutsch and proposes the following approaches: egocentric (the approach to environmental protection that protects the interests of certain individuals or social groups); anthropocentric; patocentric (from the Latin patiens, suffering, proclaiming ethical consideration for every sentient being); biocentric; and holistic (see Pawłowska 1993).
So, the continuation of the research interests of the university is clearly visible in the issues analysed by Tyburski in environmental ethics. According to Tyburski, the construction of a catalogue of values and norms has to enable the following:
1. Solving conflicts between human beings and the natural environment.
2. Moral judgement of human actions directed at the environment.
3. Motivating people to act to preserve and protect the environment, thereby sustaining a state of the environment that is safe and beneficial for the human world as well as for animals and plants (Tyburski 1999, p. 114).
Values, as stated by Henryk Skolimowski, are the guardians of natural goods (Skolimowski 1993, p. 189); they play a crucial role in protecting the environment and are like a lighthouse showing the way to people on the sea of life. Tyburski proposes the division of environmental values into two groups:
1. Values that are aims in themselves: life and health.
2. Values that are the way to an aim: responsibility, moderation, justice, solidarity/community.
The first group has a crucial role in the axiology of environmental ethics; it is a category of values that are of tremendous importance and are valuable in themselves. No other conditions have to be fulfilled to find them important or relevant for environmental ethics. In this group, Tyburski lists two values: life and health. Life is a fundamental value in biocentric ethics, which has made the life of the organism a criterion for protection and care. This criterion puts some organisms within the context of ethics and puts the others far beyond it, making them irrelevant for ethical consideration. As the motto of Schweitzer's philosophy says: "I am life that wants to live, in the midst of life that wants to live." Life is a crucial value for environmental ethics, but it is also a problematic one. Life is present in bacteria and insects as well as in human beings, and this makes the category of life problematic. The problematic equality of life itself has been answered in various ways. One of the solutions to the problem was delivered by Skolimowski, who said that the greater the complexity of an organism, the greater its right to be protected and to live. Thus the problematic insects will retreat in their fight for survival with human beings. 18 The other crucial value that is an aim in itself is health. It is one of the most underappreciated values and probably the humblest one, recognized as precious only when it is lost. As the Polish poem from the end of the XVI century says: "my good and noble health. Thou matt'reth more than wealth. None know'th thy worth until Thou fad'st, and we fall ill" (Kochanowski 2007).
It is a value that is transparent; it is like Jonas's heuristic concept of fear, with which he explains the subtle presence of goodness that is not noticed until it is lost, while something bad is more apparent and visible. Health has the same characteristic. Without it, nothing can be done, and nothing contributes more to the quality of life. However, it is a silent witness of our life that remains humble and stays unnoticed, hence it is often neglected. An ecological crisis raises many questions about human health; new environmental problems bring diseases and lead to previously unknown threats to human life. There is another group of values that are equally important, and they play a regulative role in the human-nature relationship (Tyburski 1999, p. 120), since they determine behavior and human choices. These are the values that are the way to moral choices in environmental protection. It has to be emphasized that Tyburski is one of the most recognized Polish environmental ethicists: his works are iconic and serve as a roadmap in the world of Polish environmental ethics. The influence of German philosophy can also be seen in PEE (Konstańczak 2006, p. 210). It can be seen in the reflection on the philosophy of responsibility. The paradigm of philosophy started by Georg Picht and propagated by Hans Jonas and his student Dieter Birnbacher has also been a subject of wide discussion in Poland. Generally, the category of responsibility has been widely discussed in Poland. 19 One of the contributors to this discussion was Helena Ciążela (2006), who made a comparison between the theoretical basis of the philosophy of responsibility and its practical application by Aurelio Peccei and the Club of Rome. The philosopher recognizes the role that the philosophy of Georg Picht has had in building the theoretical framework for responsibility over nature. Even though Jonas's philosophy is much more prominent, it was Picht who laid the foundation for thinking about the environment as a subject of our responsibility. This responsibility is a central category of global ethics; it is a category that has been rejected and needs to be brought back to ethics and made to constitute our responses to global challenges. Responsibility is the foundation of the enlightened utopia postulated by Georg Picht. According to Picht, we cannot escape thinking about the future, thus we often end up constructing some sort of utopia (or, as is the etymology of the word, "a place that does not exist"). However, if our thinking about utopia is critical, we do not create this non-existing place but rather, as a result of enlightened thinking, a place that can exist in the future. If methodical criticism is used, the picture of the future is not utopian but a realistic one. Being responsible for the world makes us use rationality in the right way and helps us to create not u-topos but a possible world for the future. 19 One of the greatest contributions is that made by Jacek Filek, who has presented responsibility as a philosophical category in the philosophies of chosen, mostly German, thinkers, thus presenting an interesting study on the issue.
The study enables one to learn how the concept of responsibility has been understood in philosophy and how it has changed over the years; among the analysed concepts are the reflections of the following philosophers: Edmund Husserl, Nicolai Hartmann, Martin Buber, Eberhard Grisebach, Wilhelm Weischedel, Dietrich Bonhoeffer, Friedrich Nietzsche, Søren Kierkegaard, Jean-Paul Sartre, Georg Picht, Richard Wisser, Roman Ingarden, Johannes Schwartländer, Manfred Riedel, Hans Jonas and Emmanuel Levinas (more in: Filek 1996, 2004, 2010). An example of enlightened utopia is the work done by the Club of Rome, which has been an eye-opener for the public. The reports prepared for the Club have attracted the interest of the mass media, thus informing the public about the problems of the environment, often in a very alarmist manner. The structure of the Club has enabled its members to undertake actions leading to a rise in global responsibility. In the Club, people from the worlds of academia, business and politics have met to solve global challenges. As Picht had postulated, it has changed the traditional structures in which problems had been analysed. People who met in the discussion forums of the Club represented various approaches to the world, various interests, various outlooks (sometimes even contradictory to each other). The aim of the Club was to initiate discussions of global challenges that had not been discussed in traditional research institutions or other forums. The impressive role of the Club in preparing such accurate reports, its use of systems thinking methods, and its work overall are an example that a utopia of responsible worlds can be realized when the right and reasonable tools are employed. Thinking about values has constituted an important part of the discussion of environmental ethics and it is an important input for PEE. It is also one of the trends that is being developed. New ideas and trends are being implemented in the second wave. Environmental Ethics Inspired by Catholic Teaching The environmental crisis is a very complex and multi-layered phenomenon; it can be analysed from various perspectives, not just from the perspective of ethics and philosophy but also from a religious one. In PEE, a movement has become apparent that tries to analyze the moral obligations of man towards nature in the context of Catholic theology and teachings. In other European countries it was usually Protestants who developed ethical frameworks for ecological inclusiveness in theology. In Poland, 20 mostly because of the quantitative advantage of Catholics, eco-theology tended to be Catholic. Theologians addressing environmental issues have tried to deal with the ecological crisis from the perspective of the "Christian concept of a human being, whose connections with nature also have ethical meaning" (Łukomski 2000, 35). They make an attempt to construct realistic and personalistic ethics regarding the natural environment. These are answers to the environmental ethics inspired by Far Eastern philosophies, such as those proposed by Skolimowski and by Naess. The theoretical framework for this approach has a few sources of inspiration: the thinking of John Paul II, Catholic teaching, Polish Neo-Thomistic philosophy and personalism. This philosophy is an anthropocentric one; it is built on a hierarchy that places human beings higher on the classical ladder of beings, where human beings rule over nature.
However, the context of the famous quotation from Genesis that according to White (1967) is the root of destruction of nature by Judeo-Christian religions has been analysed widely. White's words have been rejected on the basis of the claim that the quote has been misunderstood and led to an imperfect understanding of the role of human beings in protecting the rest of nature rather just cruelly ruling over it without considering its natural constraints. The basis of Catholic inspirations are the words of Popes, mostly Paul VI and John Paul II. The first one mostly emphasized the problem of unequal distribution of goods. However, in 1970, during the meeting with the representatives of the Food and Agriculture Organisation he recognized that intensive agriculture had led to an unbalanced ecosystem that might lead in future to an ecological crisis. He urged his listeners to rethink the scale of humans' interference with the Earth. He also emphasized the deep respect for nature that underlies Christian tradition (Ś lipko and Zwoliński 1999, p. 24). Many more remarks on our obligations towards the natural world can be found in the teachings of John Paul II, who recognized the correlation between care for the natural world and a society built on peace. The world as created by God is the kind of good that has to be managed by man in a good and wise way, without overusing it and unbalancing natural order. We should rein back unlimited and greedy consumption. Our duty is to stop our progression towards a civilization of trash, a civilization of waste. A culture based on consumption leads to the destruction of basic human values and it destroys the value of human life. Unlimited consumption is dangerous not only for nature but also for ourselves. Rather than aiming to "have" we should aim to "be". Thus the basic teaching of the Popes is that the Earth is a gift from God and the natural world is a heritage that belongs to all mankind. The premise from this teaching is that the gift given by God makes us responsible for keeping the natural environment in such a state that it could serve higher goals than it has. Man has an obligation to rule over nature and its components in a wise way and with love (Ś lipko and Zwoliński 1999, p. 49). The practical aspect is overcoming the prevailing paradigm of a life-style based on consumption and the adoption of more moderate lifestyles in terms of consumer goods. Also, the other practical aspect is to protect the environment and to make efforts to restore those ecosystems that have been destroyed by human beings. Thus, on the basis of Catholic teaching, philosophy has tried to construct Christian environmental ethics, whose first rule is: "man has a right and obligation to treat the world according to its triple function in shaping the moral excellence of the human being as a person. Thus he should respect the natural status of the natural environment, protect it from devastation and use it within the framework designed by the ratio of reasonable harmonization with the entitlements that man has" (Ś lipko and Zwoliński 1999, p. 139). Under the influence of theological reflections, PEE has employed a few terms of theological origin, such as ecological conscience, meaning as a special inner power that enables one to make ethical choices in terms of environmental protection. This term and the term "ecological sin" have been also included in public discussion about the ecological crisis. 
The terms appeared in a television campaign (in 2010) by the Ministry of Environment that aimed at teaching proper waste management. The light and funny plots of the short spots made this an easy way to make the terms part of public discussion. Therefore, terms with a theological background have become widely known to the public as symbols of the moral aspects of environmental protection. Henryk Skolimowski's Ethics Last but not least is the philosophy of Henryk Skolimowski, the most original approach in PEE, yet a very controversial one. He has built an original concept of ecophilosophy 21 with interesting ethical dimensions. First of all, he has rejected the traditional philosophies, especially the following ones: (1) the mechanistic conception of the world (Isaac Newton, Pierre Simon de Laplace); (2) homo homini lupus est, man as an egoist (Thomas Hobbes); (3) the invisible hand of the market (Adam Smith); (4) the theory of class warfare (Karl Marx, Friedrich Engels). These four are to be blamed for ethics built on control, manipulation, efficiency, competition and reification of the world. So, for new ethics they have to be rejected and substituted with more constructive values that the universe needs now. He presented his ideas first in 1975 and then in Eco-philosophy: Designing New Tactics for Living in the mid-1980s. The second one was the crucial work, where he also explains his concept of environmental ethics. Skolimowski prefers to call his work a reflection on ecological ethics, since, according to him, environmental ethics "concerns itself with the appropriate management of natural resources and is often guided by cost-benefit analysis" (Skolimowski 1984, p. 45), while ecological ethics "is much broader as it spells out the relationships between man and nature; and also analyses those attributes of man which can make him an ecological animal" (Ibidem). In both types of ethics, values play a fundamental role, and it is crucial to recognize the intrinsic value of life and living forms that deserve our respect. Skolimowski emphasizes that "the conservation strategy is a value programme" (Ibidem). The first step towards the discovery of the value of the world is right thinking. Right thinking leads to right actions. In our thinking about the world and conservation, we have to learn how to include values and the environment. Values are like road signs that show the right way and lead to the right choices, to the right conservation; conservation is nothing other than caring, caring for nature seen as a being that has an intrinsic value rather than as a property. In conserving nature, we have to recognize its uniqueness and we need to see ourselves as nothing other than the guardians of the sacred universe. We are obliged to be responsible for the universe, and yet it is a great privilege to serve in a sanctuary of the world and to save life in all its diversity. Every form of nature is an important part and has to be protected regardless of its current value for man. Even though it may not seem significant, it is one of the elements of a wider unity, and man has no right to interfere in a way that will destroy the integrity of the ecosystem. Skolimowski was one of the first thinkers to employ the concept of stewardship instead of ruling over nature. As he emphasizes, "man did not weave a web of life, he is merely a strand in it. Whatever he does to the web, he does to himself" (Ibidem, p. 46). We are just a part of evolutionary processes, and we have to accept evolution and its results.
He suggests the following precepts (imperatives) that can be "extricated from an intelligent reading of evolution:
• Behave in such a way as to preserve and enhance the unfolding of evolution and all its riches.
• Behave in such a way as to preserve and enhance life, which is a necessary condition for carrying on evolution.
• Behave in such a way as to preserve and enhance the ecosystem, which is a necessary condition for further enhancement of life and consciousness.
• Behave in such a way as to preserve and enhance the capacities which are the highest developed form of the evolved universe: consciousness, creativeness, compassion.
• Behave in such a way as to preserve and enhance human life which is the vessel in which the most precious achievements of evolution are contained" (Ibidem, p. 49).
These are the five evolutionary imperatives; they are nothing more than variations of the idea that nature has to be protected in all its diversity. The followers of deep ecology have criticized Skolimowski's imperatives as too anthropocentric. In his defense, he claimed that old-fashioned anthropocentrism cannot be found in his imperatives. He has been very critical of egalitarianism as postulated by biocentrism. According to him, the right understanding of evolution does not give the same value to the life of a mosquito and of a human being. He claims that an egalitarian approach is "also against the modus operandi of nature; and of all of evolution" (Ibidem, p. 50). He has also been critical of deep ecology for its view on population, namely the concept that returning to the original natural conditions of hunter-gatherer societies would lead to the elimination of 80% of the world population, thus leading to massive genocide. This postulate is hidden in the concept of deep ecology. The other point of criticism was that the platform of deep ecology is, according to Skolimowski, not comprehensive enough to realise all of the aims that deep ecology announces. Also, in his disputes with deep ecology proponents, he was very critical of their connections with radical environmental movements like Earth First!. Skolimowski's ethics were much more open to social issues than those of other environmental ethicists. He emphasized the role of harmonious cooperation in nature as well as in society. The relations in nature and the relations between people should mirror and exemplify the good and harmonious relations between human beings and nature. He claims that the "self-reliance of individuals and nations is one side of the coin; ecological diversity is the other side of this coin. In order to do justice to the variety of lands, climates, circumstances and traditions we have to cultivate diversity in ecological, agricultural (as well as in cultural) terms. Because this diversity is the basis of self-reliance; and vice versa; self-reliance in the vastly varied circumstances of our globe, and within different traditions simply means encouraging and maintaining diversity. Diversity means heterogeneity; it means the opposite of homogeneity. Homogeneity profits central economies, as in high-tech homogeneity. Heterogeneity profits local people; it enables them to be self-reliant; ultimately it enables them to be responsible and good stewards" (Ibidem). Skolimowski's thinking is presented at the end even though it should perhaps have opened the discussion, because, historically, he was the first environmental philosopher of Polish origin.
The reason for such a sequence of presentation is that most of his influential works were written while he was working abroad at universities in the USA. Nevertheless, he played an important role in Polish environmental thought and he is recognized as one of the most interesting and original thinkers. His thinking has been an inspiration for many, and it has encouraged reflection on the concept of environmental ethics even in those who did not agree with his concepts. Future Development of PEE The concepts described above can be called the first wave of environmental ethics and environmental philosophy in Poland; these are the thoughts and ideas of the first generation of philosophers who were inspired by the ecological crisis and who tried to analyze it in the context of philosophical or ethical reflection. I focused deliberately on the beginning of the reflection and on the very early works in PEE, since these concepts have laid the foundation for further development. These philosophers started environmental reflection in Poland. Some of them are still active and all of them still have an impact on the development of the discipline. In some way, they are like the founders of the discipline who will always be quoted and remembered by their future followers. Some of the books quoted here are as iconic as Walden; or, Life in the Woods or Silent Spring are for American environmental philosophy. What is interesting is also the direction in which the discipline will develop. It seems as if some directions will be strengthened and some new areas of reflection are appearing. Let us look at the possible future developments, even though it is hard to know what future reflection will look like, and all scenarios are just probabilities with no certainties. However, on the basis of current discussions at environmental philosophy seminars and current publishing activity, we can anticipate what kind of reflection will be continued in the future by the representatives of the first wave as well as by the emerging generation of the second wave, as presented below. On the basis of the current analysis it seems that there is a huge interest in axiological reflection. There are many interesting works being done on the value of moderation in the context of de-growth and zero-growth ideas. Thus the concept of minimal life has become the cornerstone of philosophical analysis at environmental philosophy seminars and in articles (see Stachowska 2016a, b). While this value is a high ideal, it is virtue that plays a tremendous role in the practical application of values. Environmental virtue ethics is an emerging area of reflection that has been discussed, and we can see the renaissance of virtue ethics in the reflection on human flourishing that has also entered environmental discussions (see Dzwonkowska 2013, 2014, 2016a). 22 The other field that could constitute a second wave of environmental ethics is the continuation of theological-religious reflections understood in a very wide context. There is a certain amount of interesting research on religious aspects of the human-nature relationship, like the one analyzing White's (1967) claim that Judeo-Christian religions are to be blamed for the ecological crisis (Sadowski 2015). Moreover, Pope Francis' encyclical Laudato Si' has raised more interest in Polish academia than in the Polish Church, and a few interesting initiatives (conferences and publications) have been undertaken in this area.
However, the most vivid and promising development is in the area of animal ethics. Research from many disciplines has been undertaken on the problem of ethical obligations towards non-human others and the human-animal relationship. 23 This area of research has been a part of environmental ethics (Tyburski 1999). It has become a subject of research for which the term animal ethics or even animal philosophy would not be sufficiently broad. It is rather human-animal studies: an interdisciplinary approach that analyses the nature of the relation of human beings to non-human others. Summary The overview of PEE presented above is a very general one. It has been a quick look at a few chosen concepts rather than a deep reflection on them. The panorama of Polish environmental ethics is much wider than this. However, the composition of this article has forced me to highlight a few of the most visible and widely discussed problems or philosophers, which can be summed up in the four problems and trends of the first wave of PEE presented above. As can be seen, PEE is a blend of concepts and ideas inspired by the ideas of Anglo-Saxon or German philosophers. However, the reception of this thought has been connected with the specific historical and cultural background. At the beginning, these works were very important in promoting discussion of the international literature. The development of the discipline has produced original approaches that were developed during the first wave of Polish environmental ethics. Time is still needed to see how these approaches will influence the development of the second wave of PEE. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 22 In May 2016, the Institute of Philosophy at Nicolaus Copernicus University organized a conference on Environmental Virtue Ethics. 23 One of the most impressive initiatives was the opening in 2014 of the Laboratory on Animal Studies - Third Culture, a research group uniting researchers from various disciplines and institutions to carry out research on animals and the human-animal relation.
10,617.6
2017-02-01T00:00:00.000
[ "Philosophy" ]
Theoretical predictions on the electronic structure and charge carrier mobility in 2D Phosphorus sheets We have investigated the electronic structure and carrier mobility of four types of phosphorus monolayer sheets (α-P, β-P, γ-P and δ-P) using density functional theory combined with the Boltzmann transport method and the relaxation time approximation. It is shown that α-P, β-P and γ-P are indirect gap semiconductors, while δ-P is a direct one. All four sheets have ultrahigh carrier mobility and show in-plane anisotropy. The highest mobility value is ~3 × 10⁵ cm²V⁻¹s⁻¹, which is comparable to that of graphene. Because of the huge difference between the hole and electron mobilities, the α-P, γ-P and δ-P sheets can be considered as n-type semiconductors, and the β-P sheet can be considered as a p-type semiconductor. Our results suggest that phosphorus monolayer sheets can be considered as a new type of two-dimensional material for applications in optoelectronics and nanoelectronic devices. Since the successful preparation of graphene 1 , two-dimensional (2D) atomically thin materials, such as graphdiyne sheets 2 , boron nitride sheets 3 , silicene 4 and layered transition-metal dichalcogenides 5,6 , have attracted intensive attention owing to their unique physical properties and potential applications in nanoscale devices. Recently, owing to the synthesis of few-layer black phosphorus (BP) 7-10 , named phosphorene, 2D phosphorus materials have become the focus of the science community [11][12][13][14][15][16][17][18][19][20][21][22] . BP is the most stable phosphorus allotrope under normal conditions, with a direct band gap of about 0.3 eV [23][24][25] . The direct band gap increases to ~2.0 eV 26,27 as BP is reduced to a monolayer, which opens doors for applications in optoelectronics. Furthermore, bulk BP is found to have high carrier mobility, on the order of 10⁵ cm²V⁻¹s⁻¹ at low temperatures 28,29 . The field-effect carrier mobility of few-layer BP is measured to be high as well, up to 1000 cm²V⁻¹s⁻¹ for electrons 7 and 286 cm²V⁻¹s⁻¹ for holes 8 at room temperature. Also, few-layer BP exhibits ambipolar behavior with drain current modulation up to 10⁵ [7]. Owing to the direct gap and high mobility, there is a high potential for BP thin crystals to be a new 2D material for applications in optoelectronics, nanoelectronic devices and so on [30][31][32][33][34][35][36] . So far, there have been few experimental reports on the mobility of monolayer BP. Theoretical studies based on effective mass calculations 25 have shown that the room-temperature electron mobility of monolayer BP is over 2000 cm²V⁻¹s⁻¹ [11] or 5000 cm²V⁻¹s⁻¹ [19]. However, that method relies on the parabolic properties of the energy bands. In this paper, both electron and hole mobilities are investigated with the Boltzmann transport equation (BTE) method beyond the effective mass approximation. Furthermore, besides monolayer BP (marked as α-P), three other stable 2D phosphorus allotropes predicted theoretically 20,21 , namely β-P, γ-P and δ-P, have also been investigated. We therefore report the first theoretical prediction of the charge mobility of these 2D phosphorus allotropes in this work. Results The atomic structures of the four different types of phosphorus sheets are shown in Fig. 1. In order to get an intuitive demonstration of carrier conduction along the armchair and zigzag directions, an orthogonal supercell covered by a green shadow is used in Fig. 1.
The lattice lengths are shown in Table 1 and are in agreement with previous studies. There are four phosphorus (P) atoms in the supercell of α, β, γ phosphorus (α-P, β-P, γ-P), and eight P atoms in the supercell of δ phosphorus (δ-P). There are two phosphorus sub-layers in each phosphorus sheet. The distance between the two P sub-layers (d) is shown in Table 1. The energy per atom indicates that the α-P sheet is the most stable. The average P bond length is about 2.23 ~ 2.27 Å. Energy band structures and Fermi surface heat maps of the phosphorus sheets are shown in Fig. 2.
Figure 2. Energy band structures, valence band edge surface heat maps and conduction band edge surface heat maps of α-P, β-P, γ-P and δ-P. Red arrows mark the direct gaps at the Γ point. The black and red lines in the band structures are calculated by PBE and HSE06, respectively. K points: Γ (0, 0, 0), Μ (0.5, 0, 0), Ν (0, 0.5, 0), Τ (0.5, 0.5, 0).
All four types of phosphorus sheets are semiconductors. For α-P, β-P, γ-P and δ-P, as shown in Table 2, the energy band gaps based on the PBE (HSE06) calculation are 0.91 (1.70), 1.93 (2.64), 0.42 (1.03) and 0.10 (0.78) eV, respectively. The energy band structures and band gaps are consistent with those reported in previous studies 21 . Our calculations indicate that only δ-P is a direct gap semiconductor. The other three sheets are indirect semiconductors. These results are in good agreement with previous studies 20,21 . Interestingly, when we zoom in on the energy band spectrum around the Γ point in α-P, as shown in the inset of Fig. 2, we find that the top of the valence band is located at the (0, 0.035) k point (skewed slightly along the zigzag direction) and is about 0.75 (0.53) meV higher than the Γ point based on the PBE (HSE06) calculation. This tiny shift of the valence band top away from the Γ point is in good agreement with Ref. 37 and has also been demonstrated in α-P zigzag nanoribbons 16 . The optical characteristics of α-P sheets should be influenced only slightly by such a tiny shift. Based on the band structures, we calculate the effective mass of the charge carriers by parabolic fitting near the Fermi surface, as presented in Table 3. It can be found that most values of |m*| are smaller than the free-electron mass (me), which means that the phosphorus sheets have considerably high carrier mobility. Our results show that the m* values for electrons and holes in α-P are 0.1382 and 1.2366 me, respectively, which are in good agreement with Yang's report 11 . Furthermore, it is clearly seen that the |m*| of electrons or holes along the armchair direction is over an order of magnitude smaller than that along the zigzag direction. This indicates that the carrier transport is anisotropic and that the armchair direction is the main transport direction in α-P. The case in β-P is the opposite. The |m*| of electrons or holes along the zigzag direction is three times larger than that along the armchair direction, which means that the carrier transport ability is stronger along the armchair than along the zigzag direction in β-P. It is easy to see that the |m*| of holes along the zigzag direction in α-P and γ-P is much larger than the others, which results from the almost flat valence band in those materials.
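To illustrate the parabolic fitting used above for the effective masses, the following minimal sketch fits a quadratic to band-edge energies and converts the curvature into |m*| via m* = ħ²/(d²E/dk²). The E(k) samples here are invented (a perfect parabola), not the data behind Table 3.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

def effective_mass(k, E_eV):
    """|m*| from a parabolic fit E(k) = E0 + hbar^2 k^2 / (2 m*) near a band edge.

    k    : wave vectors near the edge, in 1/m
    E_eV : band energies at those k points, in eV
    Returns |m*| in units of the free-electron mass.
    """
    a2 = np.polyfit(k, np.asarray(E_eV) * eV, 2)[0]   # coefficient of k^2, in J*m^2
    return abs(hbar**2 / (2.0 * a2)) / m_e

# Invented samples of a conduction band edge corresponding to m* = 0.14 m_e:
k = np.linspace(-5e8, 5e8, 11)                        # 1/m
E = hbar**2 * k**2 / (2 * 0.14 * m_e) / eV            # eV
print(f"|m*| = {effective_mass(k, E):.3f} m_e")       # -> 0.140, cf. 0.1382 m_e for α-P electrons
```

In practice the fit window matters: only points close enough to the band edge for the parabolic approximation to hold should be included.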
The variation of the total energy (E) with uniaxial strain (δ) applied along the armchair and zigzag directions is shown in Fig. 3. Based on those energy-strain curves, the in-plane stretching modulus C2D can be obtained. In α-P sheets, we also find that C2D is obviously anisotropic: it is about four times larger along the zigzag direction (103.278 N/m) than along the armchair direction (24.255 N/m). These values are in good agreement with Qiao's report (101.60 and 28.94 N/m) 25 . In general, the three-dimensional Young's modulus can be estimated as C3D = C2D/t0. Based on the optB86b van der Waals functional, the interlayer separations of α-P, β-P, γ-P and δ-P have been calculated as 5.30, 4.20, 4.21 and 5.47 Å 21 , respectively. By assuming a finite thickness (t0 = 5.30, 4.20, 4.21 and 5.47 Å) for the α-P, β-P, γ-P and δ-P sheets, the Young's moduli along the armchair and zigzag directions are obtained as shown in Table 4. A previous theoretical study has given the Young's modulus of the monolayer α-P sheet as 44 GPa (armchair direction) and 166 GPa (zigzag direction) 31 . Figure 4 shows the shifts of the band edges as a function of strain along the armchair and zigzag directions. Through dilating the lattice along the armchair and zigzag directions, the DP constant E1 is then calculated as dEedge/dδ, equivalent to the slope of the fitting lines, where Eedge is the energy of the conduction (valence) band edge. Each line is fitted with 11 points. The E1 values of the phosphorus sheets are shown in Table 5. The standard error of all E1 values is smaller than 1%, excluding the three values marked in Table 5.
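The two fits described above can be sketched as follows. The definition C2D = (1/S0) d²E/dδ² is the commonly used one and is assumed here; the strain points, total energies, supercell area and band-edge energies would come from the DFT calculations. The quick check at the end only reuses the α-P C2D values quoted above with an assumed thickness t0 = 5.30 Å.

```python
import numpy as np

def c2d_from_energy_strain(delta, E_tot_J, area0_m2):
    """In-plane stretching modulus from a quadratic fit of total energy vs uniaxial strain.
    Assumes the common definition C2D = (1/S0) * d^2E/d(delta)^2, with S0 the equilibrium
    in-plane area of the supercell; result in N/m."""
    a2 = np.polyfit(delta, E_tot_J, 2)[0]       # coefficient of delta^2, in J
    return 2.0 * a2 / area0_m2

def e1_from_band_edge(delta, E_edge_eV):
    """Deformation potential constant E1 = dE_edge/d(delta): slope of a linear fit, in eV."""
    return np.polyfit(delta, E_edge_eV, 1)[0]

# Quick check of the Young's modulus estimate C3D = C2D / t0 with the α-P numbers above:
t0 = 5.30e-10                                    # assumed layer thickness, m
for label, C2D in [("armchair", 24.255), ("zigzag", 103.278)]:   # N/m
    print(f"{label}: C3D ≈ {C2D / t0 / 1e9:.0f} GPa")
# -> about 46 GPa (armchair) and 195 GPa (zigzag), the same order as the values cited above
```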
On the basis of our energy band spectrum, we calculated E1 and C2D, the acoustic phonon-limited mobility (using Eq. 1) and the relaxation time (using Eq. 2) at room temperature (300 K). The results are shown in Table 5. It can be seen that the electron relaxation time (τe) in the phosphorus sheets is much longer than the hole relaxation time (τh), except in β-P. The electron mobilities of the α-P, β-P, γ-P and δ-P sheets are about 1.1 × 10⁴, 4.7 × 10², 2.9 × 10⁵ and 3.0 × 10³ cm²V⁻¹s⁻¹, respectively. The corresponding τe values are about 1.29, 0.09, 70.73 and 0.61 ps. The hole mobilities of the α-P, β-P, γ-P and δ-P sheets are about 2.0 × 10², 1.7 × 10³, 7.3 × 10¹ and 5.9 × 10² cm²V⁻¹s⁻¹, respectively. The corresponding τh values are about 0.02, 0.86, 0.24 and 0.09 ps. It can be found that all four phases have higher mobility than the MoS2 monolayer sheet (hole mobility 86 cm²V⁻¹s⁻¹ and electron mobility 44 cm²V⁻¹s⁻¹) 38 . Due to the very small conduction band deformation potential (0.187 eV), the electron mobility along the zigzag direction in the γ-P sheet is as high as ~3 × 10⁵ cm²/Vs, which is of the same order of magnitude as that in graphene 42,47 , silicene 39 and germanene 40 . The minimum is the electron mobility along the zigzag direction in the β-P sheet, which is about 47.32 cm²/Vs. The electron carriers move faster than the holes in the α-P, γ-P and δ-P sheets. Only in the β-P sheet is the hole mobility higher than the electron mobility. The γ-P sheet has the best electron transport capacity and the biggest difference between the electron and hole mobilities among the four sheets. Moreover, an obvious anisotropy in the carrier mobility can be found. The charge carriers move faster along the armchair direction than along the zigzag direction in the α-P, β-P and δ-P sheets, while in the γ-P sheet the zigzag direction is preferred. Discussion It must be noted that the mobility in our calculation is a theoretical value. Only the acoustic phonon scattering mechanism is considered. Actually, there are inevitably impurities and defects in the vast majority of materials, and they have a great influence on the charge transport properties, especially at low temperatures where phonons have little effect 47 . For example, in a MoS2 sheet, owing to scattering from charged impurities, the mobility at low temperatures decreases significantly with temperature 41 . So the mobility of phosphorus sheets measured experimentally can be much smaller than theoretically predicted. The band-decomposed charge density around the Fermi level of the phosphorus sheets is shown in Fig. 5. The compositions of the top valence band and the bottom conduction band are shown in Table 6. Atomic orbital analysis shows that the top valence states in the phosphorus sheets are mainly composed of 3pz orbitals. Our calculations indicate that, in α-P, the conduction bands are mainly composed of pz orbitals mixed with s and px orbitals. In β-P, the conduction bands are hybridized orbitals with mixed s and p character. In γ-P, the conduction bands are mainly composed of py and pz orbitals. In δ-P, the conduction bands are mainly composed of pz and s orbitals. In α-P, β-P and γ-P, owing to the valence bands being partly composed of in-plane p orbitals, the top of the valence band is shifted slightly away from the Γ point. Because the main charge density of the valence band is distributed along the armchair direction, the hole carriers move faster along the armchair direction than along the zigzag direction in α-P, β-P and δ-P. In γ-P, by contrast, owing to the contributions of s and pz orbitals, the valence band charge density is distributed along the zigzag direction (Fig. 5c), so the hole mobility in γ-P is slightly higher along the zigzag direction than along the armchair direction. For the conduction band, the charge density is distributed along the armchair direction, as shown in Fig. 5e-g. This is consistent with the electron mobility, except for γ-P. The orbital analysis shows that the proportion of the px orbital is larger than that of the py orbital, except in γ-P and δ-P. Due to the contribution of px orbitals, the electron mobility is higher along the armchair direction than along the zigzag direction in α-P and β-P. In γ-P, the contribution of py orbitals is over 50%, and at the same time the deformation potential is much lower, so the electron mobility along the zigzag direction is surprisingly high in γ-P.
Table 5. The in-plane elastic constant (C2D), deformation potential (E1), electron relaxation time (τe), hole relaxation time (τh), electron mobility (μe) and hole mobility (μh) in phosphorus sheets. The temperature is 300 K. The standard error is a 1.01%, b 1.78%, c 8.06%, and smaller than 1% for the others.
Figure 5. Band-decomposed charge density of the phosphorus sheets: (a)-(d) show the valence band edge for α-P, β-P, γ-P and δ-P, respectively; (e)-(h) show the conduction band edge for α-P, β-P, γ-P and δ-P, respectively. The isosurface value is 0.01. Drawings are produced with the VESTA software 42 .
Conclusions In summary, we have calculated the electronic structures and the intrinsic charge carrier mobility of four types of phosphorus sheets (α-P, β-P, γ-P and δ-P), using first-principles density functional theory and the BTE with the relaxation time approximation. We find that α-P, β-P and γ-P are indirect gap semiconductors.
The numerical results indicate that the electron mobilities of the α-P, γ-P and δ-P sheets at room temperature (about 1.107 × 10⁴, 2.895 × 10⁵ and 3.022 × 10³ cm²V⁻¹s⁻¹, respectively) are much higher than the corresponding hole mobilities (about 204.288, 72.645 and 586.339 cm²V⁻¹s⁻¹, respectively). Nevertheless, in the β-P sheet, the hole mobility (1.711 × 10³ cm²V⁻¹s⁻¹) is about four times the electron mobility (466.262 cm²V⁻¹s⁻¹). Owing to the huge difference between the hole and electron mobilities, the α-P, γ-P and δ-P sheets can be considered as n-type semiconductors, and the β-P sheet can be considered as a p-type semiconductor. All four types of phosphorus sheets present anisotropy in the carrier mobility. Charge carriers move faster along the armchair direction than along the zigzag direction in the α-P, β-P and δ-P sheets. In the γ-P sheet, however, the more favorable charge transport direction is along the zigzag direction. Methods In this paper, the carrier mobility is calculated by the BTE method beyond the effective mass approximation, which is used to predict the mobility of semiconductor nanomaterials such as graphene, carbon nanotubes and so on 2,43-48 . Within the BTE method, the carrier mobility μ in the relaxation time approximation can be expressed as in Eq. (1). In order to obtain the mobility, three key quantities (the relaxation time τi(k), the band energy εi(k) and the group velocity υi(k)) must be determined. The coherent wavelength of thermally activated electrons or holes at room temperature in inorganic semiconductors, which is much larger than their lattice constant, is close to that of the acoustic phonon modes in the center of the first BZ. The electron-acoustic phonon coupling can therefore be effectively calculated by the deformation potential (DP) theory proposed by Bardeen and Shockley, which gives the relaxation time of Eq. (2). Here the delta function denotes that the scattering process is elastic and occurs between band states with the same band index. E1 is the DP constant of the i-th band, and C is the elastic constant. The band energy εi(k) is calculated with the Vienna ab-initio simulation package (VASP) 51 . The k-mesh is chosen as 11 × 11 × 1 for the electronic structure calculations and 61 × 61 × 1 for the band eigenvalue calculations, which is fine enough to give a converged relaxation time and mobility. The generalized gradient approximation (GGA) 52 with the Perdew-Burke-Ernzerhof (PBE) 53 exchange-correlation functional is used, with the plane-wave cutoff energy set at 600 eV for all calculations. The convergence criteria are that the residual forces are less than 0.001 eV/Å and that the change of the total energy is less than 10⁻⁷ eV. The vacuum space between two adjacent sheets is set to at least 15 Å to eliminate interactions between them. The group velocity of the electron and hole carriers is obtained from the gradient of the band energy εi(k).
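Since Eq. (1) and Eq. (2) themselves are not reproduced in this extract, the sketch below uses the widely quoted effective-mass form of the 2D deformation-potential mobility, μ = eħ³C2D/(kB T m* md E1²), together with τ = μ m*/e. This is a simplification of the full BTE treatment described above and is intended only for order-of-magnitude checks; the value E1 = 2.5 eV is an assumed illustrative number, not one taken from Table 5.

```python
e = 1.602176634e-19          # C
hbar = 1.054571817e-34       # J*s
kB = 1.380649e-23            # J/K
m_e = 9.1093837015e-31       # kg

def mobility_2d_dp(C2D, E1_eV, m_star, m_d, T=300.0):
    """Effective-mass form of the 2D deformation-potential (Bardeen-Shockley) mobility:
    mu = e*hbar^3*C2D / (kB*T * m_star * m_d * E1^2), returned in cm^2 V^-1 s^-1.
    m_star: transport effective mass, m_d: density-of-states mass (both in kg)."""
    E1 = E1_eV * e
    mu = e * hbar**3 * C2D / (kB * T * m_star * m_d * E1**2)   # m^2/(V*s)
    return mu * 1e4

def relaxation_time_ps(mu_cm2, m_star):
    """tau = mu * m_star / e, converted to picoseconds (mu in cm^2/(V*s))."""
    return mu_cm2 * 1e-4 * m_star / e * 1e12

# Order-of-magnitude check for the alpha-P electron, with an assumed E1 of 2.5 eV:
m_star = 0.1382 * m_e
mu = mobility_2d_dp(C2D=24.255, E1_eV=2.5, m_star=m_star, m_d=m_star)
print(f"mu  ~ {mu:.0f} cm^2/(V s)")                    # a few thousand
print(f"tau ~ {relaxation_time_ps(mu, m_star):.2f} ps")  # a few tenths of a ps
```

With these illustrative inputs the estimate lands within an order of magnitude of the α-P electron values quoted above (1.1 × 10⁴ cm²V⁻¹s⁻¹ and 1.29 ps); the paper's BTE approach goes beyond this simple formula by using the full band structure rather than a single effective mass.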
4,216.8
2015-06-02T00:00:00.000
[ "Physics" ]
A Survey of Agent-Based Modelling and Simulation Tools for Educational Purpose Simulation is an experimentation with the imitation or model of the observed system, and observation of its behaviour over time, with the purpose of better understanding and/or improving the system. It is often used in situations where research cannot include the real system because of its inaccessibility, dangerous or unacceptable involvement, the fact that the system is designed but not built yet, in situations where the system is abstract or simply does not exist. Elements of these systems can be implemented as software entities which can percept their environment and autonomously react to the stimulation, i.e. intelligent agents. These tools, which allow the research of complex natural, social and technical phenomena and systems, are called agent-based modelling and simulation tools. This paper presents a review of some of these modern computer tools that can be successfully implemented in the teaching process. INTRODUCTION The rapid development of information and communication technology (ICT) in the last few decades gave the opportunity to a wider scientific and professional community to approach the problems in previously unimaginable ways. Many natural, social and technical phenomena and systems are difficult to access or are completely inaccessible. They can be dangerous for observation and exploration or abstract and invisible to the human eye. By applying modern computing technologies, it is possible to model and simulate such phenomena and systems and manipulate them so that the system can be analysed and strategies for its functioning evaluated. Elements of these systems can be implemented as software entities which can percept their environment and autonomously react to the stimulation. This description largely coincides with the standard definition of an intelligent agent. Therefore, this paper deals with agentbased modelling and simulation (ABMS) tools. This approach can significantly contribute to the understanding of complex phenomena from different areas of life. In doing so, different groups of users emphasize the different aspects of used software tools. Social scientists, without programming experience, want a simple-to-use tool with an intuitive interface. On the other hand, computer scientists are concerned with the type of license and the code openness that will enable them to manipulate the tool. Teachers, who use modelling and simulation tools in teaching, want easy-to-learn packages that enable students to transfer knowledge and experience to realworld situations [32]. Over the years, many different tools have been developed for different purposes and areas of application. The primary purpose of this paper is to provide a comprehensive overview of the segment of these software tools that can be successfully applied in teaching. This paper begins by defining the concepts in the field of modelling and simulation in Section 2. In Section 3, we continue with pointing out the possibility and good practice of applying simulation in the teaching process. After that, in Section 4 we brought the basic facts of the agent paradigm. Section 5 reports some related past surveys while in Section 6 we give an overview of the state-of-theart in ABMS simulation tools for educational purpose. Finally, in Section 7, the implications of this research and concluding remarks are presented. SIMULATION In everyday life, we often encounter the concept of simulation. 
Meteorologists regularly display simulations of weather in the days to come. Children, and sometimes adults, play with physical models of cars and roads, trains and railways, ships and, more recently, aircrafts that, remotely controlled, simulate the behaviour of real vehicles. Numerous computer applications simulate a wide range of human activities, which enables us to test our skills as drivers, researchers, urbanists, etc. From a historical point of view, the simulations were largely independently developed and used in different areas of human work and life. However, the researches in cybernetics and system theory in the 20th century, combined with the ever-increasing use of computers in all these areas led to a certain unification and systematic presentation of the concept of simulation. Simulation is today an increasingly present methodology for solving problems from different areas of human activity. It is used to describe and analyse the behaviour of the system, to detect the causalities within the system, as well as to assist in the design of realistic systems, both existing and conceptual. In its most general sense, the simulation can be defined as an imitation of the system. The imitation represents the attempt to replicate or copy something else. For example, a forger tries to imitate the work of a great artist or attempts to copy the looks of official money bills. For an athlete who behaves as if the foul was committed on him, though it was not the case, we say that he simulates. Computer-aided design (CAD) systems allow imitation of designs of physical artefacts or entire production facilities. However, the key difference between these imitations and the examples described above is that they do not include the time lapse. Therefore, there is a difference between the concept of static simulation, that imitates the system at a given moment, and the concept of dynamic simulation, which includes the time lapse [1]. The term simulation is mainly used in the context of dynamic simulation and can be defined as the imitation of the process or real-world system's function over time [2]. Simulation of the system requires the development of a system model. The system model represents the key characteristics, behaviours and functions of the selected physical or abstract system or process. The model, therefore, represents the system itself, while the simulation represents the functioning of the system over time. Another important aspect of the definition is the purpose of the simulation model. In his general discussion of models, Pidd identifies the need for understanding, changing, managing and controlling reality as the purpose for building the model [3]. Simulation models, specifically, should provide a better understanding of the system and detect possible system improvements. A better understanding of the system as well as identification of possible improvements, are important for future decisionmaking in the real system. Another important feature of the model is the emphasis on its simplification. The system model rarely includes all aspects of it, but it concentrates on those that are crucial for the functioning of the system. Even if it is possible to capture all the details, it is probably not desirable because it would take too much time to collect and engage all aspects of the system in the model. When choosing simulations as a method of system modelling, the purpose of modelling needs to be taken into account. 
The simulation represents an experimental approach to modelling in which user simulates changes of system inputs, explores alternative scenarios, and observes the behaviour of the model until it gains enough understanding of its work and identifies how it can improve the real system. For example, it is possible to foresee the average waiting time for telephone users in the call centre when a certain number of operators are busy, and the task of the person using the simulation model is to change the input (operator number) and observe the simulation model to determine the effect of the change. In addition to the described essential aspects of simulation, it is possible to define simulation more precisely as experimentation with simulated (computer) imitation of the observed system and observation of its behaviour over time, with the purpose of better understanding and/or improving the system. Simulation is often used in situations where research cannot include the real system because of its inaccessibility, dangerous or unacceptable involvement, the fact that the system is designed but not built yet, in situations where the system is abstract or simply does not exist [4]. COMPUTER SIMULATION IN EDUCATION Today, computers are used as an integral part of many classrooms and laboratories, as a teaching and learning tool for real or virtual investigations in various subjects. These applications include using computers to facilitate data acquisition, to provide real-time data display, to analyse these data, and to simulate complex phenomena. Several studies show that this approach is as effective as, or even more effective than its non-computer-based counterparts [5][6][7][8][9]. These studies proved that computer simulations could be a learning tool as productive as hands-on equipment, given the same curricula and educational setting [10]. Students who used computer simulations instead of real equipment performed better on conceptual questions and developed a greater competency in handling real physical artefacts [11]. As mentioned before, simulation can be effectively and efficiently used in describing systems which are inaccessible or dangerous, systems that normally do not accept students' involvement, abstract systems or systems which do not (yet) exist. These systems can bring particular benefits to the process of learning and teaching. With simulation environments, we are enabled to have our students vary the force of gravity, explore nuclear fission at the molecular level, move tectonic plates while investigating the differences between divergent and convergent boundaries. Computer simulations give us these interactive, authentic and meaningful learning opportunities. One advantage of a computer-based simulation is the ability to make normally unobservable occurrences plainly visible for the student [12]. Students and researchers can observe, explore, recreate, and receive immediate feedback about real objects, phenomena, and processes that would otherwise be too complex, timeconsuming, or dangerous [13]. In a simulated environment, time changes can be speeded up or slowed down; abstract concepts can be made concrete and tacit behaviours visible. Teachers can focus students' attention on learning objectives when real-world environments are simplified, a causality of events is clearly explained, and unnecessary cognitive tasks are reduced through a simulation. 
Otherwise, when students have many choices within simulations, it represents the potential distraction [14], and too many details in the visualisation of the system hamper the ability to notice relevant details [15]. Another useful feature of simulation is the possibility to employ perceptual cues that would direct learner's attention to critical features and events that are functionally invisible [16]. Simulations provide opportunities for immersive understanding and adaptive exploration of the diverse realworld and constructed environments, thus affording a wide range of exploration opportunities, ranging from the scientific to the social and artistic. [17] For all the reasons mentioned, simulation has recently become one of the most popular instructional tools for delivering quality teaching. When using realistic simulations to study different phenomena, students are encouraged to apply acquired skills and motivated for advanced learning [18][19][20]. The experience of working with the simulation environment positively affects the correct adoption of the concepts, and successful application of the learned material [21]. The study conducted with high school students in the United Kingdom, who used the simulation in the classroom, demonstrated that this approach has inspired participants to express scientific thinking and approach to analysis and evaluation of the data collected [22]. The results of these and similar researches point out the great potential of applying technology to teaching and its impact on students' achievements [23][24]. Research has shown that participation in technically enriched teaching can positively influence the attitude of students towards science and encourage them to choose STEM-oriented careers. Such high-tech, computer-assisted cooperative simulation of a real-life situation helped to trigger application learning as well as professional identity/awareness/interest/construction [25]. INTELLIGENT AGENTS AND MULTI-AGENT SYSTEMS An agent is a physical and/or virtual system that perceives its environment by various sensors and acts on it through its actuators. Agents work in their environment, and can include people, robots, computer programs, but also some simple gadgets. The software agent is a persistent, goal-oriented computer program that responds to its environment, works without continuous direct surveillance and performs certain functions for the end user or other computer programs in a particular environment. Such independent action implies the power to choose a suitable action in a particular situation if it exists. The agent is inhibited by other processes and agents, but also can be able to learn from its experience of working in the environment over a long period. Software agents can be independent or work in collaboration with other agents or humans. In interaction with humans, software agents can have qualities characteristic for humans, such as the understanding of natural language and speech, personality, or humanoid embodiment. Software agents represent an evolutionary step in comparison to conventional computer programs. They can be self-activated and run and they do not require input or interaction with a human user. Software agents can also start, monitor, and shut down other programs or agents. All agents are computer programs, but not all programs are agents. Key features that differentiate agents from arbitrary programs are, according to Franklin & Graesser [26], reaction to the environment, autonomy, goal-orientation and persistence. 
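To make these features concrete, the following minimal sketch (plain Python, with an invented thermostat-like environment) shows an agent of the simple reflex type discussed below: it repeatedly perceives its environment through a single "sensor", autonomously chooses an action from condition-action rules, and acts on the environment through an "actuator". All thresholds and dynamics are illustrative assumptions.

```python
import random

class SimpleReflexAgent:
    """A minimal software agent: it maps each percept directly to an action
    (a condition-action rule), ignoring the history of percepts."""

    def perceive(self, environment):
        return environment["temperature"]            # the agent's only sensor

    def act(self, percept):
        if percept > 22.0:
            return "cool"
        if percept < 18.0:
            return "heat"
        return "idle"

def run(steps=10):
    env = {"temperature": 25.0}
    agent = SimpleReflexAgent()
    for t in range(steps):                           # persistence: the agent keeps running
        action = agent.act(agent.perceive(env))      # autonomous choice, no user input
        if action == "cool":                         # actuator changes the environment
            env["temperature"] -= 1.0
        elif action == "heat":
            env["temperature"] += 1.0
        env["temperature"] += random.uniform(-0.5, 0.5)   # the environment also drifts on its own
        print(f"t={t}: T={env['temperature']:.1f}  action={action}")

run()
```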
Related and derived terms are:
- intelligent agents, which express a certain aspect of artificial intelligence, e.g. learning or reasoning;
- autonomous agents, which can modify the way they achieve their goals;
- distributed agents, which run on physically different computers;
- multi-agent systems, i.e. groups of distributed agents working together to achieve a goal that an individual agent acting alone cannot achieve.
Thus, an agent is an entity that we can define through its perception of the environment in which it is located, by means of built-in sensors, and its acting in and on that environment through actuators. The agent's perception sequence is the complete history of everything that the agent has ever perceived. The agent's choice of action at any time may depend only on the last perception or on the entire perception sequence so far. In mathematical terms, we can say that agent behaviour is described by an agent function that maps every perception into the corresponding action. Russell & Norvig listed five basic types of agents characterized by the degree of expressed intelligent behaviour and ability [27]:
- Simple Reflex Agents, which select an action on the basis of the current observation, ignoring the history of observations;
- Model-Based Reflex Agents, which select an action based on the history of observations;
- Goal-Based Agents, which choose an action to achieve a specific goal;
- Utility-Based Agents, which select the action that will achieve the maximum degree of success;
- all of these agents can also be Learning Agents, which analyse their experience to choose an action.
Systems in which intelligent agents work, as well as multi-agent systems, are examples of abstract systems. The study of these systems is hardly feasible at a purely theoretical level. A particular difficulty is predicting system behaviour in the different situations that the intelligent agent, or several agents in a multi-agent system, perceives through its sensors and responds to according to the defined causal rules. As such, these systems are very suitable for study and research using simulation models. Over the last few years, with the growing interest in using ABMS tools, numerous reviews of available tools have been conducted. In the next section, we give a review of relevant past surveys. RELEVANT SURVEYS The widespread use of computer agents that, with ubiquitous digitization, slowly penetrate all spheres of life has also created the need to investigate the behaviour of systems in which individual agents or groups of agents act autonomously. The number of environments that enable modelling and simulation of such agent-based systems is constantly increasing. This has motivated numerous authors to conduct screenings of the available tools, focusing on different aspects of system modelling and simulation. In an extensive search of relevant databases such as IEEE Xplore, Google Scholar, Scopus, ScienceDirect, and Semantic Scholar, using the terms "agent-based modelling and simulation", "agent-based simulation environment", "agent toolkit", "state-of-the-art", "comparison", "review" and "survey", we have identified seven surveys that have been conducted in the last 20 years, a period of growing interest in research on the agent paradigm. We will list those surveys chronologically. In their review, Serenko & Detlor [28] explore tools for modelling and simulation of agent-based systems based on their application as pedagogical tools for teaching purposes.
They have classified 20 available tools by looking at four features: the ability to create mobile agents, the ability to develop a multi-agent system, the ability to create different types of agents for different purposes, and the ability to retrieve information. They also explored the basic programming language used in these tools. The authors conducted a user satisfaction survey about tool functionality, performance, and interaction with the user, involving 87 teachers. Our work in this paper is similar in that we are trying to explore tools that can be successfully used in the teaching process. In 2004, Tobias & Hofmann [29] discussed four open code tools: Repast, Swarm, Quicksilver and VSEit, and evaluated them based on 19 characteristics, including general criteria, modelling and experimentation, modelling support, and modelling options. They ranked the tools by awarding points according to each criterion. The main limitation of this review is the fact that authors are limited to tools that use Java as the main programming language, and are mostly used by the social sciences community. The survey conducted by Railsback, Lytinen & Jackson, published in 2006 [30], compares four major platforms: NetLogo, Mason, Repast and Swarm. The authors have created a template, called StupidModel, which they used to evaluate and compare these tools on multiple levels. With each new level, they added more features and, through a total of 15 levels, authors investigated different features such as environmental issues, model structure, agent scheduling, file entry and output, random number generation, and statistical capabilities. The main limitation of this review is a small number of tested platforms and their similar nature. In this paper, we will expand the number of described tools, avoid their ranking and allow the user to choose which tool is most appropriate for his project. In the next major survey, conducted in 2006, Castle & Crooks [31] investigated eight simulation platforms. Particularly they focus on the assessment of their geospatial capabilities. They also paid attention to the date of creation, the implementation language, required programming experience and the availability of demonstration models and tutorials. This review also compares similar tools specialized for use in social science researches. Significantly increasing the number of observed simulation tools, Nikolai & Madey [32] examine their five characteristics. With a total of 53 tools, they compared the programming language required to create a model or simulation, the operating system required to run the tool, the type of license, the primary domain for which the tool is intended, and the degree of support available to the user of the tool. Their intent was to provide users with enough information to choose the most suitable tool in the form of an easy-to-use compendium. Based on personal experience and available information, Allen [33] recalled the significant features of existing tools. The author has divided the tools into the simulation environments for individual intelligent agents (31 tools) and those designed to simulate multi-agent systems (13 tools). It points to important shortcomings such as: difficult of use; insufficient tools for building, especially tools for representing space; insufficient tools for executing and observing simulation experiments; and the lack of tools for documenting and communicating software. 
This survey also lists general application areas: Biology and Medicine, Physics and Chemistry, Security and Cyber Security, Environment, Social and Economic Modelling, and Supply Network and Transport Optimisation. In the most comprehensive survey of ABMS tools, Abar et al. [34] reviewed 85 existing tools. The authors present a concise characterization of existing tools, including the underlying programming language, the type of agent that the system supports, the complexity of the end user system, and the domain for which the system is primarily intended, highlighting their advantages and disadvantages. In this way, they provide a useful reference for engineers, researchers, students and teaching staff for choosing the appropriate ABMS tool, when designing and developing their models and prototype systems. Referring to most of the above-mentioned reviews, Wikipedia [35] provides another updated overview of available tools, including their primary domain, institution developing them, programming language and licensing, user documentation, 3D and GIS capabilities, and compliance with applicable standards. A brief overview of the above-mentioned surveys is given in Tab. 1. The above-mentioned researches provide useful information on a wide range of ABMS tools. They also give an insight into the period of active use of certain tools, which also indicates their objective quality. In this review, we will pay special attention to tools that can be successfully used for teaching purposes, when students are introduced to abstract concepts of the agent paradigm. SURVEY OF ABMS TOOLS IN EDUCATION As a basis for the selection of ABMS tools for this survey, we took the latest and most comprehensive surveys mentioned earlier [34,35], as well as some tools that are not covered by these surveys, but seem interesting and very applicable. From the wide range of available software platforms, we have selected those tools that list education as a possible field of application. Guided by the goal that this survey can be used as an ABMS tool catalogue, and knowing the limitations of using software tools for teaching purposes, we will list the basic characteristics of each tool and classify them by several categories that, by our experience, play an important role in choosing the teaching and learning software. One of the categories is a type of software tool license. This feature is of great importance due to the financial aspect of organizing the instructions. Other important categories are the programming language for modelling and simulation of the system, as well as the development environment of the tool. Getting acquainted with the programming language with which students did not have any contact until then would be timeconsuming. A complicated development environment would also require too much time. In this way, there would not be enough time for a quality teaching of the agent paradigm, which would be the focus of attention for this kind of instruction practice. We also paid attention to the platform on which ABMS tools are running, as well as the required operating system. These characteristics represent another significant item from the organizational point of view. As the last characteristic, for each of the tools, we will indicate the level of complexity of the model development. This feature also has a significant impact on the liability to engage some ABMS tool in teaching and learning process. 
We are aware that the selection of listed characteristics is, at the same time, a limitation of this survey. Emphasis on the general characteristics of this group of software tools has potentially omitted the more subtle features indigenous to each package. Sometimes, in research of specific domains and with specific modelling requirements, these particular characteristics can be crucial for selecting a particular tool. However, we hope that this survey will serve as a starting point for educators who want to include some of the ABMS tools in their teaching process. Before the brief overview of the selected AMBS tools' characteristics, we will give some basic information for each of them. The tools we choose on the basis for their applicability as a teaching tool are: -AgentScript is a minimalist system-modelling framework, based on agents. This tool belongs to a group of tools based on NetLogo agents' semantics. Its goal is to promote an agent-oriented programming model in a deployable CoffeeScript/JavaScript implementation. -AgentSheets is an authoring tool that empowers casual computer users with no formal programming training to build and publish web-based interactive simulation. This tool combines Java authoring, enduser programmable agents and spreadsheet technology. -BehaviourComposer is a web-based tool for constructing, running, visualising, analysing, and sharing agent-based models. These models can be constructed by non-experts by composing pre-built modular components called microbehaviours, small, coherent, and independent program fragments. -Cellular is a block-based programming environment for creating agent-based simulations. Different types of agents can be created, and their behaviour controlled by constructing scripts using a drag-and-drop interface. -ExtendSim is a simulation environment for modelling continuous, discrete event, discrete rate, and agentbased systems. Its design facilitates every phase of the simulation project, from creating, validating, and verifying the model, to the construction of a user interface. An integrated database enables faster and handling with simulation models. -Framsticks is a system built to support a wide range of experiments, and to provide all of its functionality to users, who may use this open system in a variety of ways. This tool tries to fill the gap between advanced artificial life models with their consequences, and advanced simulation tools with their realism. -JAS-mine is a Java-based computational platform that features tools to support the development of largescale, data-driven, discrete-event simulations. The platform is specifically designed for both agent-based and microsimulation modelling, anticipating a convergence between the two approaches. -MIMOSE is a modelling and simulation software system that consists of a model description language and an experimental frame for simulation of the described models. The purpose of MIMOSE project was the development of a modelling language that considers special demands of modelling in social science and supports the creation of structured, homogeneous simulation models. -MOBIDYC is a system that helps the user in the four main steps involved in the construction and the use of models for population dynamics: defining biological entities, defining their environment, launching and controlling simulations, editing results. It includes tools for every aspect of agent functioning. 
-NetLogo is a multi-agent programming language and modelling environment for simulating natural and social phenomena. It is particularly well suited for modelling complex systems evolving over time. Despite the entry-level programming interface, NetLogo is capable of quite sophisticated modelling and allows experienced users to add their own Java extensions. -Scratch is a visual programming environment that lets users create interactive, media-rich projects. These projects include animated stories, games, music videos, science projects, tutorials, simulations, and sensor-driven art projects. -SeSAm provides a generic environment for modelling and experimenting with agent-based systems. Based on a declarative, explicit model representation and visual programming, it allows implementing models on specification level, including the easy construction of complex models. -SimSketch is an integrated drawing and modelling tool that allows students to create sketches and apply behaviours to elements of their drawing. A multi-agent simulation engine interprets and executes the model, thus building an intuitively usable and motivating learning environment and confronting the learner with the results and consequences of his externalised mental model. This survey is part of a broader research of teaching agent paradigm using simulation environments. Taking into account the above characteristics of ABMS tools, as well as the specific needs of our research, from the group of listed tools we selected Cellular and NetLogo. The main reason for this choice was the fact that students were already familiar with the programming languages of the same or very similar syntax. With this choice, of course, we do not want to claim that the chosen ABMS tools are better than other previously mentioned tools. We leave the choice of appropriate ABMS tool to each researcher individually, in accordance with his needs and the purpose of the research. CONCLUSION The increasing use of ICT in all aspects of life as well as the need to optimize different systems and processes using computer models and simulations, in which intelligent agents play a key role, justify the use of ABMS tools in education, to give future experts an insight into the possibilities of these software solutions. This paper provides an insight into the range of available modelling and simulation tools available for agents that can be used in the learning and teaching process. The paper also outlines the essential features of these tools that, according to our experience, can play a key role in choosing one of them. That is why we believe this work can serve as a reference book for the teachers at all levels of the education system when choosing the most appropriate tool for their work.
6,433
2020-06-14T00:00:00.000
[ "Education", "Computer Science" ]
Inhibition of the Notch1 pathway induces peripartum cardiomyopathy Abstract Increased expression and activity of cardiac and circulating cathepsin D and soluble fms‐like tyrosine kinase‐1 (sFlt‐1) have been demonstrated to induce and promote peripartum cardiomyopathy (PPCM) via promoting cleavage of 23‐kD prolactin (PRL) to 16‐kD PRL and neutralizing vascular endothelial growth factor (VEGF), respectively. We hypothesized that activation of Hes1 is proposed to suppress cathepsin D via activating Stat3, leading to alleviated development of PPCM. In the present study, we aimed to investigate the role of Notch1/Hes1 pathway in PPCM. Pregnant mice between prenatal 3 days and postpartum 3 weeks were fed with LY‐411575 (a notch inhibitor, 10 mg/kg/d). Ventricular function and pathology were evaluated by echocardiography and histological analysis. Western blotting analysis was used to examine the expression at the protein level. The results found that inhibition of Notch1 significantly promoted postpartum ventricular dilatation, myocardial hypertrophy and myocardial interstitial fibrosis and suppressed myocardial angiogenesis. Western blotting analysis showed that inhibition of Notch1 markedly increased cathepsin D and sFlt‐1, reduced Hes1, phosphorylated Stat3 (p‐Stat3), VEGFA and PDGFB, and promoted cleavage of 23k‐D PRL to 16‐kD PRL. Collectively, inhibition of Notch1/Hes1 pathway induced and promoted PPCM via increasing the expressions of cathepsin D and sFlt‐1. Notch1/Hes1 was a promising target for prevention and therapeutic regimen of PPCM. tively. We hypothesized that activation of Hes1 is proposed to suppress cathepsin D via activating Stat3, leading to alleviated development of PPCM. In the present study, we aimed to investigate the role of Notch1/Hes1 pathway in PPCM. Pregnant mice between prenatal 3 days and postpartum 3 weeks were fed with LY-411575 (a notch inhibitor, 10 mg/kg/d). Ventricular function and pathology were evaluated by echocardiography and histological analysis. Western blotting analysis was used to examine the expression at the protein level. The results found that inhibition of Notch1 significantly promoted postpartum ventricular dilatation, myocardial hypertrophy and myocardial interstitial fibrosis and suppressed myocardial angiogenesis. Western blotting analysis showed that inhibition of Notch1 markedly increased cathepsin D and sFlt-1, reduced Hes1, phosphorylated Stat3 (p-Stat3), VEGFA and PDGFB, and promoted cleavage of 23k-D PRL to 16-kD PRL. Collectively, inhibition of Notch1/ Hes1 pathway induced and promoted PPCM via increasing the expressions of cathepsin D and sFlt-1. Notch1/Hes1 was a promising target for prevention and therapeutic regimen of PPCM. K E Y W O R D S Cathepsin D, Notch1, PPCM, PRL, sFlt1 soluble fms-like tyrosine kinase-1 (sFlt-1) has also been demonstrated to impair cardiac capillary network through inhibiting pro-angiogenic vascular endothelial growth factor (VEGF) and placental growth factor (PIGF) activities. 3 The combination of bromocriptine (PRL inhibitor) and recombinant VEGF is a curative option for PPCM. 1,4 As a canonical target gene of Notch pathway, Hes1 has been found to regulate angiogenesis. 5,6 Our previous work has suggested that Hes1 is able to protect ischaemic myocardium via mediating the phosphorylation of Stat3. 7 Reportedly, Stat3 modulates proliferation, differentiation, survival, oxidative stress and/or metabolism in cardiomyocytes, fibroblasts, endothelial cells, progenitor cells and various inflammatory cells. 
8 Indeed, Hilfiker-Kleiner 2 and colleagues have found that activated Stat3 can inhibit cathepsin D, which then suppresses PPCM development. Furthermore, our team has also found that Notch1 can promote VEGF-mediated cardiac angiogenesis in ischaemic regions 9 and inhibit myocardial fibrosis. 10,11 Bioinformatic analysis shows that there are multiple Hes1 binding sites in the promoter region of cathepsin D and sFlt-1, indicating a potential role of Hes1 in PPCM. We hypothesized that inhibition of Notch1/Hes1 induced and promoted PPCM via increasing cathepsin D and sFlt-1. In the present study, we used LY-411575 (γ-secretase inhibitor) to suppress Notch1 pathway and decrease Hes1 expression 12 to explore the potential role of Notch1/Hes1 in PPCM. | Animal experiments Female C57BL/6J mice (6-8 weeks of age) were purchased from Slaccas Co., Ltd. All animal studies were performed at Experimental Animal Center of Nanchang University in accordance with the Guideline of US National Institutes of Health (NIH), and animal-related protocols were approved by the Institutional Committee for Use and Care of Laboratory Animals of Nanchang University. The mice with PPCM during pregnancy and breastfeeding were assigned into the peripartum group, while the nulliparous mice were used as the control group. Mice were administered by gavage with LY-411575 (10 mg/ kg/d, diluted in 0.4% methylcellulose) daily starting 3 days before delivery until 3 weeks after delivery. The blank mice were dosed with 0.4% methylcellulose vehicle. The PPCM phenotype was verified as previously described. 13 At the end of the dosing period, mice were sacrificed by CO 2 asphyxiation, total blood was collected and centrifuged at 1500 g for 10 minutes at room temperature to obtain serum. Serum samples were stored at −80°C until analysis. The heart tissues were surgically isolated for further analysis. The heart rate and body temperature were maintained and recorded. | Measurement of cathepsin D and sFlt1 in serum Cathepsin D and sFlt1 were detected using commercial ELISA kits for cathepsin D and sFlt1 according to the manufacturer's instructions. All samples were simultaneously detected. Serum concentrations of cathepsin D and sFlt1 were determined using standard curves and expressed as units per litre (mg/L). The linear ranges for cathepsin D and sFlt1 were 0-50 mg/L. | Histological analyses For histological analyses, mouse hearts were fixed in situ by retrograde perfusion with PBS (pH 7.4) containing 50 mM KCl and 200 U/ mL heparin for 2 minutes at 80 mm Hg, followed by in situ paraformaldehyde fixation. Sections were embedded in paraffin and stained with H&E and wheat germ agglutinin (WGA, Alexa Fluor 488 conjugate; Thermo Fisher). Masson (HT15-1KT; Sigma-Aldrich) staining was performed to determine collagen deposition following the manufacturer's instruction. Tissue morphometry was performed in a blinded fashion using the Quantimet 500MC digital image analyzer. | Western blotting analysis The left ventricle tissues were lysed in cell lysis buffer (Beyotime Institute of Biotechnology) at 4°C. Equal amounts of proteins were subjected to 8%-10% SDS-PAGE and then transferred onto nitrocellulose membranes (Millipore). The blots were blocked in 10% non-fat milk in TBST. Membranes were incubated with primary antibodies at 4°C overnight, followed by incubation with secondary antibodies at room temperature for 1 hour. The immunoreactive bands were visualized using ECL kit (Thermo Scientific) and analysed by ImageQuant LAS4000 (GE). 
| Quantitative real-time PCR Total RNA was extracted with TRIzol reagent (Thermo Fisher). | Statistical analysis Data were expressed as mean ± SD and analysed by SPSS 18.0 package (SPSS Inc). The obtained data conform the ANOVA assumptions as evaluated using Shapiro-Wilk normality test and Levene's test for the equality of variances. Comparisons between groups were analysed by two-way ANOVA with Bonferroni's post-test. P < .05 was considered statistically significant. | Inhibition of Notch1 induces and promotes postpartum ventricular dilatation To explore the role of Notch1 in PPCM, we randomly formed four groups (n = 6) as follows: nulliparous, peripartum, nulliparous LY-411575 and peripartum LY-411575 . Figure 1 shows that LVEDD Collectively, these data suggested that the Notch1 pathway played a protective role in PPCM, and inhibition of Notch1 could induce and promote PPCM. | Inhibition of Notch1 promotes ventricular hypertrophy and myocardial interstitial fibrosis Ventricular hypertrophy and myocardial interstitial fibrosis are remarkable pathological changes in PPCM. To confirm whether Notch1 was involved in these histopathological changes, we compared the HW/BW ( Figure 3A) and HW/TL ( Figure 3B) among different groups and evaluated histological changes of myocardium ( Figure 3C). It demonstrates that there was just interstitial fibrosis ( Figure 3D | Inhibition of Notch1 suppresses postpartum myocardial angiogenesis Myocardial angiogenic imbalance is essential for PPCM. Here, we evaluated myocardial capillary density and detected angiogenic factors among different groups. F I G U R E 3 Inhibition of Notch1 promotes myocardial hypertrophy and interstitial fibrosis in the left ventricle. A, The ration of heart weight/bodyweight, HW/BW. B, The ratio of heart weight/tibia length, HW/TL. C, Haematoxylin-eosin was used to evaluate the ventricular wall thickness and cavity. D, Masson's trichrome staining was used to evaluate the fibrosis. E, Wheat germ agglutinin staining was used to analyse the cardiomyocyte surface area. F, The relative expression of ANP, BNP, β-MHC and COL1A1 mRNA was evaluated by real-time PCR. All data were presented as mean ± SD (n = 6). *P < .05, **P < .01 versus indicated group. Comparisons between groups were analysed by two-way ANOVA with Bonferroni's post-test | Inhibition of Notch1 decreases N1ICD, Hes1 and p-Stat3 and increases cathepsin D The above-mentioned findings showed that Notch1 was involved in PPCM. As a canonical target gene of Notch1, we wondered whether Hes1 mediated Notch1-regulated PPCM. | D ISCUSS I ON The aetiology and aetiopathogenesis of PPCM remain elusive. However, myocardial hypertrophy, interstitial fibrosis and ventricular dilatation are common pathological changes. 4 Our data here showed that inhibition of Notch1 by LY-411575 significantly down-regulated the expressions of N1ICD, Hes1, p-Stat3 and pro-angiogenic factors, such as VEGFA, PDGFB and BFGF, and increased the production of cathepsin D and anti-angiogenic factors, such as sFlt-1 and 16-kD PRL. Consistent with these changes, we found significant myocardial hypertrophy and myocardial interstitial fibrosis as well as reduced myocardial capillary density in LY-411575-treated mice. Overall, these findings strongly indicated that the sequential activation of Notch1/ Hes1/Stat3 might contribute to correcting angiogenic imbalance and alleviating PPCM. 
| CONCLUSION Our data strongly supported the idea that imbalances in angiogenic signalling contribute to PPCM, and Notch1/Hes1 pathway may play a protective role in such disorder via regulating cathepsin D and sFlt-1. CONFLICT OF INTEREST None. DATA AVAILABILITY STATEMENT The data sets used and analysed during the current study are available from the corresponding author on reasonable request.
2,255.4
2020-06-11T00:00:00.000
[ "Biology", "Medicine" ]
Transient Response of a Novel Displacement Transducer for Magnetic Levitation System Problem statement: In a magnetic levitation system, position sensors are used to obtain a voltage proportional to the position of the suspended object. This is an essential feedback signal for stabilizing the system. These sensors make the system clumsy and prone to failures. To eliminate any physical attachment on the levitated object for the purpose of measuring its displacement, a novel magnetic displacement transducer has been designed. Approach: Variation in inductance of the transducer with the position of the levitated object was used to detect the position of the object. The coil of the transducer was excited by a 5 kHz voltage and the variation in phase angle of its current was measured by a synchronous demodulation method. The transient response of this system was also obtained for a step change in the position of the levitated object. Results: By simulation as well as by experiments it was observed that a minimum delay equal to one and a half times the cycle time of the exciting frequency was always present. The delay further increases with an increase in the order of the filter. In magnetic levitation applications, the mechanical frequency of the levitated object is generally below 10 Hz, and therefore a delay of around 300 microseconds with an exciting frequency of 5 kHz was acceptable. The steady-state characteristic of the transducer was nearly linear, and it was further linearized by using a look-up table and cubic interpolation. The signal output from the synchronous demodulation circuit was digitally processed for application to the magnetically levitated system. Conclusion: A novel yet simple circuit for sensing the position of the moving object for an electromagnetic levitation system is developed. The transient response of the developed system is also obtained and the simulation results are verified experimentally. INTRODUCTION Magnetic levitation systems have practical importance in many engineering applications such as high-speed maglev trains, frictionless bearings, etc. Of the various methods by which magnetic suspension can be achieved, the work here is concentrated on the electromagnetic attraction type of magnetic levitation system. These kinds of systems are open-loop unstable. Therefore, feedback controllers (El Hajjaji and Ouladsine, 2001; Zhang and Suyama, 1995; Varatharaju et al., 2011), sliding mode control (Al-Muthairi and Zribi, 2004; Deshpande and Mathur, 2011), neural (Lin et al., 2005) and fuzzy logic (Dukan et al., 2008) controllers are used for stabilizing the position of the levitated object. Adaptive controllers commonly designed for motors (Husain et al., 2008) and hybrid fuzzy controllers (Pratumsuwan et al., 2010) are also gaining popularity in the control of servo systems. An important signal required for the controller is the position of the levitated object. Optical sensors and Hall-effect sensors are generally used for detecting the position. These sensors have to be mounted on or near the levitated object. This needs additional mountings, which is either difficult or sometimes impossible. Various novel sensing techniques have been designed earlier by the authors (Deshpande and Mathur, 2009; Deshpande and Badrilal, 2010). Yet another novel position sensor not using any moving part has been developed to overcome the drawbacks of conventional position sensors. A schematic of the proposed system is shown in Fig. 1. The electromagnet used is of E-shape laminations and the levitated object is of I-shape.
Inductance of the lifting magnet is a function of the proximity of the levitated object and its inductance can be exploited for determination of the separation. The lifting coils W 1 and W 2 connected in series are located on the outer limbs of the core where as the distance measuring coil W 3 is located on the central limb. The windings W 1 and W 2 are so connected that the flux produced by outer limbs is cancelled on the central limb. The technique developed is based on the concept that variation in inductance of the central limb is a function of the position of the levitated object. The range of movement of the object is small as is the case in magnetic levitation. The variation in the inductance is observed as a phase displacement between current and voltage of the coil W 3 . The phase displacement is measured by the average value of a multiplier circuit. The average is determined by using a digital low pass filter. The simulation results show that a time delay of nearly one and half cycle of the exciting frequency of the coil W 3 is introduced between displacement and signal output of the transducer. This delay is mainly due to the low pass filter. Simulation results were verified experimentally. Fig. 2 the coil W 3 is excited by 5V 5 kHz. Impedance angle Φ+ α of the winding W 3 is a function of the distance Y. Φ is the phase difference when displacement y=0. Opamp1 forms a current sensor which converts the coil current into a proportional voltage without introducing any impedance in the measuring circuit. Opamp2 converts this voltage into square wave at point A. Phase angle of the square wave with respect to the applied voltage is (Φ+α) where α is the phase difference introduced due to displacement Y. A voltage proportional to the applied voltage to W 3 is derived directly at point B and its inverted value through unity gain inverting Opamp3 at point C. Self sensing circuit: In Opto coupler multiplier: Voltages at points A, B and C of Fig. 2 are multiplied by an opto-coupler multiplier shown in Fig. 3. LEDs of opto-couplers P and Q are connected in series. LEDs of P and Q are ON when the voltage of point A is positive. Similarly LEDs of R and S are ON when A is negative. Collectors and emitters of BJTs of P and Q are connected in anti parallel and similarly, collectors and emitters of BJTs of R and S are also connected in anti parallel. A conducting path is established between B and D when A is positive and between C and D when A is negative. Wave shape of the voltage at D is shown in Fig. 4. Proportions of the positive and negative areas depend on the phase difference between sinusoidal voltage at points B and C and square wave voltage at point A. MATERIALS AND METHOD Method of measurement of distance y and inductance: The distance y was measured by a travelling microscope and the inductance of the coil w3 was measured by an auto balancing a.c. bridge. The inductance versus y plot obtained experimentally is shown in Fig. 5. To smooth out the experimental readings an exponential curve fitting was done. The relation between displacement and inductance is expressed as: where, a = 0.4737, b= -227.2157, c= 0.1578 and d = -0.9830 Fig. 1-3 were connected in cascade as shown in Fig. 6. Output D of Fig. 3 was given to a Micro Controller. The microcontroller converts the analog voltage at point D into digital and applies it to a two stage IIR low pass filter having a cut off frequency of 100 Hz. 
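As an illustration of the processing chain just described, the sketch below reproduces two of its computational steps in Python. The two-term exponential form of the inductance-displacement fit is an assumption (the fitted expression itself is not reproduced in the extracted text, only the constants a, b, c and d), the two-stage IIR low-pass filter is assumed to be a cascaded second-order Butterworth section with the stated 100 Hz cut-off, and the sampling rate and phase value are hypothetical.

```python
# Sketch of two pieces of the measurement chain. The inductance-displacement
# relation is assumed to be a two-term exponential L(y) = a*exp(b*y) + c*exp(d*y),
# and the 100 Hz two-stage IIR filter is assumed to be Butterworth; the paper
# does not state either choice explicitly.

import numpy as np
from scipy import signal

# Constants quoted in the text for the (assumed) two-term exponential fit.
a, b, c, d = 0.4737, -227.2157, 0.1578, -0.9830

def inductance(y_m):
    """Inductance of coil W3 (arbitrary units) versus air gap y in metres."""
    return a * np.exp(b * y_m) + c * np.exp(d * y_m)

# Two cascaded second-order low-pass sections, 100 Hz cut-off, as in the text.
fs = 100_000.0                      # hypothetical microcontroller sampling rate (Hz)
b1, a1 = signal.butter(2, 100.0, btype="low", fs=fs)

def two_stage_lowpass(x):
    """Apply the same second-order section twice (two-stage IIR filter)."""
    return signal.lfilter(b1, a1, signal.lfilter(b1, a1, x))

# Example: filter the switched-multiplier output for a 5 kHz excitation.
t = np.arange(0, 0.05, 1 / fs)
phase = np.deg2rad(20.0)            # hypothetical phase shift (Phi + alpha)
multiplier_out = np.sign(np.sin(2 * np.pi * 5e3 * t + phase)) * np.sin(2 * np.pi * 5e3 * t)
dc_level = two_stage_lowpass(multiplier_out)[-1]
# Settles to a DC level that varies with the phase shift (reported in the text
# as proportional to sin(Phi + alpha)).
print(dc_level)
```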
Simulation and experimental results: Systems shown in Steady state response: Output of the low pass filter is a DC voltage which is proportional to sin (Φ+ α). Relation between the filter output V and the displacement Y was found to be nearly linear. To further improve the linearity, the digital signal output from the low pass filter was applied to a look-up Table. 1 and cubic interpolation was used to improve the linearity. Transient response of the proposed transducer: A step change was made in the displacement Y at 500 microsecond and corresponding change in V was recorded Fig. 7 using digital storage oscilloscope, Agilent 7000B.The voltage V took a time of about 300 microseconds to change from one steady state value to the other. This time delay is one and half times the cycle time of the exciting frequency of 5 KHz for the coil W3. The delay also depends upon the order of the filter and its cut off frequency. The results obtained experimentally and also by simulation are matching. DISCUSSION As the lifting coils w1 and w2 should be magnetically decoupled from the measuring coil w3 of Fig. 1, the choice of core shapes are limited. Exciting frequency of the coil w3 should be very high as compared to the mechanical frequency of oscillations of the levitated object. For every high frequency of excitation, ferrite may be used for lifting and levitated objects of Fig. 1. CONCLUSION A novel yet simple circuit for sensing the position of the moving object for electromagnetic levitation system is developed. A linear relationship is obtained between the voltage and the position. The variation in phase of output voltage with respect to inductance and hence the position of the moving object is obtained. The transient response of the developed system is also obtained and the simulation results are verified experimentally. Simplicity of the designed circuit and absence of moving parts make it more attractive for magnetic levitation applications.
2,104.2
2011-10-05T00:00:00.000
[ "Engineering", "Physics" ]
Estimation of shape factor for irregular particles using three-axial measurement approach Particle shape is an imperative term in civil engineering applications that play a significant role in the overall behavior of the particles; however, the shape factor becomes complex when it is related to the irregular-sized particle. The manuscript focuses on quantitative estimation of the shape factor of highly irregular metal particles of diameters ranging from 2.00 mm to 5.00 mm using a three-axial microscopic measurement. The measured data is used to compute the nominal diameter of a particle representing a circle of equivalent diameter and shape factor is computed. The result has been compared with the previously established studies and found to be corroborative. Introduction In geology, interest in particle shape emerged earlier than in geotechnical engineering. The particle shape is attained by its transportation from the original position to the deposits or during the machining process required in the metal industries. There are also considerations for the process of particle genesis itself (rock structure, mineralogy, hardness, etc.). Many factors have been considered to define particle shape to classify and compare grains (axis lengths, perimeter, surface area, volume, etc.) to specify a particle shape and put forth empirical equations to substantiate the same. On that line [1] are endorsing form, roundness, and surface texture to describe the shape of a particle. Over the years, several attempts have been made to develop a methodology for measuring particle shape. Besides, other techniques of characterising a particle shape, a manual method including chart comparison [2] [3], also sieving [4], and, more recently, three-axial measurements using microscope were adopted in an industry with good results of characterizing a particle shape. Furthermore, using the computer-aided approach to measure particle shape saves a significant amount of effort [4]. The Objective The shape of a particle can be described qualitatively or quantitatively. The qualitative description of the particle (e.g., elongated, spherical, flaky, etc.) is expressed in words, whereas the quantitative description relates to the measured dimensions; the quantitative description is more important in the engineering field due to reproducibility. Particle quantitative geometrical measurements can be used to support qualitative classification. To describe the particle form, a few qualitative measures and several quantitative measures can be used. Despite the abundance of qualitative descriptions, none were widely accepted. To analyse particle dimensions and shape factors, microscopic measurements are required. In myriad applications, particle shape is crucial to its behaviour. This property is finding its place in the various fields of engineering such as hydraulics, cross-drainage works, transportation, mining, etc. due to its importance of changing behaviour in different applications. The global form, major surface feature scale, and surface roughness scale are used to determine particle shape. Each scale reflects aspects of the particle's formation history and contributes to the overall behaviour of the particle, from particle movement to mechanical response. Background A particle's shape is captured using three independent relative scales [5] form, which describes differences in particle proportions Roundness refers to the variations at the particle's corners that are superimposed on it. 
The characteristics that are superimposed on both corners and surfaces are referred to as roughness. Because these descriptors are distinct, one can differ significantly without affecting the others. The Fourier method, as well as fractal analysis if the shape is self-similar, can be used to characterise it. Form, Roundness and Roughness Form The shape of a particle can be described using terms such as cubical, spherical, elliptical, elongated, flat, tubular, platy, lathlike, and needle. The form can be quantified using the length ratios of the three orthogonal axes. Barrett (1980) provides a list of at least 15 parameters that can be defined using these ratios. The aspect ratio is the ratio of the long axis L to the intermediate axis B. It is also known as the elongation. The ratio of the intermediate, B, and short axes is another term for flatness. Two more mathematical descriptors of the form are sphericity and eccentricity. Sphericity is defined by [6] as the ratio of particle volume to circumscribing sphere volume. This definition is flawed because it includes a measure of roundness. The ratio of the particle's surface area to the surface area of an equal volume sphere is an improved definition of sphericity [7]. Eccentricity is defined as the 8p/Rp ratio of an elliptical particle, whose two-dimensional outline has been expressed as Rp = 8p.cos (20). Roundness The radius of curvature of each corner is averaged and compared to the radius of the particle Wadell's maximum inscribed circle to determine roundness [6]. The procedure is two-dimensional, but it can be made three-dimensional by replacing circles with spheres. Due to the difficulty in determining what constitutes a corner, subjectivity enters the test. Roughness Scale considerations are critical in the characterization of particle roughness. Because all surfaces are rough at some scale, roughness must be characterized at the scale deemed relevant to the problem at hand [7]. Krumbein's chart is used in this study to determine sphericity and roundness (1963). The procedure is as follows: A pinch of sand is placed on a Petri dish and examined under a Leica MZ6 stereomicroscope. Particles are indirectly illuminated by light reflected from a reflective shield. Approximately 30 grains are studied at various magnifications. The chart is then used to calculate representative grain sphericity and roundness finds a 10% variation in the roundness of particles from the same vial of sand when multiple students use the Powers chart. The roughness of the particles subjected to shear wave velocity testing is also determined. The roughest sample particles are arbitrarily assigned a roughness value of 3, the smoothest sample particles a value of 1, and all others are assigned an intermediate value based on their relative roughness [6][3][2][5] depicts particle shape irregularity on three different scales: Sphericity S is preferred over ellipticity or flatness, roundness R over angularity, and smoothness over roughness. The sphericity is computed based on a diametric ratio of the largest inscribing sphere and smallest circumscribing sphere. On the other hand, roundness is computed by comparing a IOP Publishing doi:10.1088/1755-1315/1032/1/012014 3 curvature radius with a radius of the biggest sphere inscribing a particle under consideration. Surface features that are much smaller than particle diameter are referred to as roughness. 
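The "improved" sphericity definition quoted above can be evaluated numerically if the particle is idealised as a tri-axial ellipsoid built from the three measured axes. Both that idealisation and the use of Thomsen's approximation for the ellipsoid surface area are assumptions of the sketch below, not part of the cited procedures.

```python
# Minimal sketch of the improved sphericity definition (surface area of the
# equal-volume sphere divided by the particle's surface area), with the particle
# approximated as a tri-axial ellipsoid. The ellipsoid idealisation and Thomsen's
# surface-area formula are assumptions made here for illustration.

import math

def ellipsoid_surface_area(d1, d2, d3, p=1.6075):
    """Thomsen's approximation for a tri-axial ellipsoid with axis lengths d1..d3."""
    a, b, c = d1 / 2.0, d2 / 2.0, d3 / 2.0          # semi-axes
    term = ((a * b) ** p + (a * c) ** p + (b * c) ** p) / 3.0
    return 4.0 * math.pi * term ** (1.0 / p)

def wadell_sphericity(d1, d2, d3):
    """Equal-volume sphere surface area over the (approximate) particle surface area."""
    volume = (math.pi / 6.0) * d1 * d2 * d3         # ellipsoid volume from full axes
    r_eq = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    sphere_area = 4.0 * math.pi * r_eq ** 2
    return sphere_area / ellipsoid_surface_area(d1, d2, d3)

# Example: a 5.0 x 3.5 x 2.0 mm particle (hypothetical measurements).
print(round(wadell_sphericity(5.0, 3.5, 2.0), 3))   # < 1, as expected for a non-sphere
```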
Sphericity and roundness can be estimated visually using charts like the one shown suggested by Folk 1955, and Barrett 1980. However, advanced techniques such as digital image analysis can also be applied to characterize the particle of irregular shape and size [8]. Because rough surfaces are fractal, they lack a characteristic scale, making the direct measurement of roughness difficult. As a result, the relevant roughness observation length is transformed into the inter-particle contact area: this is a particle's connection to its neighbor. The shape parameters can also be derived from the soil mass's macro-scale behavior. The fall of particles or flow of particles in various applications is affected due to its shape, which results in complex drag force phenomena. Methodology, Data Measurement and Estimation of Shape Factor In hand measurement technique, sliding rod caliper, instruments were used to obtain the accurate data of particle geometry [3]. A particle under consideration needs to adjust on the sliding rod caliper to get the length accurately using a graduated scale attached to the instrument. Similarly, a convexity gauge is used to measure the curvature of the particle. To arrive at the shape of rock particles, an instrument was used and concluded that the tool gives good attribution [9]. The results so obtained was further reviewed and analyzed by many researchers. Microscopic Measurements Irregular mineral particles can be conveniently measured under a microscope. The accuracy further can be improved by attaching a video camera linked to a computer. The most obvious method is to compute the arithmetic or geometric mean of the number of measurements taken as well as the average distance between two cross-hairs. The third dimension (Z-axis) is obtained using a screw gauge. The extremities of the measured distance are standardized and used to calculate the mean cord length of the particle defined by Martine's or Ferret's diameter. Martine's diameter is the length of the line that divides the practical image in half. The dividing line is drawn in a parallel fixed direction regardless of the practical orientation. The mean distance between two tangents on opposite sides of the apparent outline of the particle is defined as Ferret's diameter. And the arithmetic or geometric mean value of length computed using both the methods is equal to the diameter of the reference circle; where arithmetic mean diameter is given as: Dmax and Dmin are the mean diameters of several Martine or Ferret measurements, and Dam is the arithmetic mean diameter. Computation of Shape Factor The coefficient of Drag and Reynolds Number can be very well related for particles of regular shape. However, it becomes difficult for irregularly shaped particles as the drag becomes complex. The empirical relationships have been developed for regularly shaped particles by many researchers to calculate the drag on it when it falls in a liquid media but to develop a correlation between Cd and Re for irregularly shaped particles, more precisely minerals between particles, shape factor is an essential parameter the can truly define an irregular particle in an equivalent sphere. S.F. is a shape factor that appears to be as satisfactory as any other. where D 1 is the longest axis, D 2 is the intermediate axis, and D 3 is the shortest axis of the three mutually perpendicular axes. The shape factor considers three of its axial dimension of irregular particles and estimates the particle shape. 
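A minimal sketch of the computation described in this section is given below. Because Equation 2 is not legible in the extracted text, the Corey form SF = D3/sqrt(D1*D2), the usual three-axial shape factor in hydraulic studies, is assumed; the arithmetic mean diameter is taken as the simple mean of the measured maximum and minimum diameters, and the example readings are hypothetical.

```python
# The shape-factor expression (Equation 2) is garbled in the extracted text;
# the Corey form SF = D3 / sqrt(D1 * D2) is assumed here, with D1 the longest,
# D2 the intermediate and D3 the shortest mutually perpendicular axis.

import math

def corey_shape_factor(d1, d2, d3):
    """d1 = longest, d2 = intermediate, d3 = shortest axis of the particle."""
    return d3 / math.sqrt(d1 * d2)

def arithmetic_mean_diameter(d_max, d_min):
    """Mean of the Martin/Feret diameters, taken as the reference-circle diameter."""
    return 0.5 * (d_max + d_min)

# Hypothetical three-axial microscope and screw-gauge readings (mm).
measurements = [(4.8, 3.6, 2.1), (3.9, 3.1, 1.4), (2.6, 2.2, 1.9)]
for d1, d2, d3 in measurements:
    sf = corey_shape_factor(d1, d2, d3)
    print(f"D1={d1}, D2={d2}, D3={d3} -> SF={sf:.3f}")
```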
Particles with the same shape factor can have rounded angular, rough, or smooth shape factors. This study is related to the particle of natural grain and some of the minerals having irregular shape and size. Shape factors based on particle roundness, sphericity, or other physical properties could be used, but they would not be adequate for hydraulic studies. The set of irregular particles have been taken and three-axial measurements are noted using a microscope (major and intermediate axes) and screw gauge (thickness, as minor axis). Using equation 2, the shape factor is estimated as shown in Table 1. The plot of average diameter against the shape factor is shown in Figure 1 Table1. Microscopic measurement and shape factor Discussion and Conclusions Based on the above microscopic measurements, experimental analysis, and graphical presentations, it is concluded that the shape factor's value varies with change in the perpendicular direction of the particle (Z-axis). The graph depicts how the shape factor changes in the shorter direction. As the shape factor is directly proportional to the shortest axis. The graph shows that the shape factor decreases as the minor axis value increases. Microscopic analysis is objective, repeatable, produces quick results, and works with more data, but it still requires improvement to avoid the edition process being poorly-contrast particles. Although there are numerous methods for defining shape factors and describing quantities, the measurement-based breakdown is quite practical and useful. When performing microscopic analysis, the resolution must be considered because the effects can be significant. The resolution must be based on the requirements. The R-value reflects the effect of resolution on diameters as a perimeter.
2,612.4
2022-06-01T00:00:00.000
[ "Materials Science" ]
UAVs Path Planning based on Combination of Rapidly Exploring Random Tree and Rauch-Tung-Striebel Filter Aiming at the problem of Unmanned Aerial Vehicle(UAV) formation path planning under complex constraints, a UAV formation path planning method based on the combination of Rapidly exploring Random Tree (RRT) and Rauch-Tung-Striebel (RTS) filter is proposed. Firstly, a path planning algorithm based on the improved RRT algorithm with adaptive step size is de-signed to solve the problem that the RRT algorithm is easy to fall into local optimum. Then, an RTS filter is introduced to smooth the trajectory planned by the improved RRT algorithm to achieve curvature continuity. Finally, taking the smooth trajectory as the reference, a UAV formation path planning algorithm over the Artificial Potential Field (APF) method is designed. The simulation results show that the designed UAV formation path planning algorithm can solve the planning problems of single trajectory and formation trajectories in complex constrained space, and can plan the formation trajectory with continuous curvature, to facilitate the UAV trajectory tracking control. Introduction UAVs have been extensively employed in the field of disaster rescues, such as communication support, evaluation with remote images, and small supplies delivery [1].UAVs are adopted to fulfill these tasks because of their low requirements on takeoff and landing conditions, long hover duration, and low R&D and maintenance costs.UAV smarms that can expand the capability boundary and service radius of a single UAV have become an essential field for the development and application of UAVs [2].Path planning and flight safety of swarms problem will be a multi-order NP-hard problem especially when considering constraints including flight time, UAV formation, safety, terrain, and fuel consumption [3]. There have been some researches on the path planning of UAV swarms based on certain planning methods.For example, in [4], the authors designed a multi-objective path planning method using the graph theory search algorithm.In [5], the authors reduced the re-planning by studying the shortest path problem on the dynamic road network.These path planning methods can only be executed over a graph that the destinations and feasible paths are given.Therefore, it is not practical for objects with random destinations and flexible flight areas, like UAVs.In view of the above shortcomings of the graph method, considering the characteristics of path planning in the perspective of large-scale planning, there are many kinds of research on path planning using bionics and evolutionary theory.In [6], a multi-objective optimal path is planned over Gray Wolf Optimizer (GWO).In addition, Xiande Wu et al. proposed a multi-objective UAV path planning method by hybrid of particle swarm optimization method and RTS method [7].According to a summary of these studies, it is often better to plan flight trajectories through hybrid algorithms and take advantage of the strengths of each [8]. 
The Rapidly Exploring Random Tree (RRT) algorithm is able to dynamically generate random positions in the planning horizons of UAVs, which will form several connected tree structures until reaching the target position [9].However, it is impossible for UAVs to follow the path generated by the RRT algorithm because it is a tree-like path that is composed of a large number of zig-zag line segments.To solve this problem, in [10], a UAV path planning method over mixed population RRT algorithm is proposed, the authors smoothed the planned trajectory to meet the UAV dynamic constraints.The result shows that after smoothing the path nodes generated by the RRT algorithm, the trajectory curvature is more continuous, which is more convenient for UAVs' tracking control. In a review of current research on UAV path planning, we propose a planning method that combines the RRT algorithm with adaptive steps and the Rauch-Tung-Striebel (RTS) filter, which can innovatively solve the UAV formation path planning problem under complex constraints.The remainder of this paper is organized as follows.In Section 2, the UAV path planning problem models are established.Section 3 depicts the detailed design of the planning algorithm based on the Leader-Follower UAV formation that we use in this study.We will analyze the simulation results in Section 4 and summarize the conclusions in Section 5. UAV trajectory model The takeoff process of UAVs is not considered in this paper for it always is under control by the ground operators and is highly affected by air route, convective cloud, and some other factors, before reaching the desired flight altitude.The starting point in trajectory planning model studied is set as at ( 0 , 0 , 0 ) that is a certain point on the cruising altitude, and ending at ( , , ).The reference coordinate system is − , as shown in Figure 1. The trajectory model of UAVs can be defined as: = −1 + (2) where = 1,2, … , , is the count of trajectory nodes, and 0 , 0 , 0 are the starting position of the trajectory.Where , , indicate the position of the th search, = , = , < Δ, and is the step-length in th search, Δ indicates the acceptable error of cruising altitude.In the following contents, UAV cruising altitude changes won't be considered, namely, = 0. UAV flight constraint model 1) Flight boundary constraint: in the planning process, the upper boundary of UAVs' flight area on the X and Y axis is ( , ). 2) Flight range constraint: set the maximum flying range of UAVs is , so that the sum of all step-length on trajectories should be less than , which is: where + indicates takeoff range and − indicates landing range.Considering the takeoff and landing process is controlled by ground operators, so + and − are set to be a fixed value, + + − = . 3) Yaw angle constraint: the turning radius of UAVs cannot exceed the design value, so yaw angle range is [− , ], as is shown in Figure 2. The yaw angle at each RRT tree node at step + 1 is where | ⋅ | indicates the norm of a vector.Then the yaw angle constraint that should be satisfied at step + 1 is: where , respectively indicate the count of nodes on trajectory and . 
5) Flight forbidden zone constraint: during path planning procedure, UAVs can't fly pass unsuitable areas caused by terrain, weather, and other factors.Supposing the enclosed area (⋅) is the flight forbidden zone (shown in Figure 3), then for any flight forbidden zone (⋅) and line segment +1 (connecting two adjacent nodes and +1 ) should satisfy: Path evaluation model Flying range and the sum of yaw angels are main factors that affect the performance of the path.In this paper, we take these two factors as the evaluation parameters, and the form of evaluation function J(x) is: where 1 and 2 are weight coefficients, and 1 + 2 = 1; indicates the total count of trajectory nodes, the flying range of each step is , while indicates the yaw angle of each step. Model of the UAV formation The formation model keeps each UAV member in a specific shape and distance to realize collision avoidance or mission cooperation.Here, set a UAV formation composed of N UAVs with one leader UAV and N-1 followers.The configuration is kept by formation model, that is each follower relative to the leader maintains the formation.The formation model is expressed as follow: where (, ) and (, ) respectively indicate the position information of the follower and the leader.While (, ) is position of the th follower relative to leader, which means that every follower can choose an exact position in its selectable region that can be acceptable by formation configuration. Standard RRT algorithm In the standard RRT algorithm, the root node is taken as the planning starting point as the root node, and from there, leaf nodes are added by random sampling to generate an exploratory random tree.The tree stops searching when some of the leaf nodes cover the target point or enter the target region.It is now we can find a no-collision path composed of root nodes, from the starting point to the target location.The process of basic essential algorithm is outlined in Figure 4 as: Step 2: take a point +1 on the line segment , satisfying | +1 | = ; Step 3: repeat step 1 and step 2 until the distance between and the new generated node is less than the threshold ; Step 4: find a path from 0 to in the RRT tree as an alternative trajectory. Improvement of RRT algorithm During traditional RRT extending, the step-length is fixed 1 = 2 = ⋯ = = as the subtree → +1 → +2 is shown in Figure 5.The extending node is too close to or cross the no-fly zone due to the fixed step-length, which will result in lots of invalid search attempts, as the subtree +1 → +2 passing through the flight forbidden zone 1 .The expansion process is easy to be trapped into the local minima area, especially when the number of obstacles is large or the flying area is congested, which will directly increase in invalid searches.In response to the above problem, an improvement of the fixed step-length RRT algorithm is proposed in this paper.We dynamically adjust the extending step-length of new tree node according to the no-fly zone situation, which means either = or ≠ is acceptable.Figure 5 shows the variable step-length subtree → +1 ′ → +2 , where the steplength ′ = +1 ′ .This paper studies the dynamic adjustment of the RRT expansion step-length.Therefore, the RRT tree can jump out of the local minima area and accelerate the expansion speed toward the target direction with high obstacles avoiding efficiency. RTS filter There are two stages in the RTS filter: Kalman forward filtering and RTS backward filtering.) 
where ,−1 is the prediction result over the previous states, ∈ indicates the status measurement of the system, is the parameter of the measurement system, indicates the gain, , and ,−1 respectively represent the covariance corresponding to the new state and the covariance corresponding to ,−1 , and indicates the gaussian white noise covariance.The forward filtering process over Kalman filter from node 0 to node of the RRT tree is outlined in Figure 6. 2) RTS filter design In the process of Kalman filter to smooth the UAV path planning results, we save the filter value , , predictive value ,−1 , filter error variance matrix , , predictive error variance ,−1 , and state transition matrix [11].RTS smoother is then performed from the final state to initial state, and the recurrence formulas of which are: ( ) where = 0,1,2, … , , | and , respectively represent the state vector and covariance after RTS smoothing, while | and | indicate predicted value and covariance after Kalman filtering, respectively.Furthermore, is the smoothing gain which is expressed in formula as: The smoothing process and direction of RTS backward filter are given in Figure 7. Smoothing result is mainly calculated based-on the results of the forward Kalman filter.Thus, the RTS filtering results are positively correlated with the Kalman filtering results. Path planning algorithm In this paper, the standard RRT algorithm is improved with an adaptive step to improve its tendency to fall into local minima and to extend invalid nodes.To facilitate the UAV path tracking control, we combine the improved RRT algorithm with the RTS filter, and the RTS filter is used in this paper to smooth the trajectory to achieve the curvature continuity of the planned path [7]. In Figure 8, the path planning process combined the adaptive RRT algorithm with the RTS filter is outlined.The new algorithm can be divided into three stages, the first stage is the leader's path nodes generation using improved adaptive step RRT; the second stage is the leader's trajectory smoothing based on RTS filter, the path nodes that are generated by RRT are smoothed to make the path ease to track; the third stage is the UAV formation path generation, mainly using the Artificial Potential Field (APF) method to generate followers' trajectory.APF is a virtual potential field that is composed of attraction field generated by target and repulsion field generated by obstacle [12].trajectory of the leader UAV and the initial configuration of formation, here the trajectory of the leader is planned by the path planning method mentioned above.More specifically speaking, the trajectory of the followers is calculated based on the APF algorithm: using the trajectory nodes of the leading UAV as a reference from which to calculate the attraction applied to each follower.Then we calculate the repulsion of the obstacles, leading UAV, and other following UAVs, and finally, the position of each follower is determined by the resultant.The detailed steps of the APF algorithm-based UAV formation path planning method are as follows: Step 1: The trajectory nodes sequence of the leading UAV is calculated as the reference trajectory node sequence for the others. Step 2: The expected trajectory node sequence of the followers is solved using the formation reference trajectory nodes and the formation configuration. 
Step 3: According to the reference trajectory nodes and the formation configuration, the attraction fields that applied on each follower are built taking current expected trajectory node as the target, while the repulsion fields are built taking both the flight forbidden zones and the known trajectory nodes as obstacles.After establishing a potential field model, the resultants and their directions are calculated.Then following the direction of the resultant, a new trajectory node of each follower is obtained base on the last node, and step-length of the followers is consistent with leader's. Step 4: If the current follower has not reached the target position, then turn to step 3 and keep calculating the next node; if the target position has been reached, the calculation ends. Step 5: Check whether the path planning of all the followers is completed, if not, return to Step 2; if the calculation is completed, then check whether the planned trajectories conflict with the obstacles, if there is no conflict, output all the trajectories; if there is a conflict, then return to Step 1 to re-plan all the paths. Simulation and verification To verify the feasibility of the path planning method combined the RRT algorithm with adaptive step and the RTS algorithm, a simulation with a 100 × 100 2 space size is set in this section, where flight forbidden zones are randomly distributed.And four groups of UAVs with different initial positions and target locations are utilized in the simulation, and the coordination is given in Table .1.It can be seen from the simulation results in Figure 9 that the path planning algorithm proposed in this study can plan the trajectory at different starting and ending locations, by which it can be proven that the planning algorithm is well-adapted.Figure 9 illustrates the planning results of both the planned trajectory with RRT algorithm only and the planned trajectory with a combination of RRT and RTS algorithm.The solid magenta line is the path planned by the improved RRT algorithm, and the black dashed line is the path after the RTS interpolation.The curve fitted by RTS interpolation is smoother and does not have an excessive turning angle. Conclusion In this study, a UAV formation path planning algorithm based on a combination of RRT algorithm and RTS filter is designed.The adaptive step improvement of the RRT algorithm is carried out according to the planning needs, and then the RTS filter is applied to the planned folded trajectory, from which the curvature continuous UAV formation leading UAV trajectory is planned.Based on the leader trajectory, the APF algorithm is used to solve the follower's trajectory of the UAV formation and finally generate a smooth trajectory for the whole UAV formation.Through the construction of the simulation scenario, it is verified that the proposed algorithm can plan the smooth path of the UAV formation in the flight forbidden zone, and can provide the desired trajectory output with continuous curvature for the UAV trajectory tracking control. Figure 2 . Figure 2. Schematic diagram of yaw angle 4) Distance constraint: the distance between each UAV's trajectory nodes is not less than the safety distance of formation, which is: Figure 4 . 
The extension process of the basic RRT algorithm. Step 1: generate a random sampling point from the current RRT trajectory node, subject to the step-length and yaw-angle constraints; Step 2: take a point on the line segment toward the sampling point at a distance of one step-length; Step 3: repeat Step 1 and Step 2 until the distance between the target and the newly generated node is less than the threshold; Step 4: extract a path from the root node to the target in the RRT tree as an alternative trajectory.
Figure 5. Schematic diagram of adaptive variable step-length expansion.
Figure 8. Route planning algorithm based on the combination of RRT and RTS.
A Leader-Follower formation configuration is proposed to maintain the relative positions of the UAVs; the theoretical trajectory of each formation member is calculated from the reference trajectory of the leader.
Figure 9. Result curves of path planning under different starting and ending nodes.
To demonstrate the effect of introducing the RTS filter, the yaw angles of the planned trajectory in Figure 9A are statistically compared in this section, and the results are shown in Figure 10. The yaw angle of the RTS-smoothed trajectory (black dashed line in the figure) is significantly smaller than that of the non-smoothed trajectory, which makes it more suitable for UAV trajectory tracking control.
Figure 10. Comparison of track yaw angle before and after RTS smoothing.
To verify the proposed formation path planning algorithm, the trajectory in Figure 9A is used as the trajectory of the leading UAV, and the planning result for a 4-UAV formation is shown in Figure 11, where the trajectory numbered "11" corresponds to the leader and the other trajectories correspond to the followers. The formation configuration is maintained throughout the flight.
Figure 11. 4-UAV formation diagram.
As shown in Figure 12, curves "12", "13", and "34" represent the distance between the leader and follower 2, the leader and follower 3, and follower 3 and follower 4, respectively. The distance between UAVs remains basically stable. When a flight forbidden zone is encountered (curve "34" in the figure), the shortest distance between follower 4 and follower 3 is still more than 800 m, which is greater than the minimum distance constraint between UAVs. The simulation results demonstrate the formation-keeping ability of the algorithm, the followers' handling of flight forbidden zones, and the effectiveness of the proposed formation keeping strategy and the improved APF algorithm.
1) Kalman filter design. For discrete control process systems such as UAV path planning, the best estimation of the current state can be obtained with a Kalman filter whose inputs are the measured values under known conditions and the current state prediction values.
Table 1. Different initial position and target location parameters.
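To make the forward and backward structure of the smoother concrete, the sketch below runs a constant-velocity Kalman forward pass over a sequence of RRT waypoints and then applies the RTS backward recursion. It is a minimal illustration rather than the implementation used in the paper: the constant-velocity state model, the noise scales, and the example waypoints are all assumed for the sake of the demonstration.

```python
import numpy as np

def rts_smooth_waypoints(waypoints, dt=1.0, q=0.05, r=0.5):
    """Smooth 2-D RRT waypoints with a Kalman forward pass + RTS backward pass.

    State x = [px, py, vx, vy]; constant-velocity transition; positions observed.
    q and r are assumed process/measurement noise scales (illustrative only).
    """
    z = np.asarray(waypoints, dtype=float)            # measurements: waypoint positions
    n = len(z)
    F = np.eye(4); F[0, 2] = F[1, 3] = dt             # state-transition matrix
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0     # measurement matrix
    Q = q * np.eye(4)                                 # process-noise covariance
    R = r * np.eye(2)                                 # measurement-noise covariance

    # --- forward Kalman filter: store filtered and predicted estimates ---
    xf = np.zeros((n, 4)); Pf = np.zeros((n, 4, 4))
    xp = np.zeros((n, 4)); Pp = np.zeros((n, 4, 4))
    x = np.array([z[0, 0], z[0, 1], 0.0, 0.0]); P = np.eye(4)
    for k in range(n):
        if k > 0:
            x = F @ x                                 # predicted state
            P = F @ P @ F.T + Q                       # predicted covariance
        xp[k], Pp[k] = x, P
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (z[k] - H @ x)                    # filtered state
        P = (np.eye(4) - K @ H) @ P                   # filtered covariance
        xf[k], Pf[k] = x, P

    # --- RTS backward smoother: run from the final state to the initial state ---
    xs = xf.copy(); Ps = Pf.copy()
    for k in range(n - 2, -1, -1):
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])    # smoothing gain
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs[:, :2]                                  # smoothed positions

# Example: smooth a sharp-cornered RRT path (hypothetical waypoints).
path = [(0, 0), (10, 0), (20, 8), (22, 20), (35, 26)]
print(rts_smooth_waypoints(path))
```

Because the smoothed estimate at each node reuses the stored filtered and predicted quantities, the backward pass costs only one extra sweep over the path, which is why the RTS stage adds little to the overall planning time.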
4,396
2024-05-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Expenditure Decentralization: Does It Make Us Happier? An Empirical Analysis Using a Panel of Countries This paper analyzes whether fiscal decentralization of education, health, housing, social protection, recreation, culture and religion, public order and safety, and transportation have a significant effect on individual well-being. The empirical analysis is based on a non-linear hierarchical model that combines individual data (level 1) with country-level data (level 2). We match 89,584 observations from the World Value Service and the European Value Service (various years) with the average value of data recorded for 30 countries by the Government Financial Statistics (IMF). While fiscal decentralization in education and housing appears to have a negative effect on well-being, this effect is positive in the cases of health and culture and recreation. We interpret this as evidence in favor of a “selective” decentralization approach. INTRODUCTION According to the economic analysis perspective, decentralization is justified because it improves the efficiency of the public sector management. On the one hand, decentralization favors the consumer, since the sub-national governments know and satisfy the preferences of the citizens better (Oates 1972). On the other hand, decentralization also improves the territorial productivity of public goods and services (Oates 2005;Lockwood 2009 andWeingast 2009). On the theoretical level, it is argued that the gains in efficiency generated by decentralization can contribute to greater economic growth, although no conclusive empirical evidence in favor of this hypothesis exists (Martínez-Vázquez J. and McNab R. 2003). Nevertheless, there is a greater consensus about the significant effects of decentralization on public spending and its indirect contributions to public well-being (Letelier 2012). Still, there are numerous arguments that deny the potential positive effects of decentralization, among which are the weakness of local bureaucracy, the implicit risk in the excessive proximity between private and public interests, and the scale economy losses in the provision of public utilities,… (Prud'homme 1995). Though some studies have been carried out that analyze the relation between decentralization and happiness (Frey andStutzer 2000 andBjørnskov et al. 2008 has not yet been investigated. Given that decentralization in general, and fiscal decentralization (FD) in particular, is a complex phenomenon whose impact differs according to the specific area of public management Saez 2013 2015), this paper states that its effect on subjective well-being depends on the specific area that is decentralized. As previously noted in the abstract, this effect is identifed through a model one multilevel ordinal logit with I intercept random and fixed effects (Goldstein 2003;Rabe-Hesketh, Skrondal and Pickles 2005;Raudenbush and Bryk of 2002), the analysis object areas are education, health, housing, social protection, recreation, culture and religion, public order and safety, and transportation. The sample of data utilized includes 89,584 individual observations of 30 countries. From the estimations carried out, it is inferred that decentralization in the areas of recreation, culture and religion, and health have a positive effect on happiness; where as decentralization of the functions of education and housing have a negative impact. 
The remainder of this document is presented in the following structure: in section 2, we review the literature that links the theory of decentralization to happiness, epigraph 3 puts forth, the theoretical framework, the methodology utilized and the data employed, while in section 4, the results of the econometric estimations are analyzed. Conclusions are presented in section 5. LITERATURE REVIEW The theory of subjective well-being is a field of investigation whose origins go back to the 1970`s and 1980`s (Easterlin 1974;Scitovsky 1975;Kapteyn and they Go Praag 1976;Morawetz 1977;Ng 1978;Wansbeek and Kapteyn 1983;Martin and Lichter 1983;Sirgy et al. 1985;and Headey and Krause 1988). The goal of this theory is to explain life satisfaction through the lens of ordinal utility. The works of Praag, Frijters and Ferrer-i-Carbonell (2003) and Ferrer-i- Carbonell and Frijters (2004) have been very important in advancing the theory of subjective well-being in different spheres: personal health, family financial situation, working conditions, and leisure and free time, among others. In terms of the economics of happiness, Frey and Stutzer (2000) differentiate three categories of exogenous factors that determine subjective well-being: i) personality and demographic factors (age, sex, marital status, level of education, ideology, religion, etc; ii) micro and macroeconomic factors (level of income, the unemployment, the inflation, etc); and iii) the institutional context (democratic state, federalism, decentralization of spending, etc). There are various studies analyzing the impact of the institutional context. Frey andStutzer (2000 and, Stutzer and Lalive (2004), Frey (2008) and Bjørnskov et al., (2008and 2010 have analyzed the effect of some institutions on happiness. Radcliff (2001) investigated the role of government ideology and other characteristics of the Welfare State. Veenhoven (2000) showed that in affluent countries, political and individual liberty have a positive effect on happiness. Moreover, in less affluent countries economic institutions and courts have a greater influence. Bjørnskov et al., (2010) concluded that in more affluent countries, political institutions have more influence on personal satisfaction. More recently, Voigt and Blume (2012) have found that a positive correlation exists between happiness and federalism. Because the analysis of the effect of institutions on subjective well-being is still a new area of investigation, less is known about the link between decentralization and happiness. Frey and Stutzer (2000) carried out a pioneering study in which they analyzed the effects of decentralization on an interregional level in Switzerland. They concluded that institutional factors, such as government initiatives, referendums, and local autonomy, have a significant and positive effect on the satisfaction of the Swiss. Nevertheless, this effect is dependent upon the direct link that exists between the binomial democracyvoter preferences and subjective well-being. Similiar studies were carried out by Díaz-Mountain and Rodriguez-Pose (2012), who extended their analysis to every European country and studied how different powers and resources of regional and local European governments improve the level of individual satisfaction. Bjørnskov et al., (2008), making use of a more extensive database that included 60000 individual observations of 66 countries, concluded that decentralization of spending does not have a significant impact on happiness. 
Sujarwoto and Tampubolon (2015) found that in Indonesia a developing country, fiscal decentralization if it increases happiness, but political decentralization is not significant. From the standpoint of the probable impact of decentralization on the performance of the State, theoretical literature has made significant contributions. On the one hand, theoretical literature shows the positive effect of decentralization on efficiency and the quality of public spending (Oates 1972). On the other, there is an intense debate regarding the possibility of reaching greater degrees of decentralization in the context of the limited professionalization in the lower levels of government, the greater feasibility of corruption and capture of the elite brought about by the excessive proximity between the private and public interests, and, finally, the insufficient quality of democracy in developing countries (Prudhomme 1995, Inman andRubinfeld 2000;Storper 2005). In terms of the first statement, it is inferred that the impact of decentralized management and financing is framed as a trade-off between the benefits of having more realistic information about the local context and the cost of a reduction in the operation scale (Letelier and Sáez-Lozano 2013). In general, it is expected that those spending functions that require considerable coordination at the national level will be more efficiently carried out at the central or intermediate levels of government. On the contrary, the management and financing of those services recognized as being local public goods which seeing as they affect the quality of said goods require specific knowledge of their local context, should generally be considered the responsibility of local government. (Letelier and Sáez-Lozano 2013). Therefore, we can conclude that the impact of the functional decentralization on the quality of public goods and, by extension, its effect on happiness, depends on the specific functional area and/or public good object of analysis. THEORETICAL FRAMEWORK, METHODOLOGY AND DATA To analyze whether or not the decentralization of different spending functions influences happiness, we should begin by developing a theoretical framework and describing the methodology of explaining the happiness relation. We end this section with a presentation of the basic characteristics of the data used. Theoretical Framework Happiness * ij S is a continuous and latent variable that reflects the level of subjective well-being. * ij S is determined by two sets of explanatory variables: i) individual X ij , that represents the characteristics of individuals i (level 1) in the country j (level 2), and grouped Z j , in order to measure the degree of decentralization that public spending functions have undergone in each country j. We define the following lineal relation between the endogenous and the explanatory variables as:  and λ are the coefficients to be estimated, and ij  the error term. Given that happiness is a unobservable variable, we define it through the level of individual satisfaction S ij . The relation between * ij S and S ij, for the m-category of S ij, is: S m  A priori, we assume that individuals i are nested in the countries j. Therefore, we assume that   ij VAR  is different for each group j and that there is conditional independence among the observations. This assumption allows us to relax the condition of homoskedasticity. 
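Written out explicitly, the latent-response specification described above takes the usual threshold form; the display below is a hedged reconstruction, with β standing in for the individual-level coefficients and τ_m for the category thresholds (both symbols are introduced here only for readability).

$$ S^{*}_{ij} = \beta X_{ij} + \lambda Z_{j} + \varepsilon_{ij} $$

$$ S_{ij} = m \quad \text{if} \quad \tau_{m-1} < S^{*}_{ij} \leq \tau_{m}, \qquad m = 1, \ldots, M, $$

with the conventions $\tau_{0} = -\infty$ and $\tau_{M} = +\infty$.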
Additionally, we assume that the influence of the predictor variable is fixed in two levels of analysis and that a random term U 0mj exists, that collects the inter-group differences. In expression (3), the model one is deduced multilevel ordinal logit with I intercept random and fixed effects 1 of m-category of S ij 2 (see the appendix): is the constant term,  lmj represents the parameters that measure the effect of X ij individual variables, λ rmj associated coefficients to Z sj variables, and U 0mj is the random effect of country. From the review of the literature carried out in the previous section, it is inferred that the following hypothesis is feasible: Hypothesis 1. The effect of fiscal decentralization can differ according to the specific type of spending function. The net impact will depend on the differential between the benefits of more information that results from decentralization and the cost of a more reduced scale of operation. The previous hypothesis is based on the assumption that individuals are capable of identifying the magnitude of the net profit of decentralization, since there is an observable relation between the quality of the public good in 1 There are three reasons for not specifying multilevel ordinal logit model with intercept and slope random of X ij : i) the theory does not justify the effect of X ij differs in each unit j due to unobservable factors; ii) differential effect of grouped j X ij is explained by Z j variables; and iii) the literature does not justify the inclusion of variables representing the interaction between X ij and Z j variables. 2 Upon being the endogenous ordinal variable S ij , we can specify ordinal logit o probit model. It given that we suppose fixed effects in the explanatory individual variable, we reject the option of ordinal probit model since produces inconsistent estimator. question and its cost. The cost is implicit to the relative magnitude of local taxes. Hypothesis 2. Only the decentralization of expense functions, which citizens consider within the jurisdiction of sublevel governments, contributes to an increase of happiness. On the contrary, the decentralization of public goods (services), whose provision is perceived as cost efficient if provided by the central government, will result in a reduction in individual satisfaction. Methodology To estimate the model (4) We also estimate two additional models: logit ordinal with I random intercept and fixed effects in the individual explanatory variable, and ordinal logit with I random intercept. In the appendix, the specification of both models is described. The data The database of this investigation has been built from the information supplied The data for the endogenous and individual explanatory variables were derived from the WVS and the EVS. The source is cited in the second column of Table 2, indicating the wave of information extracted and the year in which the study was carried out. For the majority of the countries we have selected the wave of the WVS and EVS whose year of execution coincides with the last period of the time series of the decentralized spending. In the cases of Chile, Spain and the Netherlands, we selected the wave that was carried out two years after the last data regarding the decentralization of spending was published, in order to having a longer time series. 
The seven grouped explanatory variables reflect the medium value of the time series of decentralized spending in education, health, housing, social protection, recreation, culture and religion, public order and safety, and transportation. The FD level of spending is measured as the relation between the spending carried out by subnational (state and local) governments, and the total spending at all three levels of government (central, state and local). Table 3 shows the descriptive statistics of the explanatory variable. The value of the intraclass correlation coefficient (ICC) of the model 1 shows that .2374 of the changeability of the subjective well-being is explained by the characteristics unrelated to country. Given the individual explanatory variable in the model 2, the variance of the random part diminishes by almost more than half; which explains why the ICC is reduced to .1286. If we compare the statisticians -2 log likelihood, Akaike information criterion (AIC) and Bayesian information criterion (BIC) of both models, we corroborate that Model 2 is appropriate for explaining level of individual satisfaction. The incorporation of the national explanatory variable in Model 3 contributes to reduce the variance of the random part, if is compared with model 2, which explains why the statistical ICC diminishes. Given the statisticians -2 log likelihood, AIC and BIC show that Model 3 is most adequately explains happiness. In the same way, if we compare the coefficients of the individual explanatory variable from Models 2 and 3, we can confirm that they have the same sign and that the magnitude hardly differs. Therefore, Model 3 is the most adequate for explaining the effect of spending decentralization policies, given that it reflects the heterogeneousness of the countries. The coefficients estimated for the grouped variables confirm Hypothesis 1. That is to say, the effect of decentralization differs in function of the policies, and is specific to each spending function. The spending parameters in education, health, housing and recreation, culture and religion are significant. The decentralization of spending functions in health and recreation, and culture and religion, make individuals happier, just as Hypothesis 2 predicts. On the contrary, the transfer of responsibility in the areas of in education and housing reduces citizens' satisfaction with these services. Assuming there is no general consensus with regard to the effect of this variable (Bjørnskov et al., 2008). As the economics of happiness theory predicted, individual income levels positively influence happiness. Another socioeconomic factor that determines the satisfaction level of the population is that of unemployment: those unemployed are less happy than those employed. As opposed to previous analyses, these findings show that the decentralization of spending in recreation, culture and religion, housing, education and health are the exogenous factors that determine subjective well-being, though the sign of its effect is specific to each spending function. Also, the magnitude of its effect is greater if it is compared with the impact of the individual variables. CONCLUSIONS The main objective of this paper, as presented in the beginning, was to cover a prominent "gap" within the research into the economics of happiness: analysis of the influence of public spending decentralization in relation to subjective wellbeing. 
We specify one multilevel ordinal logit with I random intercept and fixed effects model for two reasons: i) happiness is a latent variable, which we measure through the level of satisfaction declared by individuals, and ii) there are two types of explanatory variables, the individuals, that represents individual characteristics, and grouped, which reflect the spending decentralization in education, health, housing, social protection, recreation, culture and religion, public order and safety, and transportation. The central hypothesis of this work is that decentralization, measured through FD, has a different effect on happiness depending on the nature of the State function through which it is analyzed. This is the first study in this scientific environment which applies the multilevel analysis that permits us to quantify the influence of the explanatory variable on subjective well-being. On an empirical level, we contribute three significant findings: i) happiness is determined, chiefly, by spending decentralization in recreation, culture and religion, housing, education, and health; ii) the decentralization of the policies governing recreation, culture and religion, and health contributes to greater satisfaction in a country's citizens; and iii) the transfer of spending responsibility in housing and education causes a decrease in subjective well-being. Of particular importance is the case of decentralization of policies regarding recreation, culture and religion, which is one of the exogenous factors that most heavily influences happiness. In terms of future research, our results suggest that we should turn our efforts to study the feasibility of fiscal decentralization, examining the multiple dimensions of each specific area of spending. For example, education involves the administration of human resources, infrastructure maintenance and improvement, and the management of academic content and teaching methodologies. The same thing is true for health; an area in which the logistical aspects can easily be separated into other elements of public management. The various spending items included in the category of social protection require similar analysis. The availability of specific data at the national level would make repeating this exercise worthwhile, in order to further break down the information, which may, in turn, prove to be extremely useful in the development of good public policies. Upon being nested i individuals inside j grouped,  0mj can vary among j groups. Therefore we can rewrite  0mj thus: (1.7) 1 exp( ) Upon supposing that the effects of X lij and U 0mj are fixed and random, respectively, the following is confirmed: Calculations have been done using the GLAMM routine.
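Two quantities used repeatedly in the empirical analysis can be reproduced mechanically: the fiscal-decentralization ratio of a spending function (subnational spending over general-government spending) and the intraclass correlation of a random-intercept logit model. The sketch below shows both; the spending figures are hypothetical placeholders, and the π²/3 level-1 variance is the standard convention for a logistic link, which is assumed to be the one behind the reported ICC values.

```python
import math

def fd_ratio(state_spending, local_spending, central_spending):
    """Fiscal decentralization of one function: subnational share of total spending."""
    subnational = state_spending + local_spending
    return subnational / (subnational + central_spending)

def icc_logit(random_intercept_variance):
    """Intraclass correlation for a random-intercept (multilevel) logit model.

    The level-1 residual variance is fixed at pi^2 / 3 by the logistic link.
    """
    return random_intercept_variance / (random_intercept_variance + math.pi ** 2 / 3)

# Hypothetical example: health spending (billions, any common currency).
print(fd_ratio(state_spending=12.0, local_spending=8.0, central_spending=30.0))  # 0.4

# A country-level variance of about 1.02 gives an ICC close to the 0.237 reported for Model 1.
print(round(icc_logit(1.02), 3))
```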
4,197.2
2020-09-04T00:00:00.000
[ "Economics", "Political Science" ]
Focus on Quantum Control Control of quantum phenomena has grown from a dream to a burgeoning field encompassing wide-ranging experimental and theoretical activities. Theoretical research in this area primarily concerns identification of the principles for controlling quantum phenomena, the exploration of new experimental applications and the development of associated operational algorithms to guide such experiments. Recent experiments with adaptive feedback control span many applications including selective excitation, wave packet engineering and control in the presence of complex environments. Practical procedures are also being developed to execute real-time feedback control considering the resultant back action on the quantum system. This focus issue includes papers covering many of the latest advances in the field. Focus on Quantum Control Contents Control of quantum phenomena: past, present and future Constantin Brif, Raj Chakrabarti and Herschel Rabitz Biologically inspired molecular machines driven by light. Optimal control of a unidirectional rotor Guillermo Pérez-Hernández, Adam Pelzer, Leticia González and Tamar Seideman Simulating quantum search algorithm using vibronic states of I2 manipulated by optimally designed gate pulses Yukiyoshi Ohtsuki Efficient coherent control by sequences of pulses of finite duration Götz S Uhrig and Stefano Pasini Control by decoherence: weak field control of an excited state objective Gil Katz, Mark A Ratner and Ronnie Kosloff Multi-qubit compensation sequences Y Tomita, J T Merrill and K R Brown Environment-invariant measure of distance between evolutions of an open quantum system Matthew D Grace, Jason Dominy, Robert L Kosut, Constantin Brif and Herschel Rabitz Simplified quantum process tomography M P A Branderhorst, J Nunn, I A Walmsley and R L Kosut Achieving 'perfect' molecular discrimination via coherent control and stimulated emission Stephen D Clow, Uvo C Holscher and Thomas C Weinacht A convenient method to simulate and visually represent two-photon power spectra of arbitrarily and adaptively shaped broadband laser pulses M A Montgomery and N H Damrauer Accurate and efficient implementation of the von Neumann representation for laser pulses with discrete and finite spectra Frank Dimler, Susanne Fechner, Alexander Rodenberg, Tobias Brixner and David J Tannor Coherent strong-field control of multiple states by a single chirped femtosecond laser pulse M Krug, T Bayer, M Wollenhaupt, C Sarpe-Tudoran, T Baumert, S S Ivanov and N V Vitanov Quantum-state measurement of ionic Rydberg wavepackets X Zhang and R R Jones On the paradigm of coherent control: the phase-dependent light-matter interaction in the shaping window Tiago Buckup, Jurgen Hauer and Marcus Motzkus Use of the spatial phase of a focused laser beam to yield mechanistic information about photo-induced chemical reactions V J Barge, Z Hu and R J Gordon Coherent control of multiple vibrational excitations for optimal detection S D McGrane, R J Scharff, M Greenfield and D S Moore Mode selectivity with polarization shaping in the mid-IR David B Strasfeld, Chris T Middleton and Martin T Zanni Laser-guided relativistic quantum dynamics Chengpu Liu, Markus C Kohler, Karen Z Hatsagortsyan, Carsten Muller and Christoph H Keitel Continuous quantum error correction as classical hybrid control Hideo Mabuchi Quantum filter reduction for measurement-feedback control via unsupervised manifold learning Anne E B Nielsen, Asa S Hopkins and Hideo Mabuchi Control of the temporal profile of the local 
electromagnetic field near metallic nanostructures Ilya Grigorenko and Anatoly Efimov Laser-assisted molecular orientation in gaseous media: new possibilities and applications Dmitry V Zhdanov and Victor N Zadkov Optimization of laser field-free orientation of a state-selected NO molecular sample Arnaud Rouzee, Arjan Gijsbertsen, Omair Ghafur, Ofer M Shir, Thomas Back, Steven Stolte and Marc J J Vrakking Controlling the sense of molecular rotation Sharly Fleischer, Yuri Khodorkovsky, Yehiam Prior and Ilya Sh Averbukh Optimal control of interacting particles: a multi-configuration time-dependent Hartree-Fock approach Michael Mundt and David J Tannor Exact quantum dissipative dynamics under external time-dependent driving fields Jian Xu, Rui-Xue Xu and Yi Jing Yan Pulse trains in molecular dynamics and coherent spectroscopy: a theoretical study J Voll and R de Vivie-Riedle Quantum control of electron localization in molecules driven by trains of half-cycle pulses Emil Persson, Joachim Burgdorfer and Stefanie Grafe Quantum control design by Lyapunov trajectory tracking for dipole and polarizability coupling Jean-Michel Coron, Andreea Grigoriu, Catalin Lefter and Gabriel Turinici Sliding mode control of quantum systems Daoyi Dong and Ian R Petersen Implementation of fault-tolerant quantum logic gates via optimal control R Nigmatullin and S G Schirmer Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations Markus Schroder and Alex Brown Quantum control through Hamiltonian engineering Much of what we know about science at the atomic and molecular scales comes from various spectroscopies, whose initial studies date back more than a hundred years. In spectroscopy, the general paradigm is to apply weak radiation in order to produce a pristine fingerprint of the undistorted system. In contrast, the field of quantum control seeks to actively manipulate dynamical processes at the atomic scale with systems in the gaseous or condensed phases. The primary means for executing such control is through the application of tailored strongly interacting electromagnetic fields. In recent years, the technology in this area has focused on employing suitably shaped laser pulses, which can orchestrate the quantum dynamics of the electrons and atoms in a sample to attain states or products that would not ordinarily be accessible. Thus, the general paradigm for quantum control rests on engineering the system's Hamiltonian in order to guide the ensuing quantum dynamics phenomena. The power of at will manipulation of atomic scale Hamiltonians opens up almost limitless fundamental investigations and potential practical applications. The history of this subject can be traced back to at least the 1960s at the time of the invention of the laser. The modern era of the field over the last two decades has witnessed dramatic changes through the introduction of appropriate theoretical and experimental tools leading to the establishment of the key principles of quantum control and a rapidly growing number of demonstrations of the concepts. This focus issue presents a cross section of many recent advances. These latter developments are separated into theoretical and experimental components, but in some cases these designated labels blur as theoretical analyses are often performed to guide laboratory studies while some experiments are done to provide theoretical insights. This synergism is the hallmark of the quantum control field. 
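The Hamiltonian-engineering paradigm can be illustrated with the smallest possible example: a two-level system whose Hamiltonian is steered through the dipole coupling to a shaped field. The sketch below is only a schematic of the idea, not a model of any experiment in this issue; the bare energies, dipole operator, and pulse parameters are all assumed.

```python
import numpy as np
from scipy.linalg import expm

# Two-level system: H(t) = H0 - mu * E(t), in units where hbar = 1.
H0 = np.diag([0.0, 1.0])                  # bare energies (assumed)
mu = np.array([[0.0, 1.0], [1.0, 0.0]])   # dipole operator (assumed)

def propagate(E, dt):
    """Piecewise-constant propagation of |0> under the field samples E(t)."""
    psi = np.array([1.0 + 0j, 0.0 + 0j])
    for Ek in E:
        H = H0 - mu * Ek                  # engineered Hamiltonian at this time step
        psi = expm(-1j * H * dt) @ psi
    return psi

t = np.linspace(0, 20, 400)
dt = t[1] - t[0]
# A resonant carrier under a smooth Gaussian envelope (illustrative parameters).
E = 0.05 * np.sin(1.0 * t) * np.exp(-((t - 10) / 5) ** 2)
psi_final = propagate(E, dt)
print("excited-state population:", abs(psi_final[1]) ** 2)
```

Changing the envelope, amplitude, or carrier frequency of the field changes the final populations, which is the essence of steering the dynamics through the applied control.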
Recent developments: theoretical The theoretical study of quantum control encompasses many subjects, including mathematical and computational analyses with an emphasis on the optimal design of laser pulses to manipulate model systems. Much physical insight has been gained from these studies, and the recent efforts are proving to be most valuable. A particular topic of interest is the impact upon control behavior of field constraints, as inevitably arise in the laboratory. Optimized laser fields may still be effective for control even with reasonable constraints [1]. An additional issue in the laboratory is operation with significant uncertainties (e.g. lack of full knowledge about the system Hamiltonian, inhomogeneities in the sample, and statistical variations in the applied controls). Robust controls may mitigate against these factors, and in some cases it appears possible to obtain high quality control along with robustness [2,3]. The modeling of quantum control often considers Hamiltonians that are linear in the applied field, which couples into the system through the electric dipole operator. However, at high field intensities nonlinear effects can directly enter starting with a quadratic field dependence through the electric polarizability. Control field design in this domain involves additional complexities and trajectory-tracking techniques can be effective [4]. Commonly accessible ultrafast laser techniques can produce pulse trains including those consisting of half-cycle pulses. Controls of this form repeatedly 'hammer' on a quantum system in a prescribed fashion, and a number of applications are amenable to control through this process such as tailored excitation and dissociation [5] as well as various nonlinear spectroscopies [6]. Although the control of quantum systems isolated from their surroundings provides an ideal physical picture, in all realistic experiments the quantum system interacts with an environment of some form. Typical cases involve the quantum system residing in a solvent or host lattice with the system-environment coupling possibly playing a significant role in the ensuing dynamics. The modeling of controlled quantum dynamics in an environment poses challenging demands, and relatively few studies of this sort have been carried out. The formulation of such controlled dynamics requires the development of viable approximate models [7], and time dependent Hartree-Fock techniques are suitable for treating an environment of many identical particles [8]. The control of molecular rotation, alignment and orientation attracted early attention in the quantum control field, and recent years have seen a resurgence of activity in this area with experimental successes. Much still remains to be explored including the prospect of rotating molecules clockwise or counterclockwise by suitably tailored ultrashort laser pulses with spatial polarization [9]. It is also possible to combine the capabilities of ultrafast optimally shaped laser pulses along with an applied dc field to create high degrees of orientation [10]. Additionally, special controls for producing orientation may also be utilized to induce the molecules to become a source of pulsed THz radiation for use in other applications [11]. A promising direction is the application of laser pulses with polarization shaping to atoms and molecules bound to nanostructures containing metallic components. 
In this case, the laser field's spatiotemporal shape can depend on the local environment of the nanostructure, and proper optimization of the applied field as well as the nanostructure can create unique opportunities for control [12]. Most control experiments involve an adaptive feedback process, whereby a new sample of quantum systems is introduced on each cycle of the feedback loop with the goal of homing in on the proper optimal control field in an iterative fashion. In contrast, real-time feedback control aims to follow a single quantum system through a sequence of control experiments in a feedback loop. In this case, the feedback process may involve either active measurements with real-time computation to design an updated control [13] or continuous correction of the system's dynamics by linking it to an auxiliary system serving as a real-time controller. Experiments of this type are a significant challenge, and theoretical insights can be valuable for providing guidance [14]. Although traditional bench top lasers are capable of creating field strengths directly competitive with those that bind electrons to atoms, there is a continuing effort to develop lasers with increasing intensity. The resultant high fields could open up possibilities for producing extreme nonlinear optical effects and even directly manipulating nuclear reactions. An understanding of the basic physics of these controlled processes is at an early stage of research in keeping with the evolving laboratory capabilities [15]. Recent developments: experimental Intense laser pulses were available in the nascent period of laser development and the femtosecond regime was also broached many years ago. With these resources, the application of a simple unshaped intense laser pulse to any sample of matter can readily produce dramatic and permanent changes including high degrees of excitation, ionization, bond breaking, etc. Notwithstanding the transformations produced through application of such brute force, experiments of this type are not normally categorized as examples of achieving control. Under these conditions, there are few variables to serve as active controls to manipulate the induced dynamical processes. The advent of ultrafast laser pulse shaping changed this situation with routine applications now involving upwards of nearly a thousand pulse shape control variables 4 for optimizing the resultant quantum dynamical outcomes. In this regard, one goal is selective vibrational excitation on the ground electronic state of molecules with lasers operating in the mid-IR. Suitable shaping of these pulses can lead to controlled vibrational dynamics for a wide variety of applications [16]. In some circumstances, selective vibrational excitation may be used to discriminate for the presence of one species over that of a background of similar ones [17]. These latter studies generate the required complex laser pulse waveforms through adaptive feedback control guided by evolutionary algorithms. Other techniques for achieving control are also available, including the manipulation of quantum mechanical interferences along two pathways leading to photoionization and dissociation. In this case, the spatial phase of the laser can play a special role as demonstrated in experiments on acetone and dimethyl sulfide [18]. There is much interest in the detailed nature of the quantum coherences generated by ultrafast pulses interacting with a sample. 
In this regard control-probe experiments can often exhibit oscillatory signals, which may reflect either true quantum mechanical interferences or possibly intra-pulse interferences masking the quantum effects [19]. The sculpting of atomic wave packets provides a clear example of manipulating quantum dynamics phenomena, and operation with exited Rydberg states can lead to a detailed understanding of the controlled dynamics. Such a demonstration has been provided upon creating excited Ba + ng states [20]. Photoionization is often either a direct goal or one of secondary accompaniment resulting from the application of strong laser pulses to atoms and molecules. It is now common to observe the resultant positive ions, and further insight can be obtained from the simultaneous observation of the ejected electrons. Chirped femtosecond laser pulses have been applied to sodium atoms in various excited states for this purpose along with a thorough theoretical analysis of the experiments [21]. The algorithms guiding adaptive feedback control experiments function by observing the patterns of the evolving phase and amplitude variables of a pulse shaper. In many cases, the desired product can be optimized to high yield without specific knowledge of the actual control field. There is much interest in accelerating the process of learning the optimal control field as well as fundamentally understanding the dynamical mechanisms induced by the control, and both goals may be aided by suitable representations of the fields. Control fields can be expressed in the time or frequency domains, and the richest flexibility arises in dual time-frequency representations of fields [22]. Two photon processes have proved to be a basic training ground for quantum control experiments, including the physical insights gained due to their ease of modeling. Much remains to be learned from these simple nonlinear control problems, and new field representations are proving to be helpful [23]. Future developments Research attempting to control quantum phenomena has been underway for some 50 years, but in many respects the field could be viewed as a few years young with the basic physical principles only now beginning to reveal themselves. An important challenge is to explain why optimally controlling quantum phenomena is relativity easy to attain in the laboratory. The understanding of this behavior appears to lie in the topology of the control landscape defined by the observable as a function of the control variables [24]. The control landscape topology has a remarkably simple generic form, and the full implications of this finding remains to be explored. A prime focus of current laboratory studies is on control at femtosecond time and Angstrom length scales. This regime fits many molecular and condensed phase applications, which are readily accessible with the Ti:sapphire laser. New laser resources operating in the attosecond domain are also becoming available. Dynamics at this scale directly addresses electron motion, and control in this domain will surely be of increasing attention in the future. There are serious challenges in this regard, including for the generation of suitable pulse shapes. In parallel to operating at ever shorter timescales is the desire for achieving control at ultrashort length scales, ultimately in the atomic nucleus. In this case, advances towards increasing laser intensity will be important. 
Thus, one can envision the full scope of quantum control as eventually ranging over many orders of magnitude of length and timescales. Despite the evident limitations of currently available laser resources, they have already enabled the successful control of many types of quantum phenomena. Each advance in control resources is expected to open up significant new realms for quantum control and enhance the quality of the achieved results. The recent advances in the quantum control field emerged through joint efforts from the theoretical and experimental communities, and this cooperation is expected to be crucial to the future development of the field. Conclusion The wide diversity of research represented in this focus issue clearly shows the scope of the quantum control field along with a glimpse of what can be imagined in the future.
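As a concrete caricature of the adaptive feedback loops referred to throughout this issue, the sketch below lets a simple evolutionary strategy reshape the spectral phase of a pulse while monitoring a simulated two-photon-type signal. The Gaussian spectrum, the polynomial phase parametrization, and the elitist update rule are all assumptions made for the illustration; laboratory loops use far more phase variables and a measured, not simulated, signal.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.linspace(-5, 5, 256)                       # frequency grid (arb. units)
amp = np.exp(-w**2 / 2)                           # fixed Gaussian spectral amplitude

def two_photon_yield(phase_coeffs):
    """Simulated feedback signal: integrated second-harmonic (two-photon) spectrum."""
    phase = np.polyval(phase_coeffs, w)           # polynomial spectral phase
    E = amp * np.exp(1j * phase)                  # shaped spectral field
    e_t = np.fft.ifft(np.fft.ifftshift(E))        # time-domain field
    shg = np.fft.fft(e_t**2)                      # E(t)^2 -> two-photon power spectrum
    return float(np.sum(np.abs(shg)**2))

# Elitist evolution strategy over a few spectral-phase coefficients.
best = rng.normal(0.0, 1.0, size=4)               # start from a random (distorted) phase
best_yield = two_photon_yield(best)
for _ in range(200):
    trials = best + rng.normal(0.0, 0.1, size=(8, 4))
    yields = [two_photon_yield(c) for c in trials]
    i = int(np.argmax(yields))
    if yields[i] > best_yield:                    # keep the best child only
        best, best_yield = trials[i], yields[i]

# For a fixed spectrum the loop should drive the phase toward flat (transform-limited),
# which maximizes this integrated two-photon yield.
print("optimized yield:", best_yield, "  flat-phase yield:", two_photon_yield(np.zeros(4)))
```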
3,777.2
2009-10-01T00:00:00.000
[ "Physics" ]
Pentaquark states in a diquark-triquark model The diquark-triquark model is used to explain charmonium-pentaquark states, i.e., $P_c(4380)$ and $P_c(4450)$, which were observed recently by the LHCb collaboration. For the first time, we investigate the properties of the color attractive configuration of a triquark and we define a nonlocal light cone distribution amplitude for pentaquark states, where both diquark and triquark are not pointlike, but they have nonzero size. We establish an effective diquark-triquark Hamiltonian based on spin-orbital interaction. According to the Hamiltonian, we show that the minimum mass splitting between $\frac{5}{2}^+$ and $\frac{3}{2}^-$ is around $100$~MeV, which may naturally solve the challenging problem of small mass splitting between $P_c(4450)$ and $P_c(4380)$. This helps to understand the peculiarities of $P_c(4380)$ with a broad decay width whereas $P_c(4450)$ has a narrow decay width. Based on the diquark-triquark model, we predict more pentaquark states, which will hopefully be measured in future experiments. The hadron spectrum has played an important role in understanding the inner hadron structure and for testing various models of hadrons with fundamental freedom. The study of hadron physics is also crucial for understanding the dynamics of quark and strong interaction, according to quantum chromodynamics (QCD). Conventional hadrons can be understood well by using the naive constituent quark model, where a meson comprises two constituent quarks, qq , while a baryon is constructed from three constituent quarks, qq q , with all in a color singlet. This simple description has been highly successful in the past half century. However, the quark model and QCD do not include a rule that forbids the existence of other multiquark states [1], such as tetraquark or pentaquark states. In contrast to the conventional meson and baryon, finding the multiquark state, also known as the exotic state, has been a goal of particle physicists for many years. Recent developments in exotic heavy hadron research started with the discovery of X(3872) by the Belle Collaboration in 2003 [2], which is distinguished by its narrow decay width (Γ < 1.2MeV). Subsequently, a series of exotic states, XY Z, were determined experimentally, which are difficult to embed in the conventional meson and baryon spectra, and thus they have attracted much attention from both theoretical and experimental researchers (e.g., see [3] and the references therein). Recently, the LHCb Collaboration observed two exotic structures in the J/ψp channel of Λ b decay, which they † Corresponding author * Electronic address<EMAIL_ADDRESS>‡ Electronic address<EMAIL_ADDRESS>referred to as pentaquark-charmonium states, P c (4380) and P c (4450) [4]. One of these two structures has a mass of 4380 ± 8 ± 29MeV, a width of 205 ± 18 ± 86MeV, and a preferred spin-parity assignment of J P = 3 2 − , whereas the other is narrow with a mass of 4449.8±1.7±2.5MeV, a width of 39 ± 5 ± 19MeV, and a preferred spin-parity assignment of J P = 5 2 + . The binding mechanism associated with these newly observed structures is still unclear. In previous studies, theoretical predictions of pentaquark states in the charmonium energy region were made before the LHCb observations. Previous predictions of hidden charm pentaquarks were reported by [20,21]. The production and decay properties of the structures P c (4380) and P c (4450) were also investigated by [22][23][24][25][26][27][28]. 
In this letter, we attribute the P c (4380) and P c (4450) states to possible diquark-triquark states with a [cu] [udc] configuration, where both the diquark and triquark are loosely bound units, which is a generalization of the com-pact diquark δ and triquarkθ introduced by Brodsky, Hwang, and Lebed [16,29,30]. The diquark interaction model was first employed by Jaffe and Wilczek [31], while Karliner and Lipkin [32] gave an interpretation of the unconfirmed pentaquark state Θ + . According to QCD, their analyses can be simply transferred to the heavy quark sector, where two quarks attract each other to form a diquark, and two quarks with an antiquark are also bound up to a triquark. In the following, we show that the small mass splitting between P c (4450) and P c (4380), and their peculiar decay widths can be understood using the diquark-triquark model. According to group theory, the color group SU (3) of a diquark can be represented either by a antitriplet or sextet in the decomposition of 3 ⊗ 3 =3 ⊕ 6, whereas a triquark may belong to one of the four different repre- It should be noted that in the one-gluon-exchange model, the binding of the q 1q2 or q 1 q 2 system depends solely on the quadratic Casimir C 2 (R) of the product color representation R to which the quarks couple according to the discriminator I = 1 2 (C 2 (R) − C 2 (R 1 ) − C 2 (R 2 )), where R i denotes the color representations of two quarks [29]; thus, we can immediately obtain the discriminators I = 1 6 (−8, −4, +2, +1) for R = (1,3, 6, 8), respectively. When I is negative, the interaction force will be attrac-tive, which is somewhat analogous to the Coulomb force in QED. Thus, the only color attractive configuration of q 1q2 is in the color-singlet 1, whereas the color attractive configuration of q 1 q 2 is in the color antitriplet3. In the one-gluon-exchange interaction, the attractive force strength in the color-singlet q 1q2 is two times that in the diquark q 1 q 2 . Without any loss of generality, the color structure of the triquark q 3 q 4q5 can be taken as the product of a diquark q 3 q 4 and an antiquarkq 5 , and thus it can be decomposed as (3 ⊕ 6) ⊗3 = (3 ⊗3) ⊕ (6 ⊗3) = (3 ⊕6) ⊕ (3 ⊕ 15). Correspondingly, the discriminator I = 1 6 (−4, +2, −5, +2) for R = (3,6, 3, 15), respectively. Obviously, there are two types of attractive color configurations for the triquark q 3 q 4q5 . One is in the color triplet 3 withq 5 attracting q 3 q 4 , where q 3 is repulsive to q 4 , which is analogous to helium composed of a nucleus and two electrons. The other is also in the color triplet 3 withq 5 attracting q 3 q 4 , but q 3 is attractive to q 4 , which is a peculiar interaction structure obtained from QCD. According to this analysis, we find that the diquark q 1 q 2 in color configuration3 and the triquark q 3 q 4q5 in color configuration 3 may form a color-singlet pentaquark state q 1 q 2 q 3 q 4q5 . Before starting the spectrum analysis, we first need to define a light cone distribution amplitude for the pentaquark P Q in terms of nonlocal quark fields where k is the momentum of the pentaquark and in the lightcone definition k + = (k 0 + k 3 )/ √ 2 and k − = (k 0 − k 3 )/ √ 2, w i is the quark momentum fraction and the spinor u γ denotes the heavy antiquarkQ with momentum fraction of wQ = 1 − i=1,4 w i , which is at rest at the space-time origin. The letters a-g, i, j, l, and s represent color indices. 
For prompt pentaquark production, the leading-twist contribution comes from the collinear conformal subset [33], where the gauge link can be expressed as In this case, the gluon field G µ (x) ≡ G µ λ (x)T λ lies in the adjoint representation. It should be noted that the gauge links connect to the quark fields in the fundamental representation, which ensures that all of the colored quarks are transported to the space-time origin, and thus the pentaquark is well defined. For differences in the spin-parity of the diquark, we have Γ α,β = C, Cγ µ , Cσ µν , Cγ 5 γ µ , Cγ 5 , which correspond to the scalar, vector, tensor, pseudovector, and pseudoscalar, respectively. The charge conjugation matrix C is de-fined as C = iγ 2 γ 0 in the Pauli-Dirac representation. In the following, we focus only on the scalar and vector diquarks, which are referred to as "good" and "bad" diquarks by Jaffe, respectively [34]. The general QCD confining potential for the multiquarks reads [35] where L( r i ) represents the universal binding interaction of quarks, S ij denotes two-body Coulomb and chromomagnetic interactions, and I = − 4 3 and − 2 3 denote the coefficients of single-gluon interactions in the quarkantiquark and quark-quark cases, respectively. The effective Hamiltonian includes spin-spin interactions inside the diquark and triquark, as well as between them, the spin-orbital and purely orbital interactions, which may be expressed formally as with where m δ and m θ are the constituent masses of the diquark [Qq] and triquark [q q Q ], respectively; H δ SS and Hθ SS describe the spin-spin interactions inside the diquark and triquark, respectively; H δθ SS describes the spinspin interactions of quarks between the diquark and triquark; H SL and H LL correspond to the spin-orbital and purely orbital terms, respectively; S q ( , ) , S Q , and SQ are spin operators for the light quarks, heavy quark, and antiquark, respectively; S δ and Sθ are the spin operators for the diquark and triquark, respectively; L is the orbital angular momentum operator; κ q1q2 and (κ q1q 2 )3 are the spin-spin couplings for a quark-antiquark pair and diquark in the color antitriplet, respectively; and A δ(θ) and B Q are spin-orbit and orbit-orbit couplings, respectively. where we use the notation |S δ ; S δ , SQ , Jθ; J for pentaquark states. Here S δ and Jθ denote the spins of the diquark [Qq] and triquark [q q Q ], respectively; S δ and SQ denote the spins of the diquark and antiquark within the triquarkθ, respectively; and J is the total angular momentum of the pentaquark. In the following, for sim-plicity, we focus only on the scalar and vector diquarks, i.e., S δ ( ) = 0, 1. For J P = 3 2 − , there are four possible pentaquark states, i.e., For J P = 5 2 − , only one pentaquark state exists, i.e., We now consider the specific situation where Q = Q = c, q = q = u and q = d, which means that the pentaquarks are comprised of [cu] [udc]. Then, for the state where J P = 5 2 − , the mass eigenvalue reads where the isospin symmetry is maintained with u = d = q and the small isospin breaking effect is discussed later. Under the basis vectors | 3 2 − i defined in Eq. (7), the mass splitting matrix ∆M for J P = 3 2 − may be obtained It should be noted that | 3 2 − 2 does not mix with other states due to the isospin symmetry. 
In the following, we show that the four of them are and |1 δ ; 1 δ , 1 2Q , 3 2θ ; 3 2 S , 1 L , 5 2 J , and their corresponding Hamiltonians for spin-orbit and orbit-orbit interactions are 3A Q + B Q , where the identical spin-orbit coupling A δ = Aθ = A Q is taken in Eq. (5) for simplicity. According to Eq. (8), the fifth state with quantum number J P = 5 2 + is |1 δ ; 1 δ , 1 2Q , 3 2θ ; 5 2 S , 1 L , 5 2 J , and its corresponding Hamiltonians for spin-orbit and orbit-orbit interaction are −2A Q + B Q . After inputting the spin-spin, spin-orbit, and orbitorbit couplings, and masses of the quarks, we can readily obtain the pentaquark spectrum. For convenience, we give the spin-spin couplings in Table I [ [36][37][38][39], which are extracted from mesons, baryons, and the XY Z spectra in the constituent quark model and diquark model. The expression κ ij = 1 4 (κ ij ) 0 for quark-antiquark coupling comes from the one gluon exchange model. 10 The masses of diquarks [cq] and [bq] are extracted from X(3872) with J P C = 1 ++ and Y b (10890) with J P C = 1 −− in the diquark model, respectively. We find that m [cq] = 1.932GeV and m [bq] = 5.249GeV. In the numerical study of the pentaquark spectrum, the input quark masses are m q = 305MeV, m s = 490MeV, m c = 1.670GeV, and m b = 5.008GeV [37,38]. The spin-orbit coupling A Q takes 30MeV and 5MeV for c and b quarks, respectively; and the orbit-orbit coupling B Q takes 278MeV and 408MeV for c and b quarks, respectively [39][40][41]. For triquarkθ [udc], the approximate relation m θ m c + 2m q = 2.280GeV is employed. The charmonium pentaquark spectra with quantum number J P = 3 2 − , 5 2 − , and 5 2 + are depicted in Fig. 1. We find that among the many predicted pentaquark states, that with a mass of 4.349GeV probably corresponds to the LHCb P c (4380) state and that with a mass of 4.453GeV to the P c (4450). The minimum mass splitting between 5 2 + and 3 2 − states is about 100MeV, which explains the experimental measurements well. In general, it should be noted that the diquark-triquark model may give a large binding energy compared with the molecular model. Furthermore, we may also conclude that P c (4380) should not have the quantum number J P = 5 2 + or J P = 5 2 − by referring to Fig. 1. This is consistent with the LHCb measurement, where the best fitting result shows that P c (4380) and P c (4450) probably have quantum numbers of according to Eq. (7), then it naively has more decay channels and hence a broad decay width. We show the bottomonium pentaquark spectra in Fig. 3. It should be noted that many novel pentaquark states are predicted in Figs. 1 and 3 according to the diquark-triquark model. In particular, those with relatively narrow decay widths and large masses are more likely to be detected in experiments. For the charmonium pentaquark with S = 0, the predicted state with a mass of 4.329GeV and J P = 3 2 − may be reconstructed through the J/ψp invariant mass distribution in the Λ 0 b → J/ψpK − decay channel, in a similar manner to the measurement of P c (4380) and P c (4450). Furthermore, the state with mass of 4.433GeV and J P = 5 2 − may be reconstructed through the J/ψ∆ + invariant mass distribution in the Λ 0 b → J/ψ∆ + K − decay channel. For the state with mass 4.085GeV, the decay channels with J/ψp in the final states would be difficult to measure due to the small phase space. For states over 4.6GeV in the left diagram in Fig. 
The reconstruction of charmonium pentaquark states with strangeness S = -1 is very similar to that of the P_c states with S = 0. Given this feature, we suggest that the predicted charmonium pentaquark states (S = -1) with masses of 4.516 GeV, 4.540 GeV, and 4.682 GeV may be detected through the Ξ_b^0 → J/ψΣ^+K^-, J/ψΣ^0K̄^0 and Λ_b^0 → J/ψΛφ, J/ψΣ^0φ channels, whereas the state with a mass of 4.624 GeV, spin-parity J^P = 5/2^-, and strangeness S = -1 may be reconstructed through the J/ψΣ^+(1385) spectrum in the Ξ_b^0 → J/ψΣ^+(1385)K^- decay channel. The other states shown in the right diagram of Fig. 1 would be relatively difficult to measure due to either the small phase space or a possibly broad decay width. The reconstruction of bottomonium pentaquarks is more demanding because no hadron can decay directly into them; searching for bottomonium pentaquarks must therefore rely on their prompt production in hadron-hadron collisions or in lepton-hadron deep inelastic scattering. In conclusion, we demonstrated that the diquark-triquark model may provide a good explanation of the pentaquarks discovered by the LHCb Collaboration. The small mass splitting between P_c(4450) and P_c(4380), as well as their distinctive decay widths, can be understood well within this model. We predicted more heavy pentaquark states, which may be confirmed by the LHCb, JLab, or Belle-II experiments; the observation or non-observation of these states will provide a direct test of the diquark-triquark model. We also consider that it would be useful to analyze the J/ψΣ^+ (J/ψΛ) invariant mass spectrum near 4.682 GeV in experiments such as LHCb, in the Ξ_b^0 → J/ψΣ^+K^- and Ξ_b^- → J/ψΛK^- decay channels, where charged and neutral charmonium pentaquarks with J^P = 5/2^+ and S = -1 may exist.
4,012.4
2015-10-29T00:00:00.000
[ "Physics" ]
MAKING PROTOTYPE DYE-SENSITIZED SOLAR CELLS (DSSC) BASED ON NANOPOROUS TiO2 USING AN EXTRACT OF MANGOSTEEN PEEL (Garcinia mangostana) One alternative energy source available in nature is the Sun's energy. The dye-sensitized solar cell (DSSC) is a photoelectrochemical cell consisting of a photoelectrode, dye, electrolyte, and counter electrode. The prototype DSSC fabricated here utilizes carotene from the dye extract of mangosteen peel pigment (Garcinia mangostana). This study aims to create a DSSC and determine the efficiency it produces. The DSSC consists of a pair of FTO (fluorine-doped tin oxide) glass plates facing each other. The glass plates act as the electrode and counter electrode, separated by a redox electrolyte (I⁻/I₃⁻) and arranged to form a sandwich. On the electrode, a porous nanocrystalline TiO2 layer is deposited together with the dye extract of mangosteen peel pigment, while the counter electrode is coated with a layer of platinum. This article presents experimental data on the absorbance and conductivity of the mangosteen peel dye extract for application in a DSSC. Absorbance was tested using a Shimadzu UV-Visible 1601 PC spectrophotometer, and the electrical properties were tested using an Elkahfi 100 I-V meter. DSSC fabrication was carried out using the mangosteen peel dye extract with a spin-coating technique. The results showed that the dye extract had an absorbance spectrum in the 380-520 nm range. From tests using an AM 1.5G solar simulator (100 mW/cm²), it was found that the volume of the TiO2 precursor affected the performance of the DSSC, and the overall conversion efficiency was 0.092%. Introduction Energy needs worldwide are increasing along with the development of technology, and researchers in Indonesia and elsewhere are actively searching for alternative energy sources to replace fossil fuels. One considerable alternative energy source available in nature is solar energy. In Indonesia in particular, solar energy is very abundant and large enough to serve as an alternative energy source. To realize this, a system is needed to convert solar energy into electrical energy. One way of utilizing solar energy is through solar cells, which are a promising alternative [1]. Silicon solar cells are limited not only by their cost but also by their narrow absorption spectrum. The known energy distribution of sunlight comprises about 4% ultraviolet and 96% visible light, while the main absorption spectrum of silicon solar cells lies in the ultraviolet and violet [2]. This shows that silicon solar cells cannot use nearly 96% of the energy from sunlight [3]. Attempts to extend the absorption spectrum from the ultraviolet region into the visible region are now realized as the dye-sensitized solar cell [2], where dyes help the DSSC broaden its absorption spectrum [4].
The absorption of light in the 380-520 nm range and a molar extinction coefficient greater than 10⁵ make carotenoids potential sensitizers in photovoltaic solar cells and other artificial photochemical devices [5]. Carrots (Daucus carota), melinjo fruit (Gnetum gnemon), and mangosteen peel (Garcinia mangostana) are widely consumed natural ingredients that contain carotenoids, although the carotenoid content varies with the source. The working principle of the DSSC is the conversion of light energy into electricity at the molecular scale through electron-transfer reactions (summarized in the reaction scheme below). The process begins with the excitation of electrons in the dye due to the absorption of photons; this is where the properties of TiO2 play a role. When photons from sunlight strike the working electrode of the DSSC, the photon energy is absorbed by the dye attached to the TiO2 surface [6], so the dye gains the energy needed for excitation. The excited dye injects an electron into the conduction band of TiO2, which acts as an electron acceptor or collector [7]. The dye molecule left behind is thereby oxidized. The electron is then transferred through the outer circuit to the counter electrode (the electrode carrying the platinum layer). The electrolyte (an iodide/triiodide couple) acts as an electron mediator, producing a cyclic process in the cell. The triiodide ion captures electrons arriving from the outer circuit with the help of platinum as a catalyst, so iodide ions gain electrons. An iodide ion in the electrolyte then delivers an electron to the oxidized dye; the electrolyte thus provides substitute electrons for the oxidized dye molecules, and the dye returns to its initial state [8]. In general, Figure 1 shows the DSSC comprising the dye-sensitized organic material, a nanocrystalline TiO2 layer, an electrolyte solution containing the redox pair I⁻/I₃⁻, and the FTO glass substrate as the working electrode. The area and thickness of the semiconductor layer govern the dye loading and hence the optical density, which determines the light-absorption efficiency [9]. The optical density represents the transmission of an optical element at a certain wavelength; for radiation passing through an object, it is the ratio between the initial intensity and the transmitted intensity. The DSSC is a sandwich-shaped structure in which two electrodes, a TiO2 electrode with dye and a counter electrode made of platinum-coated FTO glass, are separated by the electrolyte to form a photoelectrochemical cell. The counter electrode is made of FTO glass coated with platinum because platinum has sufficient conductivity, heat resistance, and electrocatalytic activity for triiodide reduction. TiO2 is a photocatalytic material with strong oxidizing power, high photostability, and redox selectivity [11]. An important requirement for increasing the catalytic activity of TiO2 is to increase its surface area, which depends on the crystal size.
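For reference, the electron-transfer cycle described above can be summarized by the standard DSSC reaction scheme; this is a generic textbook summary of the cycle described in the text, not new data from this study:

$$\begin{aligned}
\text{Dye} + h\nu &\rightarrow \text{Dye}^{*} && \text{(photoexcitation)}\\
\text{Dye}^{*} &\rightarrow \text{Dye}^{+} + e^{-}(\mathrm{TiO_2}) && \text{(injection into TiO}_2\text{)}\\
2\,\text{Dye}^{+} + 3\,I^{-} &\rightarrow 2\,\text{Dye} + I_3^{-} && \text{(dye regeneration)}\\
I_3^{-} + 2\,e^{-}(\mathrm{Pt}) &\rightarrow 3\,I^{-} && \text{(reduction at the counter electrode)}
\end{aligned}$$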
The physical and chemical properties of TiO2 depend on its size, morphology, and crystal structure [12]. TiO2 has three crystalline forms, namely anatase, rutile, and brookite. Anatase-phase TiO2 crystals are more photoactive than rutile; anatase is considered the most favorable phase for photocatalysis and solar energy conversion [13]. TiO2 by itself is only capable of absorbing ultraviolet light (350-380 nm). To extend the absorption of TiO2 into the visible region, a dye layer that absorbs visible light is required; the dye serves as the sensitizer [14]. This study presents experimental data on the carotenoid content of the mangosteen peel pigment, which can be used as a sensitizer. A material analysis was performed on the optical and electrical properties of the organic extract of mangosteen peel (Garcinia mangostana). The extracts from the natural ingredients used in the study showed an absorbance similar to that of β-carotene in the 380-520 nm range, with the optimum absorbance and conductivity found for the mangosteen peel. The study comprises three stages: extraction of the mangosteen peel pigment, measurement of the absorption spectrum, and measurement of the conductivity of the extract [15]. Various studies on DSSCs using natural dyes from plant extracts have been conducted, and they prove that natural dyes can produce a photovoltaic effect. The present work yields a higher efficiency than the previous research by Basitoh Djaelani (2014) [20]. What is new in this study is the TiO2 coating technique, which uses spin coating, whereas previous research still used the doctor-blade technique. B. Preparation This preparation stage includes cleaning the tools for the extraction and preparing the TiO2 paste. The preparation for extraction is done by cleaning the tools (mortar, fluorine-doped tin oxide (FTO) glass, glass bottles, beakers, and droppers) with an ethanol solution and an ultrasonic cleaner, to remove materials that cannot be cleaned with water alone. Clean glass affects the test results of samples deposited on the glass substrate. C. FTO Glass Cleaning (Fluorine-Doped Tin Oxide) 100 ml of 70% alcohol is poured into a beaker. The 2.5 x 2.5 cm FTO glass to be cleaned is placed in the beaker. The ultrasonic cleaner is filled with distilled water to the specified level, and the beaker containing the alcohol and FTO glass is placed in the ultrasonic cleaner for 30 minutes. After 30 minutes, the glass is dried using a hair dryer, and the resistance of the FTO glass is then measured with a digital multimeter. D. Making the TiO2 Nano Paste 0.5 gram of TiO2 nanopowder is dissolved in 2 ml of ethanol and stirred using a vortex stirrer at 200-300 rpm for 30 minutes. The TiO2 paste thus formed is transferred into bottles covered with aluminum foil and stored away from direct sunlight to reduce evaporation.
E. Mangosteen Peel Extraction (Garcinia mangostana) 25 grams of mangosteen peel is weighed using a digital scale. The peel is then crushed and ground using a mortar. The finely ground peel is dissolved in 125 ml of ethanol solvent at a ratio of 1:5 and stirred for 60 minutes using a vortex stirrer at 300 rpm and 60 °C. The solution is left to stand for 24 hours and filtered with Whatman No. 42 filter paper. The extract is then chromatographed by pouring it into a chromatographic column and waiting until a dark red extract is obtained. F. Making the Working Electrode The working electrode is made of FTO conductive glass on which the TiO2 nano paste is deposited by spin coating. On the 2.5 x 2.5 cm FTO glass, an area of 2 x 1.5 cm is defined for TiO2 deposition on the conductive surface, with tape applied to the sides of the FTO as a barrier. The TiO2 paste is dripped onto the taped FTO glass in the spinner and then spun at 200-300 rpm for a predetermined time. The TiO2-coated FTO glass is heated on a hotplate at 500 °C for 60 minutes and then cooled to room temperature. The scheme of the TiO2 paste deposition area is shown in Figure 2. G. Making the Electrolyte Solution 0.8 grams (0.5 M) of solid potassium iodide (KI) is mixed into 10 ml of polyethylene glycol 400 and stirred. Next, 0.127 grams (0.05 M) of iodine (I2) is added to the solution, which is then stirred with a vortex stirrer at 300 rpm for 30 minutes (a quick check of these molarities is given in the sketch below). The finished electrolyte solution is stored in a sealed container covered with aluminum foil. H. Making the Counter Electrode The counter electrode is FTO conductive glass coated with a thin layer of platinum (10% hexachloroplatinic(IV) acid). To make the counter electrode, 1 ml of 10% hexachloroplatinic(IV) acid is mixed with 207 ml of isopropanol and stirred using a vortex stirrer at 300 rpm for 30 minutes. The FTO glass is heated on a hotplate at 250 °C for 15 minutes, and then 3 ml of the platinum solution is dropped onto the surface of the FTO glass substrate. The glass with the deposited platinum is then cooled to room temperature. The scheme of the platinum deposition area is shown in Figure 3. I. Dye Absorption on the TiO2 Layer The FTO conductive glass substrate with the deposited TiO2 layer is soaked in the mangosteen peel dye extract for 24 hours. J. Making the DSSC Sandwich The FTO glass coated with TiO2 and soaked in the dye solution is called the working electrode. The working electrode is wetted with the electrolyte solution and then covered with the platinum-coated glass, called the counter electrode. The DSSC stack is then clamped on the left and right sides so that it does not shift. The finished DSSC is shown in Figure 4. K. Natural Dye Extraction The study used ethanol to dissolve the carotenoids extracted from the mangosteen peel pigment. The ingredients to be extracted were cleaned with water; then 25 grams of mangosteen peel pigment was ground and mixed with 50 ml of ethanol, and stirred for 60 minutes at 200 rpm using a magnetic stirrer at room temperature. After stirring, the mixture was left to stand for 24 hours and filtered using Whatman No. 42 filter paper. After filtration, the solution was stored in a sealed container protected from sunlight.
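As a consistency check on the electrolyte recipe in Section G, the short sketch below recomputes the molar concentrations from the stated masses and the 10 ml solvent volume. The standard molar masses of KI and I2 are the only inputs not taken from the text:

```python
# Molarity M = mass / (molar_mass * volume); volume in liters
def molarity(mass_g, molar_mass_g_per_mol, volume_l):
    return mass_g / (molar_mass_g_per_mol * volume_l)

V = 0.010  # 10 ml of polyethylene glycol 400, in liters

print(f"KI: {molarity(0.800, 166.0, V):.2f} M")   # -> 0.48 M, i.e. ~0.5 M as stated
print(f"I2: {molarity(0.127, 253.8, V):.3f} M")   # -> 0.050 M, as stated
```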
L. Absorption Analysis A spectrophotometric method was used for the simultaneous determination of β-carotene [16]. Spectrophotometry is suitable for β-carotene analysis because the pigments absorb radiation in the visible region [17]. The content of each extracted material was analyzed using a Shimadzu UV-Visible 1601 PC spectrophotometer to determine the absorbance properties of the material. The wavelength range of the absorption-spectrum analysis in visible light is 300-800 nm. From the measured absorbance characteristics, the type of dye content of the natural material can be identified [18]. M. Conductivity of the Material The conductivity measurements using an Elkahfi 100 I-V meter were performed in the dark, by covering all parts of the container with aluminum foil, and under irradiation using a 100 W halogen light source with an energy intensity of 680.3 W/m². Halogen lamps are used because they have a full spectrum resembling visible sunlight [19]. From the measured I-V characteristics, the conductivity σ of the various materials is determined. For an organic solution, the conductivity follows from

σ = l / (R A),

where σ is the conductivity (Ω⁻¹·m⁻¹), R is the resistance (Ω), l is the distance between the two electrodes (m), and A is the cross-sectional surface area of the electrode (m²); a usage sketch is given below. Result and Discussion This research used natural ingredients to produce a carotenoid extract from mangosteen peel, extracted with ethanol at a fixed ratio of 1 gram of natural material to 2 ml of solvent. The absorbance was then tested using the Shimadzu UV-Visible 1601 PC spectrophotometer, and from the I-V characteristics measured with the Elkahfi 100 I-V meter the conductivity of the mangosteen peel dye was determined. Based on Figure 5, the mangosteen dye absorbs over a considerable wavelength range with fairly good absorbance; the dye spectrum of the mangosteen peel is similar to that of β-carotene, with its main absorption at 380-520 nm, a peak near 350 nm, and an amplitude reaching 0.1. The mangosteen peel can therefore absorb sunlight well and help maximize the performance of the DSSC. However, the current produced by the mangosteen peel extract is still very small, on the order of 3 × 10⁻⁴, resulting in low efficiency; another material is therefore needed to improve the efficiency.
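A minimal sketch of how σ = l/(RA) is applied to the I-V data: the resistance follows from the applied 9 V source and the measured current, and the cell geometry enters through l and A. The geometry values below are placeholders for illustration only; they are not reported in the paper.

```python
V = 9.0              # applied voltage (V), as stated in the text
I_dark = 3.21e-8     # dark current: 3.21e-5 mA converted to amperes

R = V / I_dark       # resistance from Ohm's law

# Hypothetical cell geometry (NOT from the paper):
l = 1.0e-3           # distance between the electrodes (m)
A = 2.0e-6           # electrode cross-sectional area (m^2)

sigma = l / (R * A)  # conductivity in ohm^-1 m^-1
print(f"R = {R:.3g} ohm, sigma = {sigma:.3g} ohm^-1 m^-1")
```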
Conclusion The measurement and analysis of the absorption spectra of the natural dye extract of mangosteen peel have been carried out with the ratio of the mass of natural material to the volume of solvent kept constant. The results show that the dye extracted from the natural material has an absorption spectrum similar to that of β-carotene, with absorption at wavelengths between 380 and 520 nm and a peak in the range of 450 nm. The I-V measurements with the Elkahfi 100 used the same 9 V source to generate an electric current from the mangosteen peel extract: the current in the dark is 3.21 × 10⁻⁵ mA, while under irradiation it is 3.46 × 10⁻⁵ mA. The I-V measurements also give a conductivity of the mangosteen extract of 5.20 × 10⁻⁶ Ω⁻¹·m⁻¹ in the dark and 2.34 × 10⁻⁵ Ω⁻¹·m⁻¹ under irradiation. The measured electric currents and conductivity produced by the mangosteen peel extract indicate that the mangosteen fruit deserves further investigation as a DSSC sensitizer. The efficiency produced by the DSSC with the mangosteen peel pigment and the spin-coating method reached 0.092%. Further research on better engineering and application of the mangosteen rind dye is therefore needed. Figure 5. Absorbance of the natural pigment extract of mangosteen peel.
3,760.4
2018-10-31T00:00:00.000
[ "Materials Science", "Environmental Science", "Chemistry" ]
Generation of genuine all-way entanglement in defect-nuclear spin systems through dynamical decoupling sequences Multipartite entangled states are an essential resource for sensing, quantum error correction, and cryptography. Color centers in solids are one of the leading platforms for quantum networking due to the availability of a nuclear spin memory that can be entangled with the optically active electronic spin through dynamical decoupling sequences. Creating electron-nuclear entangled states in these systems is a difficult task as the always-on hyperfine interactions prohibit complete isolation of the target dynamics from the unwanted spin bath. While this emergent cross-talk can be alleviated by prolonging the entanglement generation, the gate durations quickly exceed coherence times. Here we show how to prepare high-quality GHZ$_M$-like states with minimal cross-talk. We introduce the $M$-tangling power of an evolution operator, which allows us to verify genuine all-way correlations. Using experimentally measured hyperfine parameters of an NV center spin in diamond coupled to carbon-13 lattice spins, we show how to use sequential or single-shot entangling operations to prepare GHZ$_M$-like states of up to $M=10$ qubits within time constraints that saturate bounds on $M$-way correlations. We study the entanglement of mixed electron-nuclear states and develop a non-unitary $M$-tangling power which additionally captures correlations arising from all unwanted nuclear spins. We further derive a non-unitary $M$-tangling power which incorporates the impact of electronic dephasing errors on the $M$-way correlations. Finally, we inspect the performance of our protocols in the presence of experimentally reported pulse errors, finding that XY decoupling sequences can lead to high-fidelity GHZ state preparation. Introduction Generating and distributing entanglement is one of the most fundamental yet non-trivial requirements for building large-scale quantum networks. Entanglement is the key ingredient of quantum teleportation, robust quantum communication protocols, and alternative quantum computation models such as the one-way quantum computer [1,2,3] or fusion-based quantum computation [4]. In the context of cryptography, entangled states ensure secure communications via quantum key distribution or quantum secret sharing protocols [5,6,7,8,9,10,11]. Error detection [12,13] and correction [14,15,16,17,18,19] schemes are based on encoding information onto entangled states, such that errors can be detected or corrected by utilizing the correlations present in entangled states. At the same time, distributed entanglement increases the sensitivity and precision of measurements [20,21].
Defect platforms offer an electronic qubit featuring a spin-photon interface that enables the generation of distant entanglement [22,23,24], as well as nuclear spins that serve as the long-lived memories needed for information storage and buffering. Several protocols based on remote entanglement or entanglement within the electron-nuclear spin system have already been demonstrated in these platforms, including quantum teleportation [25], quantum error correction [17,18,19], enhanced sensing [26,27,28], and entanglement distillation [29]. The electron-nuclear spin entanglement is usually realized through dynamical decoupling (DD) sequences; large entangled states can be created by applying consecutive sequences with the appropriate interpulse spacings. By tuning the interpulse spacings, one can select different nuclear spins to participate in entangling gates while decoupling the electron from the remaining always-on coupled nuclear spin bath [30]. A major issue is that entangled states often decohere faster than product states [31,32], so entanglement must be preserved for long enough times to complete quantum information tasks. Additionally, in defect platforms, the unwanted spin bath sets a lower bound on the duration of entangling gates, which should be long enough to suppress unwanted interactions that lower the quality of the target entangled states. Despite this challenge, generation of electron-nuclear entangled states has been realized experimentally in NV centers [33] and SiV centers in diamond [34,35], and in SiC defects [36]. In particular, Bradley et al. demonstrated electron-nuclear GHZ states of up to 7 qubits in an NV defect in diamond [33] by combining DD sequences with direct RF driving of the nuclear spins, accompanied by refocusing pulses to extend the entanglement lifetime. This direct driving was necessary to improve the selective coupling of individual nuclear spins. Although successful, this method introduces experimental overhead and heating of the sample due to the direct RF driving of the nuclei, potentially creating scalability issues. Another problem is that as the number of parties contributing to the entangled state increases, a larger gate count and longer sequences are needed, exacerbating dephasing. Failure to provide optimal isolation from unwanted nuclei further deteriorates the electron-nuclear entangled states. Therefore, a more efficient approach to generating electron-nuclear spin entanglement within time constraints, and to verifying its existence, is necessary for large-scale applications. In this paper, we address these challenges by introducing a framework for preparing high-quality GHZ states of up to 10 qubits within time constraints. Using experimental parameters from a 27-nuclear-spin register well characterized by Taminiau et al.
[37], we show how to improve sequential entanglement generation methods by minimizing the cross-talk induced by the unwanted nuclear spin bath. We find that it is possible to prepare GHZ-like states with maximal all-way correlations in excess of 95% and gate errors lower than 0.05% due to residual entanglement with unwanted nuclei. We show how to prepare GHZ-like states with single-shot operations, reducing gate times at least twofold compared to the sequential scheme while offering comparable decoupling capabilities. We present a closed-form expression for the M-tangling power of the evolution operator and use it to develop a method for verifying genuine multipartite entanglement. Remarkably, this metric depends only on two-qubit Makhlin invariants and is closely related to the one-tangles we introduced in Ref. [38]. This simplification allows us to systematically determine the DD sequences that maximize all-way correlations, as desired for generating multipartite entangled states. Further, we analyze the entanglement of mixed electron-nuclear states and derive a non-unitary M-tangling power that captures residual entanglement links arising from unwanted nuclei. We incorporate electronic dephasing errors into the M-tangling power and derive a simple closed-form expression. Finally, we study the impact of pulse control errors on the M-way correlations. The paper is organized as follows. In Sec. 2, we discuss methods for generating electron-nuclear spin entangled states using DD sequences. In Sec. 3, we introduce the M-tangling power of an evolution operator. In Sec. 4, we show how to prepare GHZ_M-like states through sequential or single-shot entanglement protocols. In Sec. 5, we quantify the entanglement of electron-nuclear mixed states by tracing out unwanted spins. In Sec. 6, we study the non-unitary M-tangling power of the electron-nuclear target subspace, which encodes correlations arising from the entire electron-nuclear system. Finally, in Sec. 7, we study the effect of electronic dephasing errors as well as pulse control errors on the M-tangling power. 2 Establishing electron-nuclear entanglement One way to generate electron-nuclear entanglement is through DD sequences, which are trains of π-pulses applied to the electron spin, interleaved with free-evolution periods. These sequences are constructed by concatenating a basic π-pulse unit of duration t a certain number of iterations, N. Well-known examples are the Carr-Purcell-Meiboom-Gill (CPMG) [39,40,41,42] and Uhrig (UDD) [43,44] sequences, which have been used experimentally, for example, in Refs. [45,30,17,33,46], or proposed for defects in Ref. [47]. In the defect-nuclear spin system, DD sequences serve a two-fold purpose: i) they average out the interactions between the electron and unwanted nuclei and, ii) under so-called resonance conditions, they selectively entangle a target nuclear spin with the electron [30]. These resonances correspond to specific values of t and are generally distinct for each nuclear spin, as they depend on the hyperfine (HF) parameters of each nucleus. By composing consecutive sequences with different parameters (i.e., unit times t and iterations N), we can thus create entanglement between the electron and multiple nuclei. We refer to this standard experimental approach of entangling one nuclear spin at a time with the electron as "sequential".
In Ref. [38], we showed that, alternatively, single-shot operations can be used to entangle the electron with a subset of nuclei from the register, significantly reducing gate times compared to the sequential approach. Figure 1 summarizes the two approaches to generating electron-nuclear entanglement: Fig. 1(a) depicts the multi-spin scheme implemented by a single-shot entangling operation, and Fig. 1(b) shows the sequential scheme, which requires M − 1 entangling gates to prepare GHZ_M-like states. The M − 1 entangling gates are realized by composing sequences with different interpulse spacings and iterations of the unit, such that a different nuclear spin is selected from the register based on the resonance condition. GHZ_M states are created by initializing the electron in the |+⟩ state, polarizing the nuclear spins in the |0⟩ state [48,17], and then performing electron-nuclear entangling gates via DD sequences. Probabilistic (deterministic) initialization of the nuclear spins in |0⟩ can be achieved through measurement-based (SWAP-based) initialization [49]. After state initialization, one can perform a DD sequence to generate entanglement. Setting the sequence unit time t to a nuclear spin resonance time forces that nuclear spin to rotate along an axis conditioned on the electron spin state. If the axes are anti-parallel and along the ±x direction, and the unit is iterated an appropriate number of times, the nuclear spin accumulates a rotation angle of ϕ = π/2, realizing a CR_x(±π/2) electron-nuclear entangling gate [30]. Repeating this process while changing the unit time t and iterations N of the subsequent sequence (and assuming the k-th target spin is rotated only during the k-th evolution; trivial non-entangling rotations could occur on the k-th nucleus during the remaining evolutions) creates a state equivalent to GHZ_M = (|0⟩^⊗M + |1⟩^⊗M)/√2 up to local operations:

|ψ_M⟩ = (1/√2) ( |0⟩ ⊗_k |m_k y⟩ + |1⟩ ⊗_k |−m_k y⟩ ),   (1)

where m_k = ±1 (the sign m_j^{(k)} can be +1 or −1 for the k-th nucleus in each evolution), and we defined |±y⟩ = (|0⟩ ± i|1⟩)/√2. For example, if all the gates are CR_x(π/2), then the resulting state is (|0⟩|−y⟩^⊗k + |1⟩|y⟩^⊗k)/√2. This is the idea behind the sequential entanglement protocol. The multi-spin entanglement protocol produces a more complicated state, since the electron-nuclear evolution now becomes a CR_xz gate (see Ref. [38]), but the resulting state is still locally equivalent to GHZ_M. In this latter setup, multiple nuclei are simultaneously entangled with the electron via a single-shot operation. Quantifying the quality of GHZ_M-like states using a fidelity overlap is computationally costly, since we would need to optimize over local gates, and this optimization becomes harder for larger entangled states. In the following section, we introduce an entanglement metric that allows us to quantify genuine all-way correlations in an arbitrarily large electron-nuclear spin system, using only the evolution operator generated by π-sequences. With this analysis we are able to translate the M-tangling capability of such an evolution operator, irrespective of the total system size, into simple spin-spin correlation metrics between the central spin and each of the M − 1 nuclear spins. Our analysis throughout the paper is completely general: it is not restricted by the choice of DD sequence or defect system, and it is valid for any type of nuclear spin present in the register (provided it has spin I = 1/2). The code used to simulate all of the following results can be found in [50].
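As a minimal illustration of the sequential protocol just described, the sketch below composes two CR_x(±π/2) gates acting on |+⟩|0⟩|0⟩ and checks that the output is exactly (|0⟩|−y⟩|−y⟩ + |1⟩|y⟩|y⟩)/√2, the GHZ₃-like state quoted in the text. It uses ideal controlled rotations (no spin bath, no pulse errors), so it only illustrates the gate algebra; the function names are ours.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rx(theta):
    """Single-qubit rotation exp(-i theta X / 2)."""
    return np.cos(theta/2)*I2 - 1j*np.sin(theta/2)*X

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for o in ops:
        out = np.kron(out, o)
    return out

def cr_x(phi, n_nuclei, target):
    """CR_x(+/-phi): rotate one nuclear spin by +phi (-phi) about x
    when the electron is in |0> (|1>); all other nuclei are untouched."""
    P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
    ops0 = [rx(+phi) if k == target else I2 for k in range(n_nuclei)]
    ops1 = [rx(-phi) if k == target else I2 for k in range(n_nuclei)]
    return np.kron(P0, kron_all(ops0)) + np.kron(P1, kron_all(ops1))

plus = np.array([1, 1], dtype=complex)/np.sqrt(2)
zero, one = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi0 = kron_all([plus, zero, zero])          # |+>|0>|0>

U = cr_x(np.pi/2, 2, 1) @ cr_x(np.pi/2, 2, 0)  # two sequential entangling gates
psi = U @ psi0

my = np.array([1, -1j], dtype=complex)/np.sqrt(2)   # |-y>
py = np.array([1, +1j], dtype=complex)/np.sqrt(2)   # |+y>
target = (kron_all([zero, my, my]) + kron_all([one, py, py]))/np.sqrt(2)
print(abs(np.vdot(target, psi))**2)  # -> 1.0 (GHZ3-like up to local operations)
```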
M-tangling power Genuine multipartite entanglement of GHZ_M-like states can be verified with metrics such as entanglement witnesses [51,52,53], concentratable entanglement [54], or the so-called M-tangles [55,56,57]. We choose to work with the latter, which are usually defined in terms of a state vector (or, more generally, a density operator). The M-tangles, τ_M(|ψ⟩) ∈ [0, 1], distinguish the GHZ entanglement class from other entanglement classes. For example, the three-tangle saturates to 1 for the GHZ₃ state, whereas it vanishes for the |W⟩ = (1/√3)(|100⟩ + |010⟩ + |001⟩) state. The M-tangles are invariant under permutations of the qubits and under SLOCC, and they are entanglement monotones [58]. Since they are defined in terms of a state vector, their calculation requires an assumption about the initial state of the register, which is then evolved under the π-pulse sequence. We circumvent this issue by focusing instead on the capability of a gate to saturate all-way correlations and thus prepare states locally equivalent to GHZ_M. We introduce the M-tangling power of a unitary, which we define as the average of the M-tangle, ε_{p,M}(U) := ⟨τ_M(U ρ₀ U†)⟩, over all initial product states ρ₀. The formulas for the M-tangles expressed in terms of the state vector can be found in Appendix A.1 and in Refs. [55,57,56]. In our system, the total evolution operator (of the electron and M − 1 nuclear spins) produced by DD sequences has the form

U = Σ_{j=0,1} σ_jj ⊗ R_j^{(1)} ⊗ ... ⊗ R_j^{(M−1)},   (2)

where σ_jj ≡ |j⟩⟨j| are projectors onto two of the levels of the electron spin multiplet, and R_j^{(l)} is the rotation that acts on the l-th nuclear spin and, in general, depends on the electron spin state. Because the evolution is controlled only by the electron, we can compactly express the M-tangle of a given state as [see Appendix A.2]

τ_M(ρ) = 2^M Tr( ρ^⊗2 P ),   (3)

where ρ^⊗2 is the density matrix of two copies of the M-qubit system living in the Hilbert space H_M^⊗2, and P is the product of projectors onto the antisymmetric (or symmetric) space of the i-th subsystem and its copy, i.e.,

P = Π_{j=1}^{M} P^(−)_{j,j+M},   (4)

where P^(±)_{j,j+M} = (1/2)(1 ± SWAP_{j,j+M}), and SWAP_{j,j+M} = Σ_{α,β∈{0,1}} |α⟩_j |β⟩_{j+M} ⟨β|_j ⟨α|_{j+M}. Note that we have fixed system "1" to be the electron's subspace. (If the control qubit corresponds to a different subspace, then for odd M the product of antisymmetric projectors needs to exclude the control system and its copy, on whose sectors we instead apply P^(+).) For even M ≥ 4, Eq. (3) holds for density matrices ρ obtained from any arbitrary U, whereas for odd M, Eq. (3) holds strictly for states obtained from CR-evolutions. (We verified this numerically by comparing with the exact expressions of τ_M that hold for states obtained by arbitrary evolutions.) By averaging Eq. (3) over all product initial states, we prove that the M-tangling power of an operator U has the simple expression [Appendix A.2]

ε_{p,M}(U) = 2^M Tr( U^⊗2 Ω_{p₀} (U†)^⊗2 P ),   (5)

with Ω_{p₀} = ⊗_{j=1}^{M} [2 P^(+)_{j,j+M} / (d(d+1))] the two-copy average over product initial states, where d = 2 (the dimension of the qubit subspace). Equation (5) is exact for any arbitrary U(M) for even M ≥ 4, whereas for odd M it holds only for CR-type evolution operators of the form of Eq. (2). The simple structure of CR-type evolution operators generated by π-sequences allows us to derive analytically a closed-form expression for the M-tangling power for an arbitrarily large nuclear spin register coupled to the electron.
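For concreteness, the following sketch evaluates the three-tangle directly from state amplitudes using the standard Coffman-Kundu-Wootters polynomial τ₃ = 4|d₁ − 2d₂ + 4d₃|, a known closed form for pure three-qubit states that is not specific to this paper, and confirms the values quoted above: τ₃ = 1 for GHZ₃ and τ₃ = 0 for |W⟩.

```python
import numpy as np

def three_tangle(psi):
    """Coffman-Kundu-Wootters three-tangle of a pure 3-qubit state.
    psi: length-8 amplitude vector ordered as |ijk>."""
    a = psi.reshape(2, 2, 2)
    d1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d2 = (a[0,0,0]*a[1,1,1]*a[0,1,1]*a[1,0,0]
          + a[0,0,0]*a[1,1,1]*a[1,0,1]*a[0,1,0]
          + a[0,0,0]*a[1,1,1]*a[1,1,0]*a[0,0,1]
          + a[0,1,1]*a[1,0,0]*a[1,0,1]*a[0,1,0]
          + a[0,1,1]*a[1,0,0]*a[1,1,0]*a[0,0,1]
          + a[1,0,1]*a[0,1,0]*a[1,1,0]*a[0,0,1])
    d3 = (a[0,0,0]*a[1,1,0]*a[1,0,1]*a[0,1,1]
          + a[1,1,1]*a[0,0,1]*a[0,1,0]*a[1,0,0])
    return 4*abs(d1 - 2*d2 + 4*d3)

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1/np.sqrt(2)
w = np.zeros(8, dtype=complex); w[1] = w[2] = w[4] = 1/np.sqrt(3)
print(three_tangle(ghz))  # -> 1.0
print(three_tangle(w))    # -> 0.0
```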
In Appendix A.3, we prove that the M-tangling power of the operator U of Eq. (2) is given in closed form [Eq. (6)] in terms of the two-qubit Makhlin invariants G₁ of the electron-nuclear pairs alone, where G₁ is the so-called Makhlin invariant [59], which we showed in Ref. [38] to be given, for a π-pulse sequence, by

G₁ = [ cos(ϕ₀/2) cos(ϕ₁/2) + (n₀ · n₁) sin(ϕ₀/2) sin(ϕ₁/2) ]²,   (7)

where ϕ₀ (ϕ₁) is the rotation angle that the k-th nuclear spin undergoes when the electron is in the |0⟩ (|1⟩) state, and n₀ (n₁) is the corresponding rotation axis characterizing the evolution of the k-th nuclear spin under the DD sequence. G₁ is related to the bipartite entangling power of the nuclear spin with the remaining register, i.e., the nuclear one-tangle, given by ε_p^nuclear = (2/9)(1 − G₁) [38]. The nuclear one-tangle is a faithful metric of selectivity: its minimization ensures that the spin is decoupled from the register, while its maximization implies that the particular spin attains maximal correlations with the register. The remarkable simplicity of Eq. (6) allows us to check whether a controlled unitary of the form of Eq. (2) is capable of preparing GHZ_M-like states. Saturating the bounds on the all-way correlations between the electronic defect and the nuclear register translates into minimizing G₁ (or, equivalently, maximizing ε_p^nuclear) for the nuclei that will be part of the GHZ state, and implies that the maximum M-tangling power is ε_{p,M}^max = [d/(d+1)]^M. At this point, we should mention a caveat of the M-tangle metric. In Ref. [56] it was noted that the four-tangle cannot discriminate the entanglement of a GHZ₄-like state from that of bi-separable maximally entangled states, e.g., |Φ⁺⟩^⊗2, where |Φ⁺⟩ is one of the four Bell states; in both cases the four-tangle is 1. Analogously, the even M-tangle cannot discriminate the entanglement of a GHZ_M state from the entanglement of M/2 copies of two-qubit maximally entangled states, |Bell⟩^⊗M/2 (see also Ref. [60]). Nevertheless, in our system we have the extra condition that the G₁ of all M − 1 nuclei should be minimized to saturate the all-way correlations. Hence, if we fail to minimize at least one of the M − 1 {G₁} quantities, we know that the evolution operator will never be able to prepare genuine multipartite entanglement between all M parties. Another way to understand this is to consider the example of 4 qubits and the entangling gate CR_x(π/2)^⊗3. If we act with CR_x(π/2)^⊗3 on arbitrary initial states, we can at most prepare a GHZ₄-like state, but we can never prepare two individual Bell pairs, since the correlations are constrained to be distributed among all parties connected by the electron (we are dealing with a central-spin-type system). Similar statements hold for the multi-spin method that utilizes CR_xz gates. Therefore, we can safely use the M-tangle metrics to detect genuine multipartite electron-nuclear (and nuclear-nuclear) spin entanglement. Let us further comment that the M-tangling power is derived by averaging over all product (and pure) states, so it only tells us how well the dynamics of U can saturate the all-way correlations of pure product states. No conclusion can be drawn, however, if one has a statistical mixture of such states (i.e., a mixed state). In the following section, we show how to optimize the sequential and multi-spin schemes and guide the selection of nuclei from the register to generate genuine all-way correlations within time constraints. Table 1: Meaning of the symbols used in the flowchart of Fig. 2 to describe the optimization steps for the sequential and multi-spin protocols.
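The sketch below computes G₁ numerically from the standard magic-basis construction of the Makhlin invariants and compares it against the closed form in Eq. (7) for a generic controlled rotation; it also reports the nuclear one-tangle (2/9)(1 − G₁). The magic-basis formula G₁ = tr²(m)/(16 det U) with m = U_B^T U_B is the textbook definition, not something introduced here; the example axes and angles are arbitrary.

```python
import numpy as np

# Magic (Bell) basis transformation; columns are the magic-basis states
Q = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]], dtype=complex) / np.sqrt(2)

def makhlin_G1(U):
    Ub = Q.conj().T @ U @ Q
    m = Ub.T @ Ub
    return (np.trace(m)**2 / (16 * np.linalg.det(U))).real  # real for CR gates

def rot(n, phi):
    """exp(-i phi n.sigma / 2) for a unit vector n."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.diag([1, -1]).astype(complex)
    return np.cos(phi/2)*np.eye(2) - 1j*np.sin(phi/2)*(n[0]*sx + n[1]*sy + n[2]*sz)

def cr(n0, phi0, n1, phi1):
    """CR gate of Eq. (2) for one nucleus: electron-conditioned rotation."""
    P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
    return np.kron(P0, rot(n0, phi0)) + np.kron(P1, rot(n1, phi1))

n0, n1 = np.array([1., 0., 0.]), np.array([-0.8, 0., 0.6])  # example axes
phi0, phi1 = 0.9*np.pi/2, 1.1*np.pi/2                        # example angles

G1_num = makhlin_G1(cr(n0, phi0, n1, phi1))
G1_closed = (np.cos(phi0/2)*np.cos(phi1/2)
             + np.dot(n0, n1)*np.sin(phi0/2)*np.sin(phi1/2))**2
print(G1_num, G1_closed)                  # the two values agree
print("one-tangle:", 2/9*(1 - G1_num))    # 2/9 exactly for a CNOT-equivalent gate
```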
Sequential scheme for GHZ₃ states We first begin with the task of generating GHZ_M-like states using the sequential scheme, which requires M − 1 consecutive gates. Hereafter, we refer to the collection of M − 1 entangling gates as the composite evolution. Under this scheme, to entangle the nuclei with the electron, we need to ensure that during each evolution we conditionally rotate only one particular nuclear spin, meaning we maximize its one-tangle. At the same time, all other nuclear one-tangles should be minimal in each evolution, so that we suppress cross-talk arising from the nuclear spin bath. The procedure for identifying optimal nuclear spin candidates and sequence parameters is shown in Fig. 2 and explained in more detail in Appendix G. Depending on the size of the GHZ state, we set different tolerances for the unwanted/target one-tangles and gate-time restrictions within the T₂* of the nuclei (see Table 2 of Appendix G). At the end of this procedure, we are left with different options for selectively entangling a single nuclear spin with the electron. After identifying the optimal sequence parameters and nuclear spin candidates, we compose the M − 1 entangling gates and create a "case" of a composite evolution operator, from which we can extract the Makhlin invariants G₁ associated with the evolution of each nuclear spin. This allows us to calculate the M-tangling power of the composite gate (using Eq. (6)) and verify whether it can prepare genuine multipartite entanglement. Further, as we showed in Ref. [38], we can analytically calculate the induced gate error due to residual entanglement links with the unwanted nuclei. This unresolved entanglement can make the target gate deviate from the ideal evolution, which we take to be the composite of the M − 1 entangling operations in the absence of unwanted spins; minimizing this gate error thus ensures a better quality of the GHZ-like states that we produce. We start with the simplest case of generating GHZ₃-like states using the sequential scheme. Throughout the rest of the paper, we focus on the CPMG sequence (t/4 − π − t/2 − π − t/4)^N, where t is the unit time, π represents a π-pulse, and N is the number of iterations of the basic unit. For concreteness, we consider an NV center in diamond and define the qubit states of the electron to be |0⟩ ≡ |m_s = 0⟩ and |1⟩ ≡ |m_s = −1⟩. We further use the hyperfine parameters of the ¹³C nuclear spins from the 27-nuclear-spin register characterized by the Taminiau group [37] and, following their conventions, label the nuclear spins as Cj with j ∈ [1,27]. Despite these choices, the analysis that follows is general and valid for any π-sequence or electronic defect in diamond or SiC, and for nuclei with spin I = 1/2 (e.g., ¹³C in diamond, ¹³C and ²⁹Si in silicon carbide).
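To make the CPMG mechanics concrete, the sketch below propagates one nuclear spin through the (t/4 − π − t/2 − π − t/4) unit for both electron states, extracts the conditional rotation axes and angles, and scans t for the most anti-parallel axes (n₀ · n₁ → −1). The secular hyperfine model H_{m_s=0} = ω_L I_z, H_{m_s=−1} = (ω_L + A_∥) I_z + A_⊥ I_x is the standard one for this system; the numerical values of ω_L, A_∥, and A_⊥ below are illustrative placeholders, not parameters of the 27-spin register.

```python
import numpy as np

Ix = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Iz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def U_of(H, t):
    """exp(-i H t) for a 2x2 Hermitian H via eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j*evals*t)) @ evecs.conj().T

# Illustrative parameters (rad/s), NOT taken from the 27-spin register:
wL = 2*np.pi*432e3      # 13C Larmor frequency
Apar = 2*np.pi*60e3     # parallel hyperfine component
Aperp = 2*np.pi*30e3    # perpendicular hyperfine component

H0 = wL*Iz                       # electron in ms = 0
H1 = (wL + Apar)*Iz + Aperp*Ix   # electron in ms = -1

def cpmg_unit(t):
    """Nuclear propagators for one CPMG unit (t/4 - pi - t/2 - pi - t/4).
    Each electron pi-pulse swaps which Hamiltonian drives the nucleus."""
    V0 = U_of(H0, t/4) @ U_of(H1, t/2) @ U_of(H0, t/4)
    V1 = U_of(H1, t/4) @ U_of(H0, t/2) @ U_of(H1, t/4)
    return V0, V1

def axis_angle(V):
    """Write V = exp(-i phi n.sigma/2) up to a global phase; return n, phi."""
    V = V / np.sqrt(np.linalg.det(V))   # project onto SU(2)
    if np.trace(V).real < 0:
        V = -V                          # fix the sign convention: phi in [0, pi]
    phi = 2*np.arccos(np.clip(np.trace(V).real/2, -1, 1))
    s = np.sin(phi/2) + 1e-15
    paulis = [2*Ix, np.array([[0, -1j], [1j, 0]]), 2*Iz]
    n = np.array([-np.trace(P @ V).imag/(2*s) for P in paulis])
    return n, phi

# Scan the unit time for the most anti-parallel conditional rotation axes
ts = np.linspace(1e-6, 20e-6, 4000)
dots = []
for t in ts:
    V0, V1 = cpmg_unit(t)
    n0, _ = axis_angle(V0); n1, _ = axis_angle(V1)
    dots.append(np.dot(n0, n1))
t_res = ts[int(np.argmin(dots))]
print(f"best unit time ~ {t_res*1e6:.2f} us, n0.n1 = {min(dots):.3f}")
```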
Figure 2: Flowcharts summarizing the steps we follow to find optimal sequences that prepare GHZ_M-like states, using the sequential or multi-spin scheme. The various symbols are defined in Table 1.

In Fig. 3(a), we show the nuclear one-tangles (scaled by the maximum value of 2/9) for 39 different composite evolutions (labeled as "cases") that can prepare GHZ₃-like states. Each realization corresponds to a unique nuclear-spin combination that is entangled with the electron (the text above the bars indicates which ¹³C nuclear spins are involved). The first (second) bar of each case corresponds to the one-tangle of the first (second) target nuclear spin at the end of the composite evolution of two entangling gates. Each entangling gate is implemented using an optimal unit time t of the sequence and an optimal number N of iterations of the unit. Since we optimize t close to the resonance time of the respective nuclear spin, the dot product of the nuclear rotation axes is n₀ · n₁ ≈ −1. Thus, the maximization of the first (second) one-tangle occurs when the first (second) nuclear spin rotates by ϕ = π/2 [see Eq. (7)] during the first (second) evolution. (Note that for each CPMG evolution it holds that ϕ₀ = ϕ₁ ≡ ϕ; see also Ref. [38].) The composition of the two entangling operations gives rise to total rotation angles ϕ₀ ≠ ϕ₁ for each nuclear spin. The nuclear rotation angles ϕ := ϕ₀ are shown in Fig. 3(b), and the dot products of the rotation axes are depicted in Fig. 3(c). Due to the composite evolution, the total nuclear rotation angles deviate from π/2, and the rotation axes are no longer anti-parallel in all cases. However, as we noted in Ref. [38], G₁ can be minimized as long as n₀ · n₁ ≤ 0. Due to the tolerances we impose on the unwanted one-tangles, we ensure that only one of the M − 1 gates conditionally rotates the target spin we intend to entangle with the electron; the remaining evolutions approximately leave the entanglement of that spin with the electron intact. For example, the first (second) target spin evolves approximately trivially (i.e., unconditionally) during the second (first) evolution, preserving the high entanglement between all parties at the end of all the gates. This behavior is confirmed in Fig. 3(d), where we show the M-tangling power (scaled by [d/(d+1)]^M), which is higher than 0.99 across all cases. In Fig. 3(e), we further show the total gate time, which we have restricted to less than 2 ms so that we remain within the coherence times of the nuclei (T₂* ∈ [3, 17] ms; see Ref. [61]). Further, in Fig. 3(f), we show the gate error due to residual entanglement of the target subspace with unwanted nuclei. We note that for various realizations the gate fidelity can exceed 95% (e.g., cases "2", "7", "19", "23", "30", "37"), implying small levels of cross-talk with the remaining 25 unwanted nuclear spins. Multi-spin scheme for GHZ₃ states Next, we proceed with the multi-spin scheme, which can prepare GHZ₃-like states with a single-shot entangling operation. To identify optimal nuclear spin candidates and sequence parameters, we follow a slightly different procedure, shown in Fig. 2 and explained further in Appendix G.
In this case, we require that two nuclear one-tangles are maximized for the same sequence parameters, under a gate-time restriction of 2 ms. At the same time, we require small values of the unwanted one-tangles to minimize cross-talk with the remaining nuclear spin bath. We summarize the results of this method in Fig. 4. In Fig. 4(a), we show the one-tangles of two nuclei that are maximized by a single-shot gate for 15 different DD sequences. To accept these cases as "optimal", we require that the target one-tangles exceed a threshold, here set to 0.9. Figure 4(b) shows the nuclear rotation angles, and Fig. 4(c) shows the dot products of the nuclear rotation axes for each case. Since the sequence unit time will not in general coincide with the resonance of both target spins (their HF parameters are distinct), the nuclear rotation axes will not be anti-parallel for both nuclei. However, when n₀ · n₁ ≤ 0 for a particular nuclear spin, the one-tangle can still be maximal if that spin is rotated by the appropriate angle (see, for example, cases "2", "7", "8", "11", and Ref. [38] for a more detailed explanation). Thus, non-antiparallel axes can be compensated by rotating the nuclear spins by angles that deviate from π/2. On the other hand, if n₀ · n₁ > 0, G₁ cannot reach 0, and the one-tangle can never be 1 (see case "4" for nuclear spin C5). In Fig. 4(d), we show the M-tangling power (scaled by [d/(d+1)]^M) of the multi-spin gate. As expected, the M-tangling power saturates to 1 when the nuclear one-tangles are also maximal. In Fig. 4(e), we show the gate time of the multi-spin operation, and in Fig. 4(f) the gate error. We note that in various cases the multi-spin gate duration is at least two times shorter than the sequential one (see, for example, cases "4", "11", "22"). Thus, multi-spin gates can be as reliable as sequential gates, with the additional advantage of being significantly faster. Generating GHZ_M states We can use similar ideas to extend the size of the GHZ_M-like states. In Fig. 5(a), we show the M-tangling power of composite evolutions that can prepare GHZ_M-like states of up to M = 10 qubits. The nuclear one-tangles for all different realizations are provided in Appendix H. Figure 5(b) shows the gate error of the composite evolution due to the presence of the remaining 27 − (M − 1) nuclear spins, whereas Fig. 5(c) shows the total gate time of the composite sequences. As the size of the GHZ-like states increases, it becomes more difficult to eliminate the cross-talk between the target nuclei, since we need to ensure that only one of the M − 1 evolutions conditionally rotates one of the M − 1 target nuclei. However, we still find acceptable cases that can prepare up to GHZ₁₀-like states with M-tangling power over 0.95. The infidelity tends to increase as we perform more entangling gates because it becomes more likely that at least one of the M − 1 entangling operators induces cross-talk between the target subspace and the remaining nuclear spin bath. However, it is still possible to create GHZ₆-like states within ∼2 ms, and even GHZ₉- or GHZ₁₀-like states within 4 ms, a significant improvement over the gate times reported in Ref. [33], which could be as high as 2.371 ms for controlling only a single nuclear spin.
We also study the possibility of creating GHZ_M-like states for M > 3 via the multi-spin scheme. In principle, due to the more constrained optimization, which requires the simultaneous maximization of M − 1 nuclear one-tangles, we expect to find fewer acceptable cases that respect the bounds we set for the target/unwanted one-tangles. In Fig. 6(a), we show the nuclear one-tangles for various cases of single-shot entangling operations, which can create up to GHZ₉-like states. Figure 6(b) shows the M-tangling power of the single-shot gate, whereas Fig. 6(c) shows the gate error. We note that the infidelity can in principle be less than 0.1, but the M-tangling power is not high enough in all cases, due to imperfect entanglement between the target nuclei and the electron. Figure 6(d) shows the gate time of the multi-spin operation. Once again, the gates are significantly faster than in the sequential scheme. While in the sequential protocol increasing the number of parties of the GHZ_M-like state leads to total gate durations that only increase, the multi-spin operations follow a different trend. To understand this, note that the sequential scheme requires higher-order resonances (i.e., longer unit times) to suppress unintended couplings with unwanted nuclei. Higher-order resonances are also needed to ensure that only one target nuclear spin is conditionally rotated by each of the M − 1 sequential gates. Thus, increasing the GHZ size makes the requirement to resort to higher-order sequences, in principle, more stringent. On the other hand, increasing the size of the GHZ_M-like state in the multi-spin protocol means that we need to gradually relax how well we decouple the electron from the spin bath and allow interactions of the electron with increasingly many nuclear spins. In contrast to the sequential scheme, we thus need to shorten the unit time, which yields acceptable cases of realizing GHZ_M-like states. Of course, the duration of the single-shot gate also depends on the number of times, N, we repeat the unit and on the constraints we impose on the target all-way correlations and unwanted nuclear one-tangles. In principle, a shorter basic unit can lead to faster operations. For example, it is remarkable that multipartite entangled states can be prepared within 1 ms or even faster with only one entangling gate (see, for example, the case of the GHZ₈-like state). Experimentally, depending on the nuclear HF parameters, this method could be highly beneficial for quickly preparing entangled states involving nuclear spin clusters, and the entanglement could be boosted using distillation protocols [29]. Overall, both the sequential and multi-spin protocols can generate high-quality GHZ_M-like states with reasonable gate times. Our formalism based on the M-tangling power allows us to identify optimal scenarios for preparing entangled states with minimal cross-talk. Interestingly, a hybrid entanglement-generation scheme involving both single-shot and sequential gates could offer a more realistic path to scalability by drastically reducing the number of entangling operations and the gate times.
Entanglement of mixed states So far we have made no assumption about the initial state of the system and have focused instead on the capability of the gates to produce entangled states. We have used the gate error due to residual entanglement links and the unwanted nuclear one-tangles as metrics of mixedness of the GHZ_M-like states prepared by DD sequences. We now work with a density matrix for the total system and inspect the entanglement of a target subspace after tracing out the unwanted nuclear spins. This partial-trace operation results, in general, in a mixed state for the target subspace, for which we cannot directly use the M-tangles defined for pure states. In the case of mixed states, the M-tangles are calculated via so-called convex-roof constructions [55,62]:

τ_M(ρ) = min_{{p_i, |ψ_i⟩}} Σ_i p_i τ_M(|ψ_i⟩),   (8)

where the minimum is taken over all ensembles with probabilities p_i and pure states |ψ_i⟩ that reconstruct the mixed-state density operator. The ensemble that gives the minimum value of the tangle is known as optimal. Such convex-roof extensions are necessary to ensure an entanglement monotone, but since the minimum of a sum of convex functions is not always convex, we need to take the convex hull of Eq. (8) (see also Refs. [63,62]). Finding the optimal ensemble is a non-trivial task, as it entails searching over all possible decompositions of the mixed state; it becomes even harder when the rank of the reduced density matrix is large. However, for our problem we find that, starting from an arbitrary pure state of the total system and tracing out any number of nuclei, the maximum rank of the reduced density matrix is 2. This is due to the form of the total evolution operator, which can produce at most GHZ_M-like states (if extra single-qubit gates are not allowed) given the appropriate initial state. Furthermore, the maximum rank proves that the creation of |W⟩-like states is impossible if one allows only entangling gates generated by DD sequences. In Appendix I, we derive the eigenvectors and eigenvalues of the density matrix, of which two are nonzero provided the electron starts from a superposition of |0⟩ and |1⟩. The analytical expressions for the eigenvalues and eigenvectors of the reduced density matrix of the target subspace allow us to find the optimal ensemble that minimizes the M-tangle of the mixed state. This can be done using the methods developed in Ref. [62]. The first step is to diagonalize the reduced density matrix of the system. Based on the eigendecomposition, we can then construct the trial state

|Ψ(χ)⟩ = √λ₊ |v₊⟩ + e^{iχ} √λ₋ |v₋⟩,   (9)

where |v±⟩ are the eigenvectors of the rank-2 mixed state and λ± are the two nonzero eigenvalues. Since any ensemble can be obtained by acting with unitaries on the diagonalized reduced density matrix, minimizing the entanglement of the trial state by varying the angle χ, which controls the relative phase of the eigenvectors, yields the entanglement of the optimal ensemble that reconstructs the mixed state.
Our system consists of 28 qubits, so simulating the total density matrix is computationally hard. The fact that we obtain the reduced density matrix analytically [see Appendix I] allows us to determine the impact of the unwanted spins on the target subspace. Working with a density matrix necessitates specifying the system's initial state. In the literature, it is often assumed that the nuclear spin bath starts from the maximally mixed state. This assumption, however, would result in maximal entanglement in the target subspace (provided we perform close-to-perfect gates and once the electron-nuclear target register is purified), free from cross-talk, since the maximally mixed state of the unwanted spins would remain invariant under any unitary evolution. This is far from true, since the target subspace suffers from cross-talk from unwanted nuclei introduced by the entangling gates. On the other hand, considering a pure state for the entire electron-nuclear register is again unrealistic, but we choose this convention to see how classical correlations due to the partial-trace operation manifest in the target subspace. For the following analysis, we use the 39 cases we found in Sec. 4 for creating GHZ₃-like states via the sequential scheme (the generalization to a larger target subspace is straightforward). We further use the analytical expression for the three-tangle, first introduced in Ref. [55]. To prepare an entangled state with genuine all-way correlations, we need to find the appropriate initial state for the target subspace, upon which we act with the entangling operations. In most cases, this state is |+⟩|0⟩^⊗(M−1). (Recall that the CR_x(π/2) gates are locally equivalent to CNOT.) If this initial state does not give rise to a three-tangle of at least 0.95, we then optimize over the initial three-qubit state (we assume a sampling of 0.05π for θ ∈ [0, π] and 0.1π for γ ∈ [0, 2π] for each qubit state |ξ⟩ = cos(θ/2)|0⟩ + e^{iγ} sin(θ/2)|1⟩). It is thus possible that other initial states might give slightly larger three-tangles than the ones we show below. We further assume that the initial state of the bath is |0⟩^⊗(27−(M−1)). We begin with the following setup. In Sec. 4, we found 39 cases that prepare GHZ₃-like states using the sequential scheme via the composition of two consecutive DD sequences. The composite evolution rotates both target and unwanted nuclei, and some unwanted spins might evolve slightly conditionally on the electron. Thus, we need to evolve the initial state of all spins under the composite evolution. Using the analytical expression for the reduced density matrix, we obtain its eigenvalues and eigenvectors and arrange the 39 cases in increasing order of the largest eigenvalue, p = λ₊. In Fig. 7(a), we show the three-tangle of the pure state prepared using the composite evolutions of Fig. 3, ignoring the presence of unwanted nuclei.
Using the three-tangle, we verify that the all-way correlations are maximal across all cases, as expected from the fact that we approximately saturated the M-tangling power (and we also use the appropriate three-qubit initial state). We further show the three-tangle of the two eigenvectors |v±⟩ as a function of the largest eigenvalue p, obtained by diagonalizing the mixed state (after tracing out the unwanted nuclei). The three-tangle of |v₊⟩ coincides with the three-tangle of |v₋⟩, which means that the unwanted bath creates a mixture of two terms that have the same M-way entanglement and are both GHZ₃-like states. We also observe that the all-way correlations of the eigenvectors |v±⟩ coincide with those of the pure GHZ₃-like state of the target electron-nuclear register. Hence, tracing out the unwanted spins makes the reduced state classically correlated, but the amount of entanglement in each term of the mixed state is unaffected. In Fig. 7(b), we then use the two eigenvectors to build the trial state and find the minimum of the three-tangle as we vary χ for each value of p. Since we are tracing out 25 unwanted nuclei, the entanglement of the mixed state is reduced substantially. One simple way to understand this feature is to think of a trial (pure) two-qubit state that is a superposition of two Bell states: for the equal superposition, the two-qubit concurrence goes to 0, whereas for different weights between the two terms the concurrence is 0 < C(|ψ⟩) ≤ 1. A similar thing happens here, with the difference that the superposition terms are two GHZ₃-like states. In Appendix J, we show that the three-tangle of a superposition of two orthogonal GHZ₃ states is much more sensitive than the concurrence of a superposition of two orthogonal Bell states in the two-qubit case. Even for large values of p (i.e., p = 0.9), the minimum three-tangle is around ∼0.64 for the GHZ₃ mixture, compared to a minimum concurrence of ∼0.8 for the Bell mixture (see the worked example below). In Fig. 7(b), we further show the convex hull of the minimum value of the three-tangle. In the scenario we are studying, the minimum value of the three-tangle is not convex, since the pure state of the target subspace does not have perfect correlations due to slightly imperfect gates (i.e., gates that cause over- or under-rotation of the nuclei). If the gates create perfect all-way entanglement, then both eigenvectors are perfect GHZ₃-like states, and the minimum of the three-tangle is convex in p, given by τ₃^min(p) = (2p − 1)².
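The quoted values 0.64 and 0.8 can be checked with a short calculation for the ideal case of perfect gates, with two orthogonal GHZ₃ (or Bell) states, $|\mathrm{GHZ}_\pm\rangle = (|000\rangle \pm |111\rangle)/\sqrt{2}$:

$$|\Psi(\chi)\rangle = \sqrt{p}\,|\mathrm{GHZ}_+\rangle + e^{i\chi}\sqrt{1-p}\,|\mathrm{GHZ}_-\rangle = a|000\rangle + b|111\rangle,\qquad a,b = \frac{\sqrt{p} \pm e^{i\chi}\sqrt{1-p}}{\sqrt{2}}.$$

For such a state the three-tangle is $\tau_3 = 4|ab|^2 = |p - (1-p)e^{2i\chi}|^2$, minimized at $\chi = 0$:

$$\min_\chi \tau_3 = (2p-1)^2 \;\xrightarrow{\;p=0.9\;}\; 0.64,$$

while for the Bell-state analogue $a|00\rangle + b|11\rangle$ the concurrence is $C = 2|ab| = |p-(1-p)e^{2i\chi}|$, so $\min_\chi C = 2p-1 = 0.8$ at $p = 0.9$.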
Non-unitary entangling power

An alternative way to include the impact of the unwanted nuclei on the target subspace is to use the Kraus operators associated with the partial trace channel. In Ref. [38], we derived the Kraus operators for an arbitrary number of nuclei coupled to a single electron spin. Here, we use this result to derive the M-tangling power of the evolution that the target system undergoes due to the presence of the unwanted nuclei. Because this is a generalization to non-unitary dynamics, we will refer to this metric for brevity as the non-unitary M-tangling power.

The density matrix of the total system evolves under the unitary U given by Eq. (2). If we trace out the unwanted nuclei, the target subspace evolves under the quantum channel E(ρ) = Σ_{k=1}^{2^{L−K}} E_k ρ E_k†, where the E_k are the Kraus operators corresponding to the unwanted nuclei, L is the total number of nuclei in the bath, and K is the number of target nuclear spins. In this case, the M-tangling power (where M = K + 1) of the quantum channel takes the form of Eq. (10) [65], with the same definitions for Ω_{p0} and P introduced in Sec. 3. Since the total evolution is controlled on the electron, it is possible to derive a closed-form expression for the M-tangling power of the channel, Eq. (11) (see Appendix A.4), with W^{(j)} expressed in terms of the x-components n^{(j)}_{x,0} and n^{(j)}_{x,1} of the rotation axes of the j-th unwanted spin.

We find that the M-tangling power of the channel is upper-bounded by the unitary M-tangling power of the target space, i.e., ϵ_{p,M}(E) ≤ ϵ_{p,M}(U). The equality is satisfied when the unwanted subspace evolves trivially (i.e., irrespective of the electron's state), in which case it holds that G_1^{(j)} = 1 and W^{(j)} = 0, ∀j. Therefore, whenever ϵ_{p,M}(E) < ϵ_{p,M}(U), we know that the unwanted spin bath possesses nonzero correlations with the target subspace. This is an alternative way to see that the entanglement of the mixed state is lower than in the pure case, a feature we already observed in Sec. 5. The remarkable simplicity of Eq. (11) allows us to obtain full information about the entanglement distribution within the electron-nuclear register, irrespective of the total number of qubits in the system or the number of nuclear spins we trace out. Using ϵ_{p,M}(E), we can find how well the gate we design saturates the M-way correlations of the target subspace, given that there might be residual unwanted correlations linking the target subspace with unwanted spins; those unwanted correlations are encoded in the Kraus operators. An extra advantage of this approach is that we never need to assume an initial state of the electron-nuclear register, as we did, for example, in Sec. 5.

To compare the non-unitary M-tangling power against ϵ_{p,M}(U), we consider as an example the optimal cases we identified in Sec. 4 for preparing GHZ_4-like states with either the sequential or the multi-spin scheme. The results for the sequential scheme are shown in Fig. 8(a). The dark blue bars show the unitary M-tangling power across 29 realizations, and the pink bars show the non-unitary M-tangling power. As expected, ϵ_{p,M}(E) is lower in all cases due to unresolved cross-talk, and this deviation is enhanced when the gate error is larger [see Fig. 5(b)]. A similar feature is observed in Fig. 8(b) for the 7 cases of preparing GHZ_4-like states using the multi-spin scheme. Specifically, for case "5", for which we found the lowest gate error emerging from residual entanglement across the 7 cases [see Fig.
6(b)], the non-unitary M-tangling power is closer to the unitary one. Hence, this verifies our previous observation that the mixed electron-nuclear states we create in the target subspace remain GHZ_M-like, but their entanglement is reduced whenever correlations between the target subspace and the unwanted spin bath are present.

In Fig. 8, we also display with green bars an approximate expression for ϵ_{p,M}(E), where we set W^{(j)} = 0, ∀j. We note that the approximate expression agrees very well with the exact expression of ϵ_{p,M}(E).

Finally, let us discuss the computational complexity of the various entanglement metrics we introduced. The calculation of the G_1 Makhlin invariants, and hence of the target/unwanted one-tangles, scales linearly with the number of nuclear spins, since each G_1 quantity describes the evolution of a single nuclear spin. Thus, the M-tangling power ϵ_{p,M}(U) can be calculated without computational difficulty. Similarly, ϵ_{p,M}(E) scales linearly with the total size of the register (including target and unwanted nuclei) and can also be calculated without difficulty. The infidelity of the target gate due to the presence of unwanted spins requires a summation over 2^{L−K} Kraus operators. Therefore, the starting point for identifying optimal cases for preparing GHZ_M-like states is to calculate the target/unwanted one-tangles in order to maximize (minimize) the target (unwanted) correlations. To further inspect the optimal cases in terms of gate error, one can then proceed with this metric to obtain full information about the dynamics of the system.

Additional error analysis

In this section, we study the effect of pulse control errors and dephasing errors that can occur on the electronic qubit. To find the impact of those errors, we consider only their contribution to the M-tangling power of the target subspace.

Target subspace M-way entanglement under electronic dephasing

We begin by considering dephasing errors for the electronic qubit. The dephasing error can be represented via Kraus operators parameterized by λ_0, λ_1 and the dephasing angle θ [66]. In Appendix B, we prove that the "dephased" M-tangling power of the target subspace (consisting of the electron and target nuclei) has a simple closed-form expression. In particular, for a CPMG decoupling sequence of unit time t and N iterations, it takes the form of Eq. (15), where R is a function of the sequence parameters derived in Appendix B. In the case where we compose CPMG sequences of different unit times t_j and iterations N_j (as we do, for example, in the sequential scheme), the function R takes a correspondingly generalized form.

We verify that in the limit λ_0 = 1 and λ_1 = 0, we recover the ideal ϵ_{p,M}(U) of the target subspace. We further find that the M-tangling power in the presence of electronic dephasing errors is bounded from below by 50%. We begin by studying the impact of electronic dephasing for the 39 cases of preparing GHZ_3-like states via the sequential scheme. The results are shown in Fig. 9(a), where we consider λ_0 = 0.98, λ_1 = 1 − λ_0, and dephasing angles ranging from θ^{−1} = 400 µs to θ^{−1} = 50 µs. We see that for θ^{−1} = 400 µs, we preserve an M-tangling power above 90% for most realizations. For a dephasing angle θ^{−1} = 100 µs, we find several cases with M-tangling power around 80-85%. For a dephasing angle θ^{−1} = 50 µs, the M-way correlations can drop below 60% in many cases. In Fig.
9(b), we repeat the calculations for the 27 cases of preparing GHZ_3-like states using the multi-spin protocol. The M-tangling power of the multi-spin protocol remains higher in the presence of electronic dephasing than that of the sequential scheme. We even observe that for the smallest dephasing angle of θ^{−1} = 50 µs, several cases exceed 80% M-way correlations. This behavior is expected, since the multi-spin scheme is, in principle, faster than the sequential protocol and hence less sensitive to electronic dephasing errors.

Figure 9(c) shows the M-tangling power of case "1" of the sequential scheme as a function of λ_0 and θ^{−1}, whereas Fig. 9(d) shows the M-tangling power of case "1" of the multi-spin scheme. In the sequential scheme, we control the nuclei C1 and C18 (with a gate time of ∼1843.2 µs), whereas in the multi-spin protocol, we control the nuclei C1 and C2 (with a gate time of ∼1366.5 µs). Although the results of Fig. 9(c) and Fig. 9(d) should not be compared directly due to the different total timings, we note that the multi-spin scheme is more robust to dephasing errors. In Appendix C, we provide another example where we compare the performance of the multi-spin scheme with the sequential one, now for case "16". Although case "7" of the multi-spin scheme has a total sequence time longer than that of the sequential scheme, we find again that the multi-spin scheme is more robust to dephasing. We attribute this feature to the more complicated dephasing channel of the sequential scheme. The expression of ϵ_{p,M}(E_deph) for the sequential scheme involves different unit times t_j and iterations N_j, since we are composing CPMG sequences to control each nucleus. We believe that this feature can potentially lead to M-way correlations that are more sensitive to electronic dephasing errors.

Target subspace M-way entanglement under pulse errors

We now consider the effect of pulse errors during the control of the electronic spin. We assume that the pulse error results in an over- or under-rotation about the x-axis of the electron. We model such errors by modifying the perfect R_x(π) acting on the electron into R_x(π + ϵ) = e^{−i(π/2)(1+ϵ)σ_x}. Such rotation-angle errors yield an evolution operator which is no longer block-diagonal. Consequently, we cannot use the M-tangling power we found in Eq. (6). Additionally, we cannot study the M-tangling power for an odd-sized system since, as we mentioned in Sec. 3, the expression of Eq. (5) is only applicable to CR-type evolution operators. This difficulty in describing the M-way correlations at the evolution-operator level for odd M arises from the fact that averaging over all initial states is not straightforward, given the expression of the odd M-tangle we start with. Nevertheless, by modeling the rotation-angle error in this way, we still preserve unitary dynamics, and we can find the impact of such errors on the M-tangling power of even-sized systems numerically, by making use of Eq. (5). First, we consider systematic errors and study their impact on the cases of preparing GHZ_4-like states via the sequential or multi-spin protocols. In Fig. 10(a) and Fig.
10(b), we show the M-tangling power for the 29 cases of preparing GHZ_4-like states, assuming a systematic over-rotation of ϵ = 2% and ϵ = 8%, respectively. The dark blue bars depict the error-free case. The light blue bars correspond to the XY2 sequence and the orange bars to the CPMG sequence, both including the over-rotation errors. In the case of 2% error, we see that CPMG can provide high values of M-way correlations only for a few cases (e.g., case "5", case "11", case "17", case "24"). However, if we consider the XY2 sequence, whose one unit is t/4 − (π)_X − t/2 − (π)_Y − t/4, we find that the impact of the error on the quality of the GHZ states is negligible. For an over-rotation error of 8%, the M-way correlations accumulated via the CPMG sequence are lower than 50%, whereas the XY2 sequence still provides high entanglement across almost all cases (all cases besides case "9" and case "20").

We observe a similar behavior of systematic over-rotation errors for the multi-spin scheme. We depict the results for the multi-spin protocol in Fig. 10(c) and Fig. 10(d), where we assume a systematic over-rotation error of 2% and 8%, respectively. For a 2% error, we find that ϵ_{p,M}(U) obtained via the XY2 scheme is extremely robust, although the pulse iterations per case are in general high: N = 128 for case "1", N = 76 for case "2", N = 124 for case "3", N = 136 for case "4", N = 78 for case "5", N = 223 for case "6", and N = 243 for case "7". For the 8% error shown in Fig. 10(d), case "6" and case "7" display the most prominent reduction in M-way correlations across all cases, consistent with their large number of decoupling-unit iterations.

The robustness of XY sequences to pulse errors has been reported extensively in the literature [42,67,68,69]. In Ref. [69], a quantum process tomography protocol called bootstrap was used to determine the control pulse errors of the π and π/2 pulses. It was mentioned therein that the π/2 pulses show twice as much variation as the π pulses, indicating that the pulse edges have a larger impact on shorter pulses. The pulse errors were taken as constant during each run and the same for different runs. The reported error values for π-pulses were ϵ_x = ϵ_y = −0.02 (rotation errors of the π_X and π_Y pulses), q_x = 0.005 (x̂-axis component of the π_Y rotation), q_y = 0 (ŷ-axis component of the π_X rotation), and q_z = ±0.05 (ẑ-axis component of the π_X and π_Y rotations). More importantly, Ramsey, spin-echo, XY4, CPMG, and UDD simulations with those parameters were in strong agreement with experimental data. The estimated pulse error was 1%. In Ref. [68], XY sequences were used to cancel out systematic pulse errors, suppress decoherence of the electron, and suppress artificially injected magnetic noise, which was introduced in the same stripline that was used for the control pulses. The fidelity of a single π-pulse on the electron was again estimated to be 99%, using calibration techniques from Ref.
[41,67]. Therefore, since the π-pulse microwave control can typically exceed 99% fidelity in defect platforms, we expect that the M-tangling power we evaluate for the 2% systematic rotation error will be close to the experimentally observed one. In Appendix E, we perform another simulation with the aforementioned error parameters and verify that the M-tangling power we obtain for the ideal CPMG sequence is close to the one obtained via the XY2 sequence in the presence of pulse imperfections. This reveals that combining our protocols with the XY2 decoupling sequence is sufficient to prepare high-fidelity GHZ_M states in the presence of realistic experimental errors.

We continue the pulse-error analysis assuming random over-/under-rotation pulse errors, which we sample from a normal distribution with standard deviation σ = 0.01. In Fig. 11(a), we show the mean M-tangling power for the XY2 (light blue bars) and CPMG (orange bars) sequences, using the 29 cases of preparing GHZ_4-like states with the sequential scheme. In each case, we run 500 independent trials in which we sample anew from the normal distribution (per iteration of the decoupling unit), and we evaluate the mean ϵ_{p,M}(U) across the 500 trials. For comparison, we also display the ideal ϵ_{p,M}(U) per case. CPMG and XY2 perform on par when random pulse errors are present. The error bars mark the range of ϵ_{p,M}(U) over the 500 trials per case. We observe that for the sequential scheme, the M-tangling power in the presence of random pulse errors is higher than 0.7 across all cases.

In Fig. 11(b), we show the mean M-tangling power for the 7 cases of preparing GHZ_4-like states using the multi-spin scheme. We sample errors from the normal distribution with standard deviation σ = 0.01 and repeat the calculation 500 times per case to collect statistics and evaluate the mean M-tangling power. Once again, we observe that the XY2 and CPMG sequences perform on par in the presence of random over-/under-rotation errors. The error bars per case correspond to the range of ϵ_{p,M}(U) over a total of 500 random trials.

Overall, we find that the generation of high-quality GHZ_M states is possible with current state-of-the-art experimental control and constraints. Our formalism provides a detailed analysis of entanglement generation in the presence of errors and allows us to address the problem of designing optimal entangling gates that saturate the all-way correlations.
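The qualitative CPMG-versus-XY2 comparison above can be reproduced in a small toy model. The sketch below is our illustration, not the original simulation: the hyperfine values wL, A, B are placeholders (not the C1/C2 parameters of the register). It propagates one electron-nucleus pair through N CPMG or XY2 units with the faulty pulse R_{x/y}(π + ϵ) = e^{−i(π/2)(1+ϵ)σ_{x/y}} and reports a phase-insensitive deviation of the faulty sequence from the ideal one.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
P0, P1 = np.diag([1.0, 0]).astype(complex), np.diag([0, 1.0]).astype(complex)

# toy electron-nucleus Hamiltonian; wL, A, B are hypothetical values (rad/us)
wL, A, B = 2.0, 0.06, 0.03
H = np.kron(P0, wL * Z / 2) + np.kron(P1, (wL + A) * Z / 2 + B * X / 2)

def free(t):                                 # free evolution for time t
    return expm(-1j * H * t)

def pulse(axis, eps):                        # electron pi-pulse with fractional error eps
    g = {'x': X, 'y': Y}[axis]
    return np.kron(expm(-1j * np.pi * (1 + eps) / 2 * g), I2)

def cpmg(t, eps):                            # unit: t/4 - piX - t/2 - piX - t/4
    return free(t / 4) @ pulse('x', eps) @ free(t / 2) @ pulse('x', eps) @ free(t / 4)

def xy2(t, eps):                             # unit: t/4 - piX - t/2 - piY - t/4
    return free(t / 4) @ pulse('x', eps) @ free(t / 2) @ pulse('y', eps) @ free(t / 4)

def deviation(unit, t, eps, N):
    """1 - |Tr(U_ideal^dag U_faulty)| / 4; zero when they agree up to a phase."""
    Uf = np.linalg.matrix_power(unit(t, eps), N)
    Ui = np.linalg.matrix_power(unit(t, 0.0), N)
    return 1 - abs(np.trace(Ui.conj().T @ Uf)) / 4

t, N = 3.0, 32
for eps in (0.02, 0.08):                     # 2% and 8% systematic over-rotations
    print(eps, deviation(cpmg, t, eps, N), deviation(xy2, t, eps, N))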
Conclusions

Genuine multipartite entangled states are an essential component of quantum networks and quantum computing. Exploiting the full potential of nuclear spins in defect platforms for large-scale applications requires precise and fast entanglement generation. We showed under what conditions decoupling sequences produce gates capable of maximizing all-way correlations, and we quantified the entanglement capability of the gates through the M-tangling power. Using this formalism, we guided the selection of sequence parameters and nuclear spin candidates to prepare high-quality GHZ_M-like states by appropriate driving of the electron spin. We improved the sequential entanglement generation scheme, pushing the gate time to as low as 4 ms for preparing GHZ-like states of up to 10 spins. We also studied the possibility of direct entanglement generation, which performs on par with the sequential scheme in decoupling capabilities, with the extra advantage that it drastically reduces the gate count and speeds up the entanglement generation. Further, we studied the entanglement of mixed states, revealing that the M-way correlations of a target subspace are sensitive to residual entanglement with the unwanted nuclei. We introduced a non-unitary M-way entanglement metric, which additionally captures correlations between the target subspace and the unwanted nuclei, and showed that it is upper-bounded by the unitary M-tangling power. We derived a non-unitary M-tangling power for the target subspace in the presence of electronic dephasing errors, revealing that the multi-spin protocol can in principle have superior performance compared to conventional schemes due to its shorter implementation time. Finally, we studied the impact of pulse errors on the target-subspace M-tangling power and showed that our protocols, combined with XY decoupling sequences, can provide high-fidelity preparation of GHZ states in the presence of realistic experimental errors. Our results pave the way for the systematic and efficient creation of multipartite entanglement in spin defect systems for quantum information applications.

A Mathematical treatment of M-way entanglement

A.1 Formulas of M-tangles expressed in terms of the state vector

For any even M ≥ 4, the M-tangle is given by [55,56]:

τ_M = |⟨ψ|σ_y^{⊗M}|ψ*⟩|^2 = 2^M (⟨ψ| ⊗ ⟨ψ|) ∏_{i=1}^{M} P^{(−)}_{i,i+M} (|ψ⟩ ⊗ |ψ⟩),

where in the last line we have linearized the equation of the M-tangle by introducing a second copy of the state and projecting onto the antisymmetric space of subsystems i and i + M, defined as P^{(−)}_{i,i+M} = (1 − SWAP_{i,i+M})/2. For M = 3, the three-tangle can be expressed in the following form [55]:

τ_3 = τ_{A|BC} − τ_{A|B} − τ_{A|C},

where τ_{A|BC} = 4 det(ρ_A) is the one-tangle of the electron (A is the system of the electron partitioned from the space of the two nuclei, represented by systems B and C), and τ_{A|B} is the bipartite tangle between the electron and nucleus B (a similar definition holds for τ_{A|C}), with ρ̃_AB = σ_y^{⊗2} ρ*_AB σ_y^{⊗2}. τ_{A|B} (or τ_{A|C}) is closely related to the entanglement of formation and is alternatively given by τ_{A|B} = [max(0, λ_1 − λ_2 − λ_3 − λ_4)]^2, where the λ_j are the square roots of the eigenvalues (in decreasing order) of ρ_AB ρ̃_AB, with ρ_AB = Tr_C(ρ). An alternative way to write the three-tangle (as in Ref.
[71]) is a more complicated expression that involves, besides ⟨ψ|σ_y ⊗ σ_y ⊗ σ_y|ψ*⟩, the quantities ⟨ψ|1 ⊗ σ_y ⊗ σ_y|ψ*⟩ and ⟨ψ|σ_z ⊗ σ_y ⊗ σ_y|ψ*⟩. Averaging this quantity to find the M-tangling power of an arbitrary evolution operator for M = 3 is a difficult task. However, for CR-type evolution operators we find that ⟨ψ|1 ⊗ σ_y ⊗ σ_y|ψ*⟩ = 0 = ⟨ψ|σ_z ⊗ σ_y ⊗ σ_y|ψ*⟩. Considering ⟨ψ|1 ⊗ σ_y ⊗ σ_y|ψ*⟩, we can show that it vanishes for CR-type evolutions, using the fact that R†_{n_j} σ_y (R_{n_j})* = σ_y, as well as the property ⟨ϕ|A|ϕ⟩ = ⟨ϕ|σ_y C|ϕ⟩ = ⟨ϕ|σ_y|ϕ*⟩ = 0, where A is an anti-linear operator whose expectation value vanishes for all states |ϕ⟩ ∈ H and C denotes complex conjugation. Analogously, one can prove that ⟨ψ|σ_z ⊗ σ_y ⊗ σ_y|ψ*⟩ = 0. These results simplify the expression of the three-tangle for states generated by CR-type evolutions, meaning that we can express it as in Eq. (22). We can linearize this formula by noticing that we can write the three-tangle by extending the Hilbert space to a six-qubit system (Eq. (23)). The resulting expression can be understood as a "vectorized" form of Eq. (22), since |σ_y⟩ = vec(σ_y) = −i(|01⟩ − |10⟩), which means that P^{(−)} = (1 − SWAP)/2 between the sectors i and i + M can be represented as P^{(−)} = ½|σ_y⟩⟨σ_y|. We also found that Eq. (23) can be generalized to odd M > 3, for arbitrary initial (pure) product states evolved under a CR-type evolution (Eq. (24)). The most general expression of the odd M-tangle for states that undergo arbitrary unitary evolution can be found in Ref. [57]. We also verified by numerical inspection that Eq. (24) agrees with the definition of the odd tangle of Ref. [57] when the evolution is CR-type.

Averaging over the uniform distribution in H_2^{⊗M}, we find that the M-tangling power reads:

ϵ_{p,M}(U) = 2^M Tr[ U^{⊗2} Ω_{p0} (U†)^{⊗2} P ],

where Ω_{p0} = ⊗_{i=1}^{M} ω_{i,i+M}, with ω_{i,i+M} = P^{(+)}_{i,i+M}/(d + 1), d = 2, and P^{(+)}_{i,i+M} = ½(1 + SWAP_{i,i+M}). To prove the expression for Ω_{p0}, it suffices to note that the uniform distribution of product states factorizes, and hence we can consider the total average as averages over the sectors i and i + M, ∀i. Thus, we have Ω_{p0} = ⊗_i ω_{i,i+M}. Since Ω_{p0} is symmetric under the exchange of i and i + M, we then have ω_{i,i+M} = c P^{(+)}_{i,i+M}, where c is a constant equal to c = (d + 1)^{−1} (see also Ref. [72]). Alternatively, the combined integral over the sectors i and i + M can be expressed in terms of an integral over the unitary group, where the initial state is fixed and we vary the unitary acting on the systems i and i + M; there we use the facts that Tr[ρ_0] = 1 and Tr[ρ_0 SWAP_{i,i+M}] = 1. Note that for even M the M-tangling power holds for arbitrary gates U, whereas for odd M it holds only for controlled (CR-type) evolutions.

A.3 Proof of the M-tangling power for CR-type evolution

Before going into the proof, it is useful to find the action of products of symmetric/antisymmetric projectors (which project onto the sectors i and i + M) on a trial product state. Suppose we have a collection of states {|ϕ_l⟩, |ϕ_{l+M}⟩}_{l=1}^{M}, and suppose further that we have the ordered ket |ϕ_1 ... ϕ_M; ϕ_{M+1} ... ϕ_{2M}⟩. If we act with the projectors P^{(±)}_{i,i+M} on this state, each pair of sectors (i, i + M) is projected onto its symmetric or antisymmetric component. Note that for this to hold, it is important that both the kets and the symbols inside the kets are labeled by an index; otherwise, if the kets are not labeled, the expression holds only up to re-ordering of the unlabeled kets.
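As a small numerical check of the even-M formula quoted at the start of A.1 (an added sketch, under our reading that P^{(−)} is the singlet projector on each (i, i+M) pair), the direct form |⟨ψ|σ_y^{⊗M}|ψ*⟩|² can be compared with the two-copy form 2^M ⟨ψ|^{⊗2} ∏_i P^{(−)}_{i,i+M} |ψ⟩^{⊗2} on a random four-qubit state:

```python
import numpy as np

def mtangle_direct(psi, M):
    """Even M-tangle |<psi|sigma_y^{(x)M}|psi*>|^2."""
    sy = np.array([[0, -1j], [1j, 0]])
    op = np.array([[1.0 + 0j]])
    for _ in range(M):
        op = np.kron(op, sy)
    return abs(psi.conj() @ op @ psi.conj()) ** 2

def mtangle_two_copy(psi, M):
    """Linearized two-copy form 2^M <psi,psi| prod_i P^-_{i,i+M} |psi,psi>."""
    singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
    Pm = np.outer(singlet, singlet)          # P^- = |Psi^-><Psi^-| on (i, i+M)
    psi2 = np.kron(psi, psi)                 # two copies: qubits 1..M and M+1..2M
    # reorder the two-copy state so that qubit i sits next to qubit i+M
    t = psi2.reshape([2] * (2 * M))
    perm = [k for i in range(M) for k in (i, i + M)]
    paired = np.transpose(t, perm).reshape(-1)
    proj = np.array([[1.0 + 0j]])
    for _ in range(M):
        proj = np.kron(proj, Pm)
    return (2 ** M) * np.real(paired.conj() @ proj @ paired)

rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)                   # random 4-qubit pure state
print(np.isclose(mtangle_direct(psi, 4), mtangle_two_copy(psi, 4)))  # True

ghz4 = np.zeros(16); ghz4[0] = ghz4[15] = 1 / np.sqrt(2)
print(mtangle_direct(ghz4, 4))               # 1.0 for GHZ_4
```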
To derive a closed-form expression for the M-tangling power of CR-type gates in defect systems, we start by considering odd M. For odd M, we begin from the general expression of the M-tangle and define the vector |m; m + M⟩. The action of the projectors P^{(±)}_{l,l+M} on this vector yields the expressions of Eqs. (30)-(31), where we have suppressed the summation symbols and defined the single-spin operators Q^{(l)} for l ≠ 1. Using Eq. (31) and Eq. (30), we evaluate the action of 2^M (U†)^{⊗2} P on |m; m + M⟩ (Eq. (33)). We then find the action of [2(d + 1)]^M U^{⊗2} Ω_{p0} on the bra ⟨m; m + M| (Eq. (34)). Combining the two, we arrive at four groups of products of Kronecker deltas, where we use the fact that the sums over the indices {m_i} and {m_{i+M}} factorize.

Working with the first group of Kronecker deltas, we use properties of the traces of the operators Q^{(l)}, which produce factors of the form 4 − Tr[Q^{(l)}] Tr[(Q^{(l)})†] (Eq. (36)). The second group of Kronecker deltas vanishes, and for similar reasons the third group also vanishes. Working with the last group, we make use of (−1)^{M−1} = 1, since M − 1 is even (Eq. (38)). Therefore, we arrive at our final expression for odd M (Eq. (39)), which concludes the proof.

For even M, note that P = ∏_{i=1}^{M} P^{(−)}_{i,i+M}. Again, we follow a similar procedure and start by calculating the action of 2^M (U†)^{⊗2} P from the left on the ket |m; m + M⟩ (Eq. (40)). We then combine Eq. (34) with Eq. (40). The first group of Kronecker deltas gives a nonvanishing contribution, and it is easy to show that the second and third groups evaluate to 0, for similar reasons as in the odd case. The fourth group produces the remaining contribution, and collecting terms yields the expression 2^M Tr[U^{⊗2} Ω_{p0} (U†)^{⊗2} P] for even M.

A.4 M-tangling power of non-unitary quantum evolution

In this section, we consider a setup of L nuclear spins, K of which are the target ones, with the remaining L − K being the unwanted spins. Thus, the target subspace has M = K + 1 qubits, consisting of the electron and the K target nuclear spins. In Ref. [38], we showed that the Kraus operators E_i associated with the partial trace over the L − K unwanted spins have a closed-form expression, with coefficients f^{(i)}_j given by two products over the unwanted spins: for the i-th Kraus operator, the first product is taken over the unwanted spins comprising the environment that are in |0⟩, and the second product over the unwanted spins that are in |1⟩. If all spins are in |0⟩, the second product is 1, whereas if all spins are in |1⟩, the first product is 1. The evolution of the target subspace is thus described by the Kraus operators via the quantum channel E(ρ) = Σ_i E_i ρ E_i†.

To include the impact of the unwanted subspace in the M-way entanglement generated by the quantum evolution, we define the M-tangling power of the non-unitary quantum channel by replacing the unitary in the definition of ϵ_{p,M}(U) with a double sum over Kraus operators. Clearly, in the limit of a unitary channel, i.e., E_r = E_s = U (the summation runs over the unique Kraus operator), we recover the M-tangling power of a unitary.

To derive a closed-form expression, we follow a similar procedure as in the unitary case. We begin by defining the relevant overlaps and, starting with the odd-M case, we find that only the first and last groups of products of Kronecker deltas survive. Therefore, we obtain ϵ_{p,M}(E) for the odd case, whereas for the even case, following similar steps, we obtain Eq. (52). In these expressions we have made use of the completeness relation Σ_i E_i† E_i = 1. We note that the resulting expressions for even and odd numbers of qubits in the target subspace are identical.
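To illustrate the operator-sum construction of A.4 in code (an added sketch; the rotation axes and angles below are hypothetical, and we take a single target and a single unwanted nucleus), the Kraus operators of the partial trace can be built as E_k = (1 ⊗ ⟨k|_u) U (1 ⊗ |0⟩_u), after which the completeness relation Σ_k E_k† E_k = 1 is verified directly:

```python
import numpy as np
from scipy.linalg import expm

def rot(n, phi):
    # rotation of a nuclear spin by angle phi about axis n
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)
    return expm(-1j * phi / 2 * (n[0] * X + n[1] * Y + n[2] * Z))

# hypothetical axes/angles for target (t) and unwanted (u) nuclei
R0t, R1t = rot([0, 0, 1], 0.4), rot([1, 0, 0], 0.4)
R0u, R1u = rot([0, 0, 1], 0.1), rot([0.9, 0, 0.436], 0.12)

P0, P1 = np.diag([1.0, 0]).astype(complex), np.diag([0, 1.0]).astype(complex)
# CR-type evolution on (electron, target, unwanted)
U = np.kron(P0, np.kron(R0t, R0u)) + np.kron(P1, np.kron(R1t, R1u))

ket = lambda k: np.eye(2)[:, [k]].astype(complex)
E = []
for k in range(2):
    # E_k = (1 x <k|_u) U (1 x |0>_u), acting on the 4-dim (electron, target) space
    Ek = np.kron(np.eye(4), ket(k).conj().T) @ U @ np.kron(np.eye(4), ket(0))
    E.append(Ek)

S = sum(Ek.conj().T @ Ek for Ek in E)
print(np.allclose(S, np.eye(4)))   # True: sum_k E_k^dag E_k = identity
```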
Further, the non-unitary M-tangling power is upper bounded by the unitary M-tangling power.

Let us now simplify the expression that appears due to the summation over Kraus operators. We first define the set S = {1, 2, ..., L − K} of unwanted spin indices and let P(S) be the power set of S. This power set is useful for writing down which unwanted spins are in |0⟩ or |1⟩ in the Kraus operators. Let p ∈ P(S) be an element of the power set and l ∈ p an unwanted spin index within the subset p. Carrying out the summation, we define α_l = ⟨0|R^{(l)}_{n_1}|0⟩. To evaluate the resulting expression, it helps to look at some examples. If we have one unwanted nuclear spin, then P(S) = {{}, {1}}, and the sum contains two terms; for 2 unwanted nuclear spins it contains four terms; and similarly, for 3 unwanted nuclear spins, the summation over the power set evaluates to the corresponding eight-term expression.

B Dephased M-tangling power of the target subspace

For one CPMG decoupling unit, the electronic dephasing channel leads to a set of Kraus operators (neglecting global phases). We label the Kraus operators of one unit with the bitstring m = (m_1, m_2, m_3), where each m_j can take the value 0 or 1. The only non-zero terms occur for (q, j, r) = (0, 0, 1) and (q, j, r) = (1, 1, 0), respectively, and we define ⊕ as addition modulo 2. It is easy to verify that the Kraus operators of one decoupling unit satisfy the completeness relation Σ_{m_1,m_2,m_3} [D^{CPMG}_{deph,m}]† D^{CPMG}_{deph,m} = 1, where we use the fact that λ_0 + λ_1 = 1.

For multiple decoupling units, the Kraus operators are products of single-unit Kraus operators labeled by m^{(k)}, where N is the number of iterations of a single unit. In this notation, each m^{(k)} consists of indices (m_1^{(k)}, m_2^{(k)}, m_3^{(k)}), and each of them takes the value 0 or 1. Each Kraus operator is distinguished by a different ordering of 0s and 1s in the total vector m. For N iterations, we have 2^{3N} such Kraus operators. For brevity of notation, we label the composite Kraus operators that result after N CPMG iterations as D_{N,q}, where q = 1, ..., 2^{3N} labels which Kraus operator we are referring to.

Let us now look into the M-tangling power, where we focus on the subspace of the electron and the K target nuclear spins, ignoring contributions from unwanted nuclei. This target subsystem now evolves under the non-unitary channel E_deph(ρ) = Σ_r D_{N,r} ρ [D_{N,r}]†. We first write D_{N,r} = Σ_j g_j^{(r)} σ_jj ⊗_l R^{(l)}_{n_j}; note that here R^{(l)}_{n_j} is the rotation after N iterations, acting on the l-th nuclear spin. Thus, we can express the non-unitary M-tangling power of the channel in terms of the coefficients (g_0^{(r)})* g_1^{(r)}, where we make use of the results of the previous section. Note that in this case the entangling power expresses only the evolution in the target subspace; in other words, we do not include the correlations that arise from the unwanted spins. In the limit of no dephasing, i.e., λ_0 = 1, λ_1 = 0, and θ = 0, the only Kraus operator that survives corresponds to the bitstring m = [0, ..., 0]^T, and we recover the unitary channel. We can verify this from the above equation, since in that case r = s = 0.

Using the definition of g_j^{(r)} and considering only one CPMG iteration, we find an explicit expression for the sum, and for N CPMG iterations we can show that R takes a closed form, where we define f^{(r)} = (g_0^{(r)})* g_1^{(r)}. Using this result, we find that the M-tangling power of the target subspace in the presence of dephasing errors is given by Eq.
(72). It is easy to verify that in the limit λ_0 = 1, λ_1 = 0 (or λ_0 = 0, λ_1 = 1), we recover the unitary M-tangling power, as expected.

If we compose p CPMG sequences of different unit times t_j and iterations N_j, then R generalizes accordingly. Based on this expression, we verify that in the limit where all p sequences have the same unit time t = t_j, ∀j ∈ [1, p], we recover the previous expression for R. In particular, this generalized expression is the one we use for the sequential protocol when the electron undergoes dephasing.

C Comparison of sequential with multi-spin schemes in the presence of electronic dephasing

In this section, we compare the performance of the multi-spin scheme with the sequential scheme in the presence of electronic dephasing errors. In Fig. 12(a), we show ϵ_{p,M}(E_deph) for case "16" of preparing GHZ_3-like states with the sequential scheme. In Fig. 12(b), we show ϵ_{p,M}(E_deph) for case "7" of preparing GHZ_3-like states with the multi-spin scheme. The nuclei we control with the sequential scheme are C5 and C13, and with the multi-spin scheme, C4 and C5. The total sequence time for the sequential scheme is 1357.9 µs, and for the multi-spin scheme 1916.6 µs. Although the multi-spin scheme requires a slightly longer time than the sequential one, it is again more robust to dephasing errors. This feature could be attributed to the different unit times t_j and sequence iterations N_j that enter the expression of ϵ_{p,M}(E_deph) for the sequential scheme, since we are composing CPMG sequences of different timings and iterations in the sequential protocol.

D Comparison of XY2 and CPMG performance under pulse errors

In this section, we study over-/under-rotation errors of the electronic control pulses. We consider the optimal cases we found in the main text for preparing GHZ_4-like states via the sequential or multi-spin scheme. The systematic pulse errors are simulated as R_{x/y}(π + ϵ) = e^{−i(π/2)(1+ϵ)σ_{x/y}} for the CPMG and XY2 sequences. We calculate the deviation between the M-tangling power of the target subspace in the absence of errors, ϵ_{p,M}(U)_{ϵ=0}, and the one in the presence of errors, ϵ_{p,M}(U)_{ϵ≠0}. We define this deviation as Δϵ_{p,M}(U) = |ϵ_{p,M}(U)_{ϵ=0} − ϵ_{p,M}(U)_{ϵ≠0}|. In Fig. 13(a), we show Δϵ_{p,M}(U) for the 7 cases of preparing GHZ_4-like states via the multi-spin scheme, assuming the XY2 sequence. The differently colored lines correspond to each of the 7 cases, and the labels give the total number of sequence iterations. We see that for small pulse errors (≤ 3-4%), the deviation of this quantity is on the order of 10^{−3}-10^{−2}. Additionally, we expect that as the number of pulses increases, the deviation is, in principle, enhanced [e.g., see the dark red line versus the green line]. This feature, however, does not always hold, since the dynamics are case-dependent and tend to corrupt the GHZ preparation differently. In Fig. 13(b), we repeat the same calculation assuming the CPMG sequence, for a range of systematic pulse errors from 0.1-0.8%. As expected, the ability of the CPMG sequence to prepare high-quality GHZ_4-like states is severely impacted by the pulse errors. In Fig.
13(c), we display in four panels the deviation in M-tangling power for the 29 cases of preparing GHZ_4-like states using the sequential protocol. The labels correspond to the total number of iterations, Σ_{j=1}^{3} N_j. In this scenario, we assume the XY2 sequence. We notice a performance similar to that of the multi-spin scheme, and the results again reveal the robustness of XY2 under moderate pulse errors (≤ 3-4%).

E Rotation angle and rotation axis errors

In this section, we consider the effect of rotation-angle and rotation-axis errors on the M-tangling power. We consider as an example the 7 cases of preparing GHZ_4-like states via the multi-spin scheme. To showcase the robustness of the XY2 decoupling sequence, we consider both rotation-axis and rotation-angle errors, assuming the estimated parameters of Ref. [69], which we also mentioned in Sec. 7.2, and model the erroneous π_X and π_Y pulses accordingly. In Fig. 14, we show the ideal M-tangling power (dark blue bars) assuming a perfect CPMG decoupling sequence. The light blue bars correspond to the XY2 decoupling sequence, including the errors ϵ_x = ϵ_y = −0.02, q_z = 0.05, q_x = 0.005. We observe that the deviation in the expected M-way correlations in the presence of both rotation-axis and rotation-angle errors is sufficiently small, and hence the XY2 decoupling sequence can be reliably used in experiments to prepare high-fidelity GHZ states.

F Uncertainty in HF parameters

In this section, we consider the effect of errors due to uncertainty in the experimentally measured hyperfine (HF) parameters of the nuclear spins. We use the same optimal sequence parameters we found for the HF parameters considered in the main text, and additionally shift the HF parameters A and B of the target nuclear spins by 0.01 or 0.05 kHz. In Fig. 15, we consider a 0.01 kHz shift and plot the deviation Δϵ_{p,M}(U) := ϵ_{p,M}(U) − ϵ^{uncertain}_{p,M}(U) for the cases of preparing GHZ states of up to 6 qubits, for the sequential scheme in Fig. 15(a) and for the multi-spin scheme in Fig. 15(b). We observe that a 0.01 kHz uncertainty in the HF parameters produces only a negligible uncertainty, on the order of 10^{−3}, relative to our ideal calculations of ϵ_{p,M}(U). In Fig. 16, we repeat the same calculations for the sequential scheme in Fig. 16(a) and for the multi-spin scheme in Fig. 16(b), assuming a shift of the HF parameters by 0.05 kHz. We note that the error in the estimation of ϵ_{p,M}(U) is on the order of 10^{−2}-10^{−3}. Thus, based on the experimentally achievable accuracy of the measured HF parameters [37], we can estimate the M-tangling power within a relatively narrow confidence interval. Optimization within a small range of the optimal parameters t* and N* that we find for definite values of the HF parameters can guide the experimental calibration of the pulse timing and of the optimal sequence iterations.

G Optimization process for GHZ_M-like generation

Here we highlight the optimization procedure for generating GHZ_M-like states using the sequential or multi-spin protocols. In Fig. 2 of the main text, we show a diagram of the optimization procedure for the sequential protocol. First, we need to provide as inputs (collected in the configuration sketch after this list):

• the size of the GHZ-like state we want to prepare (|GHZ|),
• the maximum resonance number (k_max) to search for each nuclear spin,
• the maximum time of the total sequence and of individual gates (T_max),
• the tolerances on the unwanted/target nuclear one-tangles,
• the gate error tolerance.
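These inputs could be gathered in a small configuration object, as in the following sketch (our illustration, with hypothetical default values; the field names are ours, not from the original work):

```python
from dataclasses import dataclass

@dataclass
class GHZSearchConfig:
    ghz_size: int                   # |GHZ|: number of qubits in the target GHZ-like state
    k_max: int                      # maximum resonance number searched per nuclear spin
    t_max_us: float                 # max total-sequence (or per-gate) time T_max, in us
    target_one_tangle_tol: float    # accept target nuclei with one-tangle above this
    unwanted_one_tangle_tol: float  # reject cases with unwanted one-tangles above this
    gate_error_tol: float           # maximum acceptable gate error of the composite gate

# hypothetical values, for illustration only
cfg = GHZSearchConfig(ghz_size=4, k_max=5, t_max_us=3000.0,
                      target_one_tangle_tol=0.95, unwanted_one_tangle_tol=0.05,
                      gate_error_tol=0.01)
```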
Then, for each nuclear spin, we search over all resonances by varying the unit time around each resonance by δt = ±0.2 µs, with a time step of 5 × 10^{−3} µs, and select as optimal the unit time t where n_0 · n_1 is as close as possible to −1. Having fixed the unit time, we then use the analytical expressions for the minima of G_1, which give us the numbers of iterations that maximize the nuclear one-tangle [see Ref. [38] for the expressions of the minima]. Since G_1 is periodic, there are multiple numbers of iterations that can minimize G_1, so we usually truncate to about 15 maxima of the one-tangles, which we also post-select such that N · t ≤ T_max. Note that here T_max could be the gate-time restriction we impose on a single decoupling sequence, rather than the total gate-time restriction for the composition of all sequential gates. We then inspect all elements in the sets of {t, N} and keep those that give a sequence with gate time smaller than T_max. After this step, we calculate the remaining nuclear one-tangles and check whether they are smaller than the unwanted one-tangle tolerance. If this is not satisfied, we reject that particular (t, N) case; if it is satisfied, we store the unit time and number of iterations. We repeat this process for each nuclear spin, so that we have all the possible unit times and iterations that can give maximal entanglement of the target spins with the electron while keeping the unwanted one-tangles minimal. At this stage, we have multiple unit times and iterations that satisfy these requirements.

We then combine the unit times and iterations corresponding to sets associated with the selection of any |GHZ| − 1 nuclei out of the entire nuclear register, such that the total decoupling sequence does not exceed the maximum time T_max. For each such nuclear spin combination, we evolve all nuclei individually under the composite evolution and obtain their nuclear one-tangles. At this step, each nuclear spin combination is associated with multiple unit times and iterations that we could consider, so we need to choose which t and N we keep for each distinct nuclear spin combination. We choose the t and N that give rise to a maximal M-tangle at the end of the composite |GHZ| − 1 entangling gates. Different choices could be made here, e.g., selecting the t and N that give the shortest gate time, or penalizing both the gate time and the deviation from the maximal possible M-tangle using an appropriate cost function. At this stage, we have narrowed down the nuclear spin candidates and associated each nuclear spin combination with a particular sequence, composed of times t_j and iterations N_j, that gives rise to maximal M-way entanglement. For each of these cases, we inspect whether the unwanted nuclear one-tangles of the composite evolution are below the unwanted one-tangle tolerance we imposed. If this is satisfied, we accept the case and proceed with the calculation of the gate error. Finally, we accept the case if the gate error is lower than the gate error tolerance. In Table 2, we provide the tolerances and relevant parameters we set for each GHZ size.

Table 3: Tolerances (tol.) and relevant parameters for the optimization of the generation of GHZ_M-like states using the multi-spin protocol.
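The per-nucleus stage of the search described above could be organized along the following lines. This is a schematic we add for illustration: `axes_dot`, `g1_minima_iterations`, `one_tangles`, and `spin.resonance` are hypothetical helpers standing in for the closed-form expressions of Ref. [38], and `cfg` is the configuration object sketched earlier.

```python
import numpy as np

def scan_unit_time(resonance_time, axes_dot, dt=0.2, step=5e-3):
    """Pick the unit time near a resonance where n0.n1 is closest to -1.

    `axes_dot(t)` is a hypothetical helper returning n0(t) . n1(t) for the
    rotation axes of one nuclear spin after a single decoupling unit.
    """
    ts = np.arange(resonance_time - dt, resonance_time + dt + step, step)
    return min(ts, key=lambda t: abs(axes_dot(t) + 1.0))

def candidate_sequences(spin, cfg, axes_dot, g1_minima_iterations, one_tangles):
    """Collect (t, N) pairs for one nuclear spin that pass the tolerances."""
    accepted = []
    for k in range(1, cfg.k_max + 1):
        t = scan_unit_time(spin.resonance(k), axes_dot)
        # iterations sitting at minima of G1 (maximal target one-tangle),
        # truncated to ~15 candidates and post-selected so that N * t <= T_max
        for N in g1_minima_iterations(spin, t, n_keep=15):
            if N * t > cfg.t_max_us:
                continue
            tangles = one_tangles(t, N)     # one-tangles of all the other nuclei
            if max(tangles) <= cfg.unwanted_one_tangle_tol:
                accepted.append((t, N))
    return accepted
```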
Regarding the multi-spin protocol, we do not compose gates, since we are interested in a single-shot operation that generates direct entanglement of the electron with multiple nuclei. We explain here the procedure we follow according to the flowchart of Fig. 2. We start with one nuclear spin selected from the entire register and find its resonance time for some particular resonance number k. We vary the unit time t within ±0.25 µs and define an upper bound on the maximum number of iterations that respects the gate-time restriction for the particular unit time. Then, for all unit times and N ∈ [1, N_max], we obtain all the nuclear one-tangles, using the knowledge of the rotation angles and the dot products of the nuclear axes for each nucleus given one iteration. Then, for all possible times and iterations, we check how many one-tangles are above the target one-tangle tolerance and how many are below the unwanted one-tangle tolerance. If we have |GHZ| − 1 one-tangles above the target one-tangle tolerance, and all other nuclear one-tangles are below the unwanted one-tangle tolerance, then we accept this case. We repeat this process by choosing a different resonance number k, up to some k_max. Then, we select a different nuclear spin from the register as our starting point and repeat all the aforementioned steps.

After completing the above stage, we have multiple times and iterations for which we maximize |GHZ| − 1 target one-tangles simultaneously. We then rearrange all these cases, corresponding to different unit times and iterations, in terms of unique spin combinations. For each spin combination, we select the time and iterations of the single-shot operation that give the maximal M-tangle. As in the sequential scheme, the choice of narrowing down particular t and N could be based on minimal gate time, or on a cost function that both minimizes the gate time and maximizes the M-tangle for the particular spin combination. Finally, for each spin combination, we evolve all unwanted nuclear spins individually, so that we can pass this information to the calculation of the gate error. If the gate error is below the tolerance we imposed, we then accept the case.

H One-tangles of the sequential scheme for generation of GHZ_M-like states

Here we present the nuclear one-tangles corresponding to the sequential entanglement scheme of Fig. 5. The nuclear one-tangles for all different realizations of entangling M − 1 nuclei with the electron to prepare GHZ_M-like states are shown in Fig. 17. The labels above each bar of each case refer to a 13C nuclear spin of the register, labeled Cj with j ∈ [1, 27].
I Eigendecomposition of the mixed state

To keep the discussion general, we consider L nuclear spins, with K of them being the target ones and hence L − K being the unwanted nuclei. To derive the optimal ensemble for the calculation of the entanglement of the mixed state, we assume that the initial state of the system is an arbitrary pure product state (Eq. (77)). Thus, the full density matrix reads:

ρ = Σ_{j,k∈{0,1}} σ_{jj} ρ_el σ_{kk} ⊗_l R^{(l)}_{n_j} ρ^{(l)}_{nuc} (R^{(l)}_{n_k})†.   (78)

Next, suppose that we wish to trace out the last L − K spins from the density operator (assuming that we have ordered the basis such that these appear last in the Kronecker product). This introduces overlap factors of the form Tr[R^{(l+K)}_{n_j} ρ^{(l+K)}_{nuc} (R^{(l+K)}_{n_k})†]. From these, we can easily find the eigenvalues λ_± of the electron's reduced density matrix (Eq. (81)), with the eigenvectors given in terms of c = α* β f_{10} (provided c ≠ 0). Now, to find the eigenvectors of the total reduced density matrix, we need to reapply the controlled gates of the target subspace to the matrix whose columns are the two eigenvectors (Eq. (83)). We thus obtain the diagonalized density operator, which means that we can use the trial state of Eq. (9) to find the entanglement of the mixed reduced density matrix of the electron and the target nuclei. Note that, given arbitrary initial states, the highest rank that the reduced density matrix can have (irrespective of the initial pure state of the system, the number of total nuclei, or the number of nuclei we trace out) is 2, due to the form of the controlled evolution operator.

In the case when the electron starts from a non-superposition state, e.g., when β = 0, we find the two eigenvalues of Eq. (86). In this scenario, since λ_− = 0, the reduced density matrix corresponds to a pure product state.

J Concurrence and three-tangle

Here we compare the entanglement of rank-2 mixed states of two qubits with the entanglement of rank-2 mixed states of three qubits. For two-qubit pure states, the concurrence is defined as C(|ψ⟩) = √(2(1 − Tr[ρ²_{A(B)}])), where ρ_{A(B)} is the reduced density matrix of system A (B). The entanglement of a mixed two-qubit state can be computed with methods similar to those in the main text, by minimizing the concurrence of a trial state. We assume a trial state of the form:

|ψ_{trial,2}⟩ = √p |Φ_+⟩ + e^{iχ} √(1 − p) |Φ_−⟩,

where |Φ_±⟩ = (|00⟩ ± |11⟩)/√2 are two of the Bell states. We further consider a rank-2 mixed three-qubit state which is a mixture of the two states |GHZ_±⟩ = (|0⟩^{⊗3} ± |1⟩^{⊗3})/√2. Thus, similarly to the main text, we define the trial state:

|ψ_{trial,3}⟩ = √p |GHZ_+⟩ + e^{iχ} √(1 − p) |GHZ_−⟩.   (88)

For each value of p ∈ [0, 1], we vary the relative phase χ of the two terms, for both |ψ_{trial,2}⟩ and |ψ_{trial,3}⟩, and obtain the minimal concurrence or three-tangle, respectively. We plot the results as a function of p in Fig. 18. We observe that the minimum concurrence, as well as the minimum three-tangle, are both convex functions. In fact, the minimum concurrence is given by C = 2|p − 1/2| and the minimum three-tangle by (1 − 2p)^2. We note that for all values of p (except p = 1/2, where we have maximally mixed states) the minimum three-tangle is lower than the minimum concurrence; hence, the three-way entanglement is more sensitive to the relative ratio of the superposition terms in the mixed state than the concurrence of two-qubit mixed states.

J.1 Parameters for 27 nuclear spins

The HF parameters of the 27-nuclear-spin register we consider in the main text are presented in Table 4 and can also be found in Refs. [37,61].
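As a numerical companion to the rank-2 claim of Appendix I (again an added sketch, with hypothetical random rotation axes rather than the register's hyperfine data), one can apply a CR-type evolution to a pure product state, trace out the unwanted spins, and verify that the reduced state of the electron plus target nucleus has rank at most 2:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
P0, P1 = np.diag([1.0, 0]).astype(complex), np.diag([0, 1.0]).astype(complex)

def rot(n, phi):                           # nuclear rotation about (normalized) axis n
    Y = np.array([[0, -1j], [1j, 0]])
    return expm(-1j * phi / 2 * (n[0] * X + n[1] * Y + n[2] * Z))

rng = np.random.default_rng(1)
L = 4                                      # 1 target + 3 unwanted nuclei (toy size)
R0 = [rot(v / np.linalg.norm(v), a) for v, a in
      [(rng.normal(size=3), rng.uniform(0, np.pi)) for _ in range(L)]]
R1 = [rot(v / np.linalg.norm(v), a) for v, a in
      [(rng.normal(size=3), rng.uniform(0, np.pi)) for _ in range(L)]]

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

U = np.kron(P0, kron_all(R0)) + np.kron(P1, kron_all(R1))   # CR-type evolution

plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = kron_all([c.reshape(2, 1) for c in [plus] + [np.array([1.0, 0.0])] * L]).ravel()
psi = U @ psi0

# trace out the last L-1 (unwanted) spins; keep electron + 1 target nucleus
m = psi.reshape(4, 2 ** (L - 1))
rho = m @ m.conj().T
print(np.sum(np.linalg.eigvalsh(rho) > 1e-10))   # number of nonzero eigenvalues: 2
```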
Figure 1: Schematics of two protocols for generating GHZ_M-like states (shown for M = 4 and using the CPMG sequence). (a) The multi-spin scheme is capable of generating direct entanglement between the electron and a subset of nuclei from the nuclear spin register in a single shot. (b) The sequential scheme requires M − 1 consecutive entangling gates to prepare a GHZ_M-like state.

Figure 3: Generation of GHZ_3-like states via the sequential protocol. Each case # corresponds to a different composite DD sequence that sequentially entangles a pair of nuclei with the electron to generate GHZ_3-like states. (a) Nuclear one-tangles (scaled by 2/9) after the two entangling gates. The first (second) bar of each case corresponds to a particular nuclear spin, labeled as Cj. (b) Nuclear rotation angles (ϕ := ϕ_0) and (c) dot products of nuclear rotation axes at the end of the composite evolution of the two entangling gates. (d) M-tangling power for each case of composite evolution. (e) Total gate time of the sequence and (f) gate error of the composite evolution due to residual entanglement with the remaining nuclear spins that are not part of the GHZ_3-like state.

Figure 4: Generation of GHZ_3-like states using the multi-spin protocol. Each case # corresponds to a different DD sequence that simultaneously entangles a pair of nuclei with the electron to generate GHZ_3-like states. (a) Nuclear one-tangles (scaled by 2/9) after the single-shot operation. (b) Nuclear rotation angles and (c) dot products of nuclear rotation axes at the end of the single-shot entangling operation. (d) M-tangling power for each case. (e) Total gate time of the sequence and (f) gate error of the evolution due to residual entanglement with the remaining nuclear spins that are not part of the GHZ_3-like state.

Figure 5: Preparing states locally equivalent to GHZ_M via the sequential scheme. Each case # corresponds to a unique composite DD sequence that selects M − 1 nuclei from the register. (a) M-tangling power of the composite M − 1 entangling gates for various cases. (b) Gate error of the composite evolution due to the presence of unwanted nuclei, and (c) gate time of the total sequence for each case and each size of GHZ-like state. The panels from top to bottom correspond to GHZ_4, GHZ_5, GHZ_6, GHZ_7, GHZ_8, GHZ_9 and GHZ_10 states. The nuclear one-tangles for each case and value of M can be found in Appendix H.

Figure 7: (a) Three-tangle of the pure state of the target subspace ignoring unwanted nuclei (purple), and three-tangle of either eigenvector (blue) of the mixed state for each of the 39 cases of Fig. 3. The 39 cases are sorted in terms of the largest eigenvalue, λ_+ = p, shown along the x-axis. (b) Three-tangle of the mixed electron-nuclear state (red), obtained by minimizing the entanglement of the trial state of Eq. (9) for each value of p. The dashed line shows the convex hull.

Figure 8: (a) Comparison of the unitary, ϵ_{p,M}(U), with the non-unitary, ϵ_{p,M}(E), M-tangling power for the 29 cases of Fig. 5 that prepare GHZ_4-like states in the target subspace via the sequential protocol. (b) Comparison of ϵ_{p,M}(U) with ϵ_{p,M}(E) for the 7 cases of Fig. 6 for preparing GHZ_4-like states via the multi-spin protocol. In both panels, the dark blue bars show the unitary M-tangling power and the pink bars the non-unitary M-tangling power, whereas the green bars show an approximation of the latter.
Figure 9: M-tangling power of the target subspace when the electronic qubit experiences dephasing. (a) M-tangling power for the 39 cases of preparing GHZ_3-like states with the sequential protocol. (b) M-tangling power of the 27 cases of preparing GHZ_3-like states with the multi-spin scheme. The darker blue bars in (a) and (b) correspond to the no-dephasing scenario, whereas the remaining colors include electronic dephasing with dephasing angles θ^{−1} = (400, 100, 50) µs. For the cases where we consider dephasing, we set λ_0 = 0.98. (c) M-tangling power for case # 1 of the sequential scheme for different dephasing angles and λ_0 parameters. (d) Same as in (c) but for case # 1 of the multi-spin scheme.

Figure 10: Generation of GHZ_4-like states in the presence of systematic over-rotation pulse errors. (a), (b) M-tangling power using the 29 cases we found for preparing GHZ_4-like states via the sequential scheme, assuming a systematic error of 2% in (a) and of 8% in (b). (c), (d) M-tangling power for the 7 cases of preparing GHZ_4-like states using the multi-spin scheme, assuming a 2% error in (c) and an 8% error in (d). In all panels, the bars with the highest value of ϵ_{p,M}(U) correspond to the error-free case. The orange bars show the CPMG sequence including pulse errors, whereas the light blue bars show the XY2 sequence including pulse errors. All bars are scaled by the maximal value of (2/3)^4.

Figure 11: Generation of GHZ_4-like states in the presence of random over-/under-rotation pulse errors. (a) Mean M-tangling power using the 29 cases we found for preparing GHZ_4-like states via the sequential scheme, assuming rotation-angle errors sampled from a normal distribution with standard deviation σ = 0.01. (b) Mean M-tangling power for the 7 cases of preparing GHZ_4-like states via the multi-spin scheme, assuming the same standard deviation. In both (a) and (b), the bars with the highest value of ϵ_{p,M}(U) correspond to the error-free case. The orange bars correspond to the CPMG case including pulse errors, whereas the light blue bars show the XY2 case including pulse errors. All bars are scaled by the maximal value of (2/3)^4. Each error bar for CPMG and XY2 in (a) captures the range of ϵ_{p,M}(U), where we run 500 trials per case with different random samplings per sequence iteration. In (b), we perform 1000 random trials per case to display the error bars.

Figure 12: M-tangling power of the target subspace when the electron undergoes dephasing. (a) ϵ_{p,M}(E_deph) for case # 16 of preparing GHZ_3-like states via the sequential scheme, as a function of the dephasing angle θ and λ_0. (b) Same as in (a) for case # 7 of the multi-spin scheme.

Figure 14: Robustness of the XY2 decoupling sequence in the presence of pulse and rotation-axis errors. The dark blue bars show the ϵ_{p,M}(U) we find via the ideal CPMG sequence for the 7 cases of preparing GHZ_4-like states via the multi-spin scheme. The light blue bars show the ϵ_{p,M}(U) obtained via the XY2 decoupling sequence, assuming both rotation-axis and rotation-angle errors.

Figure 17: One-tangles of nuclear spins after composing M − 1 sequential entangling gates to create the cases of Fig.
5. Each case corresponds to a distinct DD sequence that selects different nuclei from the nuclear spin register. Each bar shows the one-tangle of a particular nuclear spin, and the text above the bars corresponds to the nuclear spin labels. From top to bottom, we show the nuclear one-tangles for the generation of GHZ_4-like up to GHZ_10-like electron-nuclear entangled states.

Figure 18: Minimum concurrence of a mixture of Bell states (blue) and minimum three-tangle of a mixture of |GHZ_±⟩ states (red). The black dashed line is the analytical expression of the minimum three-tangle.

Table 2: Tolerances (tol.) and relevant parameters for the optimization of the generation of GHZ_M-like states using the sequential protocol.

Table 4: Hyperfine parameters of the 13C atoms we considered in the paper.
22,358
2023-02-11T00:00:00.000
[ "Physics" ]
Targeted Nanotechnology in Glioblastoma Multiforme

Gliomas, and in particular glioblastoma multiforme, are aggressive brain tumors characterized by a poor prognosis and high rates of recurrence. Current treatment strategies are based on open surgery, chemotherapy (temozolomide) and radiotherapy. However, none of these treatments, alone or in combination, is considered effective in managing this devastating disease, resulting in a median survival time of less than 15 months. The efficiency of chemotherapy is mainly compromised by the blood-brain barrier (BBB), which selectively inhibits drugs from infiltrating into the tumor mass. Cancer stem cells (CSCs), with their unique biology and their resistance to both radio- and chemotherapy, compound tumor aggressiveness and increase the chances of treatment failure. Therefore, more effective targeted therapeutic regimens are urgently required. In this article, some well-recognized biological features and biomarkers of this specific subgroup of tumor cells are profiled, and new strategies and technologies in nanomedicine that explicitly target CSCs, after circumventing the BBB, are detailed. Major achievements in the development of nanotherapies, such as organic poly(propylene glycol) and poly(ethylene glycol) or inorganic (iron and gold) nanoparticles that can be conjugated to metal ions, liposomes, dendrimers and polymeric micelles, form the main scope of this summary. Moreover, novel biological strategies focused on manipulating gene expression (small interfering RNA and clustered regularly interspaced short palindromic repeats [CRISPR]/CRISPR-associated protein 9 [Cas9] technologies) for cancer therapy are also analyzed. The aim of this review is to analyze the gap between CSC biology and the development of targeted therapies. A better understanding of CSC properties could result in the development of precise nanotherapies to fulfill unmet clinical needs.

INTRODUCTION

Gliomas, demonstrating glial cell characteristics, represent 30% of all brain tumors, as described by the American Cancer Society (2016). These tumors, especially high-grade gliomas and glioblastoma, grow invasively in the central nervous system and cause discernible neurological symptoms within months, with an extremely poor prognosis even after aggressive open surgery combined with adjuvant chemo/radiotherapy. Recent hypotheses implicate cancer stem cells (CSCs) as a possible cause of tumor treatment resistance; however, the biological nature of these cells is still undetermined (Louis et al., 2007; Westphal and Lamszus, 2011). The development of new technologies based on nanometer-sized particles (nanotechnology) for cancer treatment has been extensively investigated in the last decade, and this approach shows potential for glioma diagnosis and treatment. Unique molecular signatures for each type of tumor have been uncovered recently, owing to advances in proteomics and genomics, opening new paths for therapies that specifically target and kill tumor cells (Cruceru et al., 2013). In this review paper, the challenges in targeting gliomas are highlighted. The concept of CSCs and their biomarkers is introduced first, and the developed nanotechnologies, including some clinical trials, are then summarized. Moreover, the application to glioblastoma multiforme (GBM) treatment of therapies already used in other fields is proposed, with a focus on CSC targeting.
CLINICAL CLASSIFICATION AND CURRENT TREATMENT OF GLIOMAS

Gliomas are brain tumors that resemble normal stromal (glial) cells of the brain, such as astrocytes (astrocytomas), oligodendrocytes (oligodendrogliomas) and ependymal cells (ependymomas). They are a group of oncological diseases for which no cure exists, and little progress has been made toward guaranteeing a longer life expectancy. Gliomas can diffusely penetrate throughout the brain and are mainly classified according to their morphological resemblance to their respective glial cell types, their cytoarchitecture and their immunohistological marker profile (Louis et al., 2007; Westphal and Lamszus, 2011). There is also a glioma grading system that distinguishes astrocytomas by four World Health Organization (WHO) grades (I, II, III, and IV), and oligodendrogliomas and oligoastrocytomas by two grades (II and III) (Louis et al., 2007).

The most aggressive and common glioma is glioblastoma (a grade IV astrocytoma). This tumor demonstrates extensive vascular endothelial proliferation, necrosis, high cell density and atypia. It can evolve from a preexisting low-grade astrocytoma (secondary glioblastoma), but usually occurs de novo (primary glioblastoma) (Westphal and Lamszus, 2011). Recently, as described in the 2016 WHO report on the central nervous system (CNS), it has been recommended that glioblastomas be divided into IDH-wildtype, IDH-mutant and NOS (not otherwise specified) categories. The IDH-wildtype (about 90% of cases) is regarded as primary or de novo glioblastoma and prevails in patients over 55 years of age; the IDH-mutant (about 10% of cases) corresponds to secondary glioblastoma, which preferentially arises in younger patients (Louis et al., 2007); and NOS is reserved for cases in which a full IDH evaluation cannot be performed (Louis et al., 2016).

In contrast, the chloroethylnitrosoureas (CNUs) alkylate the N3-position of adenine and the N7-position of guanine, inducing apoptotic cell death in p53 wild-type cells and necrotic cell death in p53-deficient cells (Fischhaber et al., 1999; Johannessen et al., 2008). Currently, temozolomide (TMZ), together with radiotherapy and surgical resection, is the most commonly applied glioblastoma treatment. Despite a boost in overall patient survival with TMZ treatment and the low toxicity of TMZ, patient prognosis remains poor: few patients survive longer than 5 years, with a median survival of approximately 14.6 months (Stupp et al., 2005, 2009).

GBM STEM CELLS AND TREATMENT RESISTANCE

A possible cause of GBM chemoresistance is the presence of CSCs. CSCs are tumor cells with stem cell-like properties that reside in GBM and can readily generate both proliferating progenitor-like and differentiated tumor cells amid microenvironment cues. CSCs may be more resistant toward radio- and chemotherapy and survive intensive oncological therapies, leading to tumor recurrence (Modrek et al., 2014). Since GBM is an aggressive tumor, the development of alternative therapies targeting CSCs is urgently needed. The origin of CSCs can be either mutated embryonic stem cells or downstream progenitors, which may already exist at birth or accumulate mutations over time (Shipitsin and Polyak, 2008). Recent studies have revealed that the "de-differentiation" of non-CSCs into CSCs can be an alternative mechanism of CSC creation (Safa et al., 2015), suggesting that diverse cell types, from stem cells to their related differentiated progeny, are amenable to oncogenic transformation.
Distinguishing between CSCs and other tumor populations largely rests on the functional multipotency that stem cells demonstrate, i.e., their capability for self-renewal and differentiation into multiple progeny. Cells that are tumorigenic and can differentiate hierarchically are commonly regarded as CSCs (termed alternatively glioma stem cells, glioma CSCs, or brain tumor stem cells). CSCs can also form sphere-shaped colonies, although this is not considered a defining feature (Pastrana et al., 2011).

BIOMARKERS FOR GLIOMA STEM CELLS

The CSC hypothesis states that CSCs escape multimodal therapy, causing tumor resistance. Some causes of this resistance could be insufficient drug delivery to the CSC niche or non-specific targeting, since therapies generally target the more differentiated tumor cells. Another premise of this hypothesis is that therapies which efficiently eliminate the CSC fraction of a tumor are able to induce long-term responses and thereby halt tumor progression. The best-described marker for CSCs is CD133, and recently new molecules such as CD15/stage-specific embryonic antigen-1 (SSEA-1) and integrin α6 have been described as novel markers. However, there is not yet a consensus on the optimal markers for CSCs in GBM. CSCs have been isolated from cancers to be analyzed and later used to screen for stem cell-specific biomarkers in tumor cells, particularly surface biomarkers. Cell-surface markers are generally cell membrane-surface antigens to which antitumor drugs can easily bind, consequently increasing the therapeutic efficiency of the drug. Therefore, membrane surface markers are more meaningful than nuclear or cytoplasmic antigens in targeted tumor therapy.

CD133 and its Limitations

CD133, also known as Prominin 1, belongs to the Prominin family and has five transmembrane regions. Singh et al. (2004) found that 100 CD133-positive cells were sufficient to induce tumorigenesis in the NOD/SCID mouse brain, whereas 100,000 CD133-negative cells were incapable of tumorigenesis. Subsequently, CD133 has been widely recognized as a biomarker of glioma stem cells. Although many studies have demonstrated transplanted tumors using CD133+ cells, some researchers have reported on the limitations of CD133 as a tumor stem cell marker. CD133+ cells serve as tumor stem cells in many organs, such as brain, lung and colon cancers, but not in gastric or breast cancers (Su et al., 2015). CD133+ cells only had tumor-initiating effects in some glioma cells and were not found in other brain tumors, such as CD15+, CD133− medulloblastomas (Read et al., 2009). Different types of glioblastoma cells derived from different patients can produce CD133+ or CD133− tumor stem cells after serum-free culture in vitro, both of which exhibit stem cell features, tumorigenic characteristics and the capability of regenerating CD133+ and CD133− cell populations. CD133+ glioma stem cells can differentiate into CD133− tumor cells, and CD133− glioma cells injected into nude rats formed tumors containing CD133+ cells (Joo et al., 2008; Wang et al., 2008). Therefore, CD133+ cells are not the only cells with the characteristics of glioma stem cells, and CD133− cells can also act as CSCs.

CD44

Recent studies have demonstrated that some glioma cell subpopulations highly express CD44, a distinctive cell adhesion molecule (Xu et al., 2010). CD44 is a glycoprotein commonly expressed in numerous malignancies (Bradshaw et al., 2016).
CD44 knockdown in GBM xenograft models has inhibited tumor cell growth while improving the response to chemotherapy (Xu et al., 2010). CD44 and CD133 are usually co-expressed in GBM spheres (Brown et al., 2015). Collectively, these data suggest that CD44 may be useful as a CSC marker.

Integrin-α6

Integrin-α6 is a member of the heterodimeric integrin family and serves as a receptor for laminins of the extracellular matrix protein family. Integrin-α6 can be used as a marker of neural stem cells, and its expression can be used to detect the tumorigenic potential of normal neural stem cells (Corsini and Martin-Villalba, 2010). Integrin-α6 is highly expressed by the glioma stem cell population and can be used to isolate glioma stem cells (Lathia et al., 2010; Velpula et al., 2012). Integrin-α6 functions in the self-renewal, proliferation, survival and growth of tumor cells in vitro, so it can be used as a regulatory target of tumor growth, while its genetic knockout can reduce tumorigenesis (Lathia et al., 2010).

CD15

CD15, also known as SSEA-1, is a carbohydrate antigen on the cell surface. Read et al. (2009) found, through mouse medulloblastoma experiments, that tumor cells that are CD15+ and CD133− have the characteristics of tumor stem cells. The tumorigenicity of CD15+ cells is 100 times higher than that of CD15− cells in human glioblastoma, where all CD15+ cells were also found to be CD133+, while most CD133+ cells also expressed CD15, suggesting that CD15 is highly likely to be another surface marker of glioblastoma stem cells (Son et al., 2009).

L1CAM

L1 cell adhesion molecule (L1CAM) belongs to the nerve cell adhesion molecule category and to the type I transmembrane glycoproteins of the immunoglobulin superfamily and is crucial in nervous system development. L1CAM supports the survival and proliferation of CD133+ glioma cells, both in vitro and in vivo, and can be targeted as a CSC-specific marker for precise treatment in malignant gliomas (Bao et al., 2008). L1CAM activates signaling pathways such as fibroblast growth factor receptor (FGFR) and focal adhesion kinase (FAK) through integrins, increasing the growth and motility of GBM cells in an autocrine and/or paracrine manner. These effects can be blocked by using small-molecule inhibitors of FGFR, integrins and FAK (Anderson and Galileo, 2016).

CD90

CD90, also known as Thy-1, is a member of the cell adhesion molecule immunoglobulin superfamily. CD90 has been found on the surfaces of nerve cells, thymocytes, fibroblast subsets, endothelial cells, mesangial cells, and hematopoietic stem cells, and it serves as a surface marker of hematopoietic stem cells (Kumar et al., 2016), mesenchymal stem cells (Kimura et al., 2016) and hepatocellular stem cells. CD90 is overexpressed in GBM and is almost absent in low-grade gliomas or normal brain tissues. All CD133+ glioma cells express CD90, and CD90+/CD133+ and CD90+/CD133− cells have the same self-renewal ability, indicating that CD133+ glioma stem cells may be a subtype of CD90+ glioma cells (He et al., 2012). In addition, CD90+ cells have also been found in glioma peritumoral vessels (Inoue et al., 2016). Therefore, CD90 can be used as a prognostic index of glioma, a marker of glioma stem cells and an indicator of glioma angiogenesis.

A2B5

A2B5 is a ganglioside on the surface of the glial precursor cell membrane. Ogden et al. (2008) detected more A2B5+ cells than CD133+ cells in glioblastoma samples, in which CD133+ cells were rare.
The cells were screened and sorted using flow cytometry, and sequential culture of A2B5+/CD133− and A2B5+/CD133+ cells showed stem cell proliferative activity, while that of A2B5−/CD133− cells did not. Tchoghandjian et al. (2010) confirmed that A2B5+/CD133− and A2B5+/CD133+ cells could form tumor stem cell spheres, while A2B5−/CD133− cells could not. These studies also show that CD133− cell populations still contain cells with stem cell activity, and A2B5+ cells may be one type of such stem cells. CD133−/A2B5+ glioma-initiating cells possess a strong migratory and invasive capacity; these cells may be an important subpopulation with high invasive potential in GBM (Sun et al., 2015). Recently, some markers typically expressed by embryonic stem cells, such as c-Myc, SOX2, and OCT-4, have been considered markers for tumor-initiating cells. These markers could be useful as a tool to identify and isolate CSCs (Ignatova et al., 2002). Moreover, Nestin, OCT-4, NANOG, SOX2, c-Myc, and KLF4 have been described as key players in the transcriptional regulation of glioblastoma CSCs (Ignatova et al., 2002; Yang et al., 2008; Guo et al., 2011; Zhu et al., 2014).

APPLICATION OF NANOTHERAPIES IN GBM

Besides drug discovery, the delivery of drugs to the brain is a major challenge in treating CNS diseases. Invasive procedures like tumor resection are not always effective for cancer treatment and are extremely complicated and delicate. A possible alternative to overcome this issue is to use systemic delivery; however, the blood-brain barrier (BBB) is an obstacle because of its low permeability, requiring higher doses of drugs, which causes increased side effects. The BBB inhibits the delivery of therapeutic agents to the CNS and prevents a large number of drugs, including antibiotics, antineoplastic agents, and neuropeptides, from passing through the endothelial capillaries to the brain (Fiandaca et al., 2011; Aryal et al., 2014; Pardridge, 2014). Safe disruption or loosening of the BBB is highly important for delivering drugs into brain niches. Successful delivery of drugs can be achieved through BBB disruption using ultrasound in intraarterial infusion therapy, which allows both chemotherapeutic agents and antibodies to bypass the BBB (Kuittinen et al., 2013). In addition, K+(Ca) channels have been identified as potential targets for modulating BBB permeability in brain tumors by assisting the formation of pinocytic vesicles carrying drugs (Ningaraj et al., 2003). Moreover, tumor drug delivery can be enhanced if drugs are injected into the brain along with a vasodilator, such as bradykinin, nitric oxide donors, or agonists of soluble guanylate cyclase or of calcium-dependent potassium (K+(Ca)) channels. Furthermore, cerebral blood flow could be modulated and therapeutic efficacy augmented by applying a nitric oxide donor, which selectively opened the blood-tumor barrier in rats with intracerebral C6 gliomas (Fross et al., 1991; Weyerbrock et al., 2003, 2011; Black and Ningaraj, 2004). Aiming to enhance transport through, or bypass of, the BBB, many research groups have been developing new nanotechnologies to overcome these obstacles. Many biochemical modifications of drugs and drug nanocarriers have been developed, enabling local delivery of high doses while avoiding systemic exposure. In this section, BBB properties and recently developed nanotechnologies that allow systemic drug delivery for CNS cancer therapy are discussed.
THE BLOOD-BRAIN BARRIER

The BBB is a selectively permeable barrier formed by the endothelial cells lining the lumen of brain capillaries, which lack pinocytosis and fenestrations because of the presence of tight junction complexes (Eichler et al., 2011; Chacko et al., 2013; Papademetriou and Porter, 2015). In addition to tight junction complexes, the BBB degrades drugs before they reach their target location, owing to the presence of drug-metabolizing enzymes and of active efflux transporters (AETs) that carry drugs back into the blood (Regina et al., 2001; Ohtsuki and Terasaki, 2007; Papademetriou and Porter, 2015) (Figure 1). The tight junctions in the BBB are mainly composed of claudins and occludins (Nitta et al., 2003; Abbott et al., 2010; Haseloff et al., 2014). Claudin 5 is critical for the restriction of small molecules (<800 daltons), and the loss of some claudins, like claudin 3, is related to increased BBB permeability in tumor vasculature and autoimmune encephalomyelitis (Wolburg et al., 2003). In contrast, the BBB remains intact in infiltrating gliomas or micrometastatic tumors, indicating that it is crucial to modulate BBB permeability in these regions. Transport across the BBB is selective for molecules smaller than 12 nm and is finely regulated; there are mainly two types of transport, carrier-mediated transport (CMT) and receptor-mediated transport (RMT) (Papademetriou and Porter, 2015) (Figure 1).

Carrier-Mediated Transport

The transport of energy-production molecules like glucose and lactate, nucleosides, and ions through the cell membrane by facilitated or active transport can be mediated by CMT. In addition, CMT aids the clearance of neurotoxic substances, metabolites of brain function, and neurotransmitters through efflux from the brain back into the blood.

FIGURE 1 | The blood-brain barrier (BBB) and the glioblastoma multiforme (GBM) niche. The BBB is selective and restrictive to a variety of molecules. Endothelial cells and the basement membrane, together with strong lateral tight junctions, maintain the selective permeability. A possible strategy to reach the glioma core is to use nanocarriers coupled with target-guiding molecules that, for example, bind to the membrane receptors of either the tumor niche-infiltrated BBB or the healthy BBB, and which carry nanomedicines. Glioblastoma is composed of heterogeneous cell populations, and the cancer stem cells are responsible for treatment resistance.

The biochemical modification of small molecules enables changes in parameters like solubility, stability, lipophilicity, and recognition by AETs. Redesigning a drug to improve its recognition by CMT and its transport through the BBB can be achieved by coupling the drug to a regular CMT substrate. The molecular structure of the drug should mimic that of the endogenous CMT substrate (e.g., sugars, amino acids, nucleosides) with pharmacologic activity preserved, but preferably should not affect CMT function, to avoid possible side effects (Misra et al., 2003; Papademetriou and Porter, 2015).

Receptor-Mediated Transport

In contrast to CMT, RMT promotes the permeability of some macromolecules into the brain, such as lipoproteins, hormones, nutrients and growth factors (Papademetriou and Porter, 2015).
The RMT process is mediated by the binding of the molecule to a cell-surface receptor present on the luminal surface of endothelial cells, followed by endocytosis, transport of the vesicles to their destination, and subsequent exocytosis of the vesicle into the extravascular space (Abbott et al., 2010; Georgieva et al., 2014; Papademetriou and Porter, 2015). The RMT-targeting approach requires the coupling of a specific ligand (e.g., an antibody or antibody fragment, a synthetic peptide, or a natural ligand), which has affinity for an endocytic receptor expressed on the endothelial cell surface, to the chemotherapeutic drug or to a drug-loaded nanocarrier. Binding to the targeted receptor induces intracellular signaling cascades mediating invagination and formation of membrane-bound vesicles in the cell interior, followed by intracellular vesicular trafficking to the abluminal endothelial plasma membrane (Abbott et al., 2010; Georgieva et al., 2014; Papademetriou and Porter, 2015).

NANOCARRIERS

The discussion of nanosystems in this review mainly focuses on liposomes, polymeric nanoparticles, solid lipid nanoparticles, polymeric micelles and dendrimers as carriers (Figure 2).

Liposomes

Lipid-bilayer vesicles, namely liposomes, are popular drug-delivery systems owing to their easy preparation, their capability to encapsulate a wide array of drugs, their biocompatibility, efficiency and non-immunogenicity, their enhancement of the solubility of chemotherapeutic agents, and their commercial availability. The clearance of liposomes by macrophages is relatively fast, so modifications of the liposome surface or size can extend their circulation time. Specificity for the nervous system is possible by coupling liposomes to aptamers or monoclonal antibodies against transferrin receptors (OX-26), glial fibrillary acidic protein or the insulin receptor (Kanai et al., 2014).

FIGURE 2 | Nanocarrier characteristics. Nanocarriers have four main features: a shell that can vary in type, length, density and crosslinking molecules; a core, which can be hydrophobic, anionic or cationic depending on which crosslinked molecule needs to be carried; surface targeting molecules, which can be antibodies, proteins, vitamins, peptides and aptamers; and lastly the cargo, which can be chemotherapeutics, nucleic acids, proteins, fluorophores or other imaging dyes. Nanocarriers are usually divided into subtypes: liposomes (lipid bilayer structures), polymeric micelles (lipid monolayers), dendrimers (highly branched structures), and nanoparticles (organic or inorganic). Recently, new strategies have focused on carrying bio-cargoes, such as plasmids coding for proteins involved in programmed cell death, or agents to silence genes important for cancer stem cell survival through genetic knockout using the CRISPR/Cas9 system or genetic knockdown using siRNAs.

The use of liposomes for gene delivery has been demonstrated by injecting liposomes carrying a plasmid coding for the green fluorescent protein into rats. Also, in tumor therapy, liposomes carrying small interfering RNA (siRNA) have been deployed, and a distearoyl phosphoethanolamine-polycarboxybetaine lipid, which promotes endosomal/lysosomal escape, was developed for systemic delivery of siRNA (Dai et al., 2014; Ozpolat et al., 2014). Furthermore, trafficking cargo across the BBB is improved when using nanocarriers that target CMT.
For example, liposomes targeting glucose transporter 1 (GLUT1) enhanced the transport of daunorubicin (Ying et al., 2010), while doxorubicin delivery to the brain was enhanced 4.8-fold when the drug was formulated in liposomes targeting glutathione transporters (2B3-101). This approach is particularly suitable for the delivery of small molecules rather than large ones. 2B3-101 has reached clinical trials, as detailed later in this review (Birngruber et al., 2014; Papademetriou and Porter, 2015).

Nanoparticles (NPs)

Nanoparticles (NPs) have also been widely studied because of their high drug-loading capacity and the protection they offer against chemical and enzymatic degradation. NPs have enormous medical potential and have emerged as a major tool in nanomedicine compared with conventional drug delivery methods. NPs are solid colloidal particles made of polymers, ranging from 1 to 1000 nm, and are divided into two types, nanospheres and nanocapsules (Couvreur et al., 2002). An interesting application of NPs is the magnetic format: NPs made of a magnetic core of iron oxide (e.g., magnetite) and a biocompatible covering shell of dextran or starch, to be distributed through an organism that is exposed to a localized magnetic field. In vivo GBM models have shown that magnetic NPs are promising. Detailed reviews concerning NP applications have already been published (Laquintana et al., 2009; Upadhyay, 2014).

Polymeric Micelles

Polymeric micelles range from 10 to 100 nm and have a core-shell architecture like NPs. They spontaneously self-assemble in aqueous solutions at concentrations higher than a threshold termed the critical micelle concentration. The core is constructed mainly of hydrophobic polymer blocks such as poly(caprolactone), poly(propylene glycol) (PPG), or poly(D,L-lactide), together with a hydrophilic shell made of poly(ethylene glycol) (PEG). Pluronic micelles (PEG-PPG-PEG) have emerged as good candidates for brain therapy, since they can easily cross the BBB and inhibit drug efflux. Micelles carrying paclitaxel were able to increase the toxicity of the chemotherapeutic drug in the LN18 human glioblastoma cell line (Liu et al., 2008; Laquintana et al., 2009).

Dendrimers

Dendrimers are highly branched polymer molecules smaller than 12 nm. Conjugation to dendrimers confers enhanced delivery across the BBB; for example, polyether-copolyester dendrimers loaded with methotrexate and D-glucosamine and tested against avascular glioma spheroids showed increased methotrexate potency. Even after a week at submicromolar concentrations, dendrimers neither affect the viability of neural cells nor induce local microglial activation. A better understanding of dendrimer distribution patterns may facilitate the design of nanomaterials for future clinical applications (Laquintana et al., 2009).

Metal Particles

Metal particles have been studied extensively because they have been demonstrated to enhance the susceptibility of tumor tissues to injury induced by radiation exposure, and are therefore promising candidates for nanomedicine. Application of gold NPs prior to radiation produced distinctive DNA damage in tumors and improved the survival of tumor-bearing animals (Joo et al., 2008; Bobyk et al., 2013). Previous research has suggested that the enhanced radiosensitization effects were driven by low-energy electron emission from gold particles in a dose-dependent manner (Zheng et al., 2008).
Similarly, the radiosensitization effect of silver NPs is also attributed to their interactions with the DNA repair system, which eventually leads to the arrest of DNA duplication and to cell apoptosis (Xu et al., 2009). After irradiation, titanium dioxide (TiO2) induces tumor cell death by increasing the production of free radicals. The amount of reactive oxygen species generated depends on the dose of TiO2 applied as a radiosensitizer when the cells are exposed to X-rays.

EMERGING STRATEGIES USING NANOCARRIERS

Hyperthermia

Hyperthermic treatment strategies use a magnetic medium, such as thermoseeds or magnetic NPs, to apply moderate heating in the specific area of the organ where the tumor is located. The combination of carbon nanotubes (CNTs) with near-infrared radiation (NIR) was effective in debulking a tumor in rats, leading to tumor shrinkage without recurrence. Furthermore, this protocol could eliminate glioma CSCs as well as both drug-sensitive and drug-resistant glioma cells, owing to the broad-spectrum absorption of CNTs by gliomas. In contrast, normal cells were barely affected, demonstrating their lower uptake of CNTs (Santos et al., 2014). Hyperthermia in glioma treatment remains controversial because it is technically difficult to impose a lethal dose of heat on all cell populations within the glioma mass. The heterogeneous response to different grades of hyperthermia may change the biological nature of the surviving tumor cells. For example, following moderate thermal preconditioning, human glioma cell lines demonstrate increased proliferation in vitro and aberrant aggressiveness in a xenograft model. The transient increase in growth of the CD133 subtype of gliomas after thermal preconditioning indicates that there might be a compensation for the loss of the thermally sensitive sub-population (Zeng et al., 2016). To further increase selectivity for CSCs, antibodies against the CD133 surface marker can be employed as a targeting moiety. Photothermal therapy using single-walled carbon nanotubes (SWNTs) conjugated with anti-CD133 antibodies (CD-SWNTs) produced targeted lysis of CD133+ GBM CSCs, while CD133− GBM cells remained intact in vitro. A discernible shrinkage of the tumor was observed after subcutaneous NIR laser irradiation following CD-SWNT administration in this particular ectopic GBM tumor model. NIR photoimmunotherapy, employing a monoclonal CD133 antibody (mAb) conjugated to an IR700 phototoxic phthalocyanine dye, permitted the spatiotemporally controlled elimination of tumor cells through specific image guidance. Rapid cell death was observed after intravenous administration of the CD133 mAb followed by harmless NIR light applied through the intact skull. This proof-of-principle study offers a promising theranostic agent that can be applied in intraoperative imaging or histopathological evaluation to define tumor borders, as well as to eradicate CSCs specifically and efficiently (Jing et al., 2016).

Antitumor Antibiotics

Antitumor antibiotics are a class of chemotherapeutics that interfere with DNA and slow or stop cancer cells from multiplying. Antitumor antibiotics show promise in treating gliomas. For example, doxorubicin (trade name: Adriamycin®), daunorubicin (trade name: Cerubidine®), and bleomycin (trade name: Blenoxane®) show powerful anticancer activity against glioma cells in vitro. Their efficacy in vivo was reported to be poor, however, which was largely attributed to their inability to penetrate the BBB (von Holst et al., 1990).
However, once these antibiotics are encapsulated in PEGylated liposomes (for example, Doxil® is a PEGylated form of liposomal doxorubicin), prolonged survival of treated animals is observed, following an enhanced local antitumor effect (Sharma et al., 1997). Overall, however, the antitumor effects of liposomal doxorubicin, daunorubicin, or bleomycin have been unsatisfactory against glioma in patients (Fabel et al., 2001; Fiorillo et al., 2004). To improve the efficacy of liposomal formulations against brain tumors, more effective drug-delivery strategies are clearly needed. For example, the combination of ultrasound-induced microbubbles, which create transient local BBB permeability, with liposomal doxorubicin has been reported to have a significant antitumor effect (Aryal et al., 2013, 2015).

Engineering of the Cell Genome

Recently, the advance of new technologies that facilitate the engineering of the cell genome, like clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 and silencing RNA, has provided new methods to deliver nucleic acids to the brain, in particular for glioma treatment. For this purpose, positively charged and degradable polymers, including chitosan, poly(beta-amino esters), poly(amidoamines), and many other cationic polymers, have been used because their cationic nature allows complexation with negatively charged molecules like DNA or RNA. Inorganic NPs are well suited for imaging and drug delivery purposes, because their synthesis is easily tunable and reproducible (Cardoso et al., 2007; Tzeng and Green, 2013). Some examples are injectable superparamagnetic iron oxide NPs, which are used as contrast agents for magnetic resonance imaging, and gold NPs that are used to carry a conjugated drug. Coated spherical gold NPs carrying a highly oriented layer of siRNA are well protected from nuclease degradation and provide highly efficient knockdown. Liposomes have also been used to deliver the IFN-β gene in mouse models of glioma, resulting in the induction of an immune response and reduced tumor growth. Five malignant glioma patients were treated using liposomes carrying the IFN-β gene in a pilot clinical trial, and four patients showed >50% tumor reduction or stable disease (Yoshida et al., 2004). Moreover, since Apo2L/tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) is fairly specific for cancer cells, a TRAIL plasmid encapsulated in PEG-conjugated PLA NPs (<120 nm) was injected intravenously and increased the median survival time (Hawkins, 2004; Lu et al., 2006). To avoid GBM recurrence, the protein product of the delivered gene should be designed to be active in CSCs. In addition, the construct can be placed under the control of a cancer-specific promoter, such as survivin or PEG3, to ensure that healthy cells are not transfected or transduced (Su et al., 2005; Van Houdt et al., 2006). The delivery of miRNAs, such as miR-124 and miR-137, can induce terminal differentiation and cell death in murine CSCs in vitro (Silber et al., 2008). Moreover, Gangemi et al. (2009) used an shRNA-expressing plasmid in a retroviral vector for in vitro knockdown of SOX2, leading to inhibited CSC proliferation, self-renewal, and tumor-initiating capacity. Recently, modified siRNAs have been developed that are protected from nuclease degradation and can be readily taken up by cells.
This type of modification allows researchers to focus on developing engineered NPs with a prolonged circulation time and site-specific delivery, instead of on siRNA protection, thus accelerating clinical translation.

CLINICAL TRIALS

Few clinical trials using nanotherapies to target GBM have been conducted; this review focuses on glioma treatment, and the information about these clinical trials is summarized in Table 1. Ang-1005 (also named GRN-1005) was designed to circumvent the BBB and has been evaluated in several clinical trials. Ang-1005 consists of paclitaxel conjugated to the RMT ligand angiopep-2, which targets LRP1. In a phase I trial, a maximum tolerated dose of 650 mg/m2 was established. Pharmacokinetic and tumor-resection analyses showed that Ang-1005 remained intact in blood plasma and reached concentrations sufficient for cytotoxicity in tumor tissue (Thomas et al., 2009; Papademetriou and Porter, 2015; Regina et al., 2015). To the best of our knowledge, nanocarrier-based RMT-targeting strategies in GBM treatment have yielded very limited clinical trial outcomes. PEGylated liposomal doxorubicin without RMT targeting was evaluated in phase I studies in GBM patients, showing no improvement in progression or survival (Papademetriou and Porter, 2015). In phase I/II clinical trials, patients with solid tumors and metastatic brain cancer or malignant recurrent glioma were treated with 2B3-101, a PEGylated liposomal doxorubicin nanocarrier employing glutathione to target glutathione transporters (CMT-based targeting) (Birngruber et al., 2014). SGT-53 is a nanocarrier composed of cationic liposomes that encapsulate a plasmid for the p53 tumor suppressor and that display a TfR-targeting scFv. A phase II clinical trial of SGT-53 combines it with TMZ for patients with recurrent malignant gliomas, aiming to evaluate tumor cell death after accumulation of the drugs, antitumor efficacy, safety and overall survival (Yu et al., 2004; Senzer et al., 2013).

TABLE 1 | Clinical trials of nanotherapies in glioma (summary).
- NANOTHERM (magnetic NPs): Magnetic hyperthermia plus radiotherapy with Nanotherm for the treatment of glioblastoma in 14 patients (Maier-Hauff et al., 2007). Hyperthermia plus radiotherapy with Nanotherm for the treatment of glioblastoma in 60 patients; average survival following first recurrence was 13.2 months, compared with 6 months with conventional treatments (Maier-Hauff et al., 2011).
- IFN-β (IFNB gene therapy via cationic liposomes): A pilot clinical trial of IFNB gene therapy demonstrated its feasibility and safety in glioma treatment (Yoshida et al., 2004). In a phase I clinical trial of IFNB gene therapy for glioma, histological examinations of autopsy samples showed necrotic changes in many tumor cells, and immunohistochemistry identified numerous CD8+ lymphocytes and macrophages infiltrating the tumor and surrounding tissues, while CD34-immunoreactive vessels were notably decreased in the vector-injected brain (Wakabayashi et al., 2008).
- INTERLEUKIN 12 (replication-disabled Semliki Forest viral vector carrying the human interleukin-12 (IL-12) gene, encapsulated in cationic liposomes, LSFV-IL12): A phase I/II clinical study in adult patients with recurrent GBM aimed at evaluating the biological safety, maximum tolerated dose, and antitumor efficacy of LSFV-IL12 (Ren et al., 2003).
- DAUNORUBICIN (liposome): DaunoXome, a liposomal formulation of daunorubicin, achieved and maintained potentially cytotoxic levels in glioblastoma for a long time, in association with low-level systemic exposure (Zucchetti et al., 1999). High concentrations of daunorubicin and daunorubicinol were found in malignant gliomas after systemic administration of liposomal daunorubicin (Albrecht et al., 2001). A combination of liposomal daunorubicin and carboplatin plus etoposide produced a major response, and the 29-month progression-free survival was 38%, with little and transient hematological toxicity (Fiorillo et al., 2004).
- DOXORUBICIN (liposome): Stabilization of the disease was observed in 54% (7/13) of patients; partial and complete responses were not observed. Median time-to-progression was 11 weeks, and progression-free survival at 12 months was 15%. Median overall survival (OS) after liposomal doxorubicin therapy was 40.0 weeks, whereas the median OS after diagnosis reached 20.0 months (87.0 weeks). Liposomal doxorubicin was well tolerated, with the main side effects being palmoplantar erythrodysesthesia, occurring in 38% of patients, and myelotoxicity (World Health Organization grade 3-4) in 31% of patients (Fabel et al., 2001). The investigated combination of PEG-doxorubicin with prolonged administration of temozolomide was tolerable and feasible, but neither resulted in a meaningful improvement of patient outcomes (Beier et al., 2009). A phase II trial with 40 patients using a combination of temozolomide and pegylated liposomal doxorubicin was well tolerated but did not add significant clinical benefit regarding 6-month progression-free survival and overall survival (Ananda et al., 2011).
- P53 (liposomes encapsulating a normal human wild-type p53 DNA sequence in a plasmid backbone): Phase II study of combined temozolomide and targeted p53 gene therapy for the treatment of patients with recurrent glioblastoma; this study is currently recruiting participants.
- 5-FLUOROURACIL (injectable 5-fluorouracil-releasing microspheres): Phase II study with a total of 95 patients; safety was acceptable, but overall survival was not significantly improved (Menei et al., 2005).

FIGURE 3 | Proposed strategy. This proposed therapeutic strategy targeting the GBM cancer stem cells (CSCs) as a novel treatment would use liposomes as nanocarriers, because they can shield and carry molecules of different sizes and charges. Liposomes, with a shell coated with aptamers or antibodies specific to CSC markers such as CD133, CD15, CD44, integrin-α6, or A2B5, would carry antitumor antibiotics (doxorubicin) or gene-modulating cargoes (SOX2-targeting constructs, TRAIL, miR-124, miR-137, and IFN-β) to modulate tumor survival/death gene expression. Alternatively, the use of gold nanoparticles targeting brain markers, like glial fibrillary acidic protein, is recommended to bypass the BBB and deliver genome editing tools.

Some trials are now using gene-silencing therapies, including siRNA coupled to D3 and D5 polylysine dendrimers and melittin-grafted HPMA oligolysine-based copolymers, for intravenous, intracerebroventricular, or intranasal administration to the CNS. A nanoliposomal formulation of irinotecan (CPT-11) is also in phase I trials for glioma (Krauze et al., 2007). Moreover, magnetically induced hyperthermia, which uses a magnetic medium such as thermoseeds or magnetic NPs to produce moderate heating in the specific area of the organ where the tumor is located, is under investigation for malignant glioma, prostatic cancer, metastatic bone tumors and some other malignant tumors.
Thermoseed magnetic induction of hyperthermia for the treatment of brain tumors was first reported by Kida et al. in 1990. An Fe-Pt alloy thermoseed with a length of 15-20 mm, a diameter of 1.8 mm and a Curie point of 68-69 °C was used in seven cases of metastatic brain tumor two to three times a week, with the tumor tissues reaching 44-46 °C during the treatment. This resulted in two cases of complete response and one case of partial response. Kobayashi et al. (1991) used a thermoseed with a Curie point of 68 °C for the treatment of 23 patients with brain tumors and reported an overall response rate of 34.8% (O'Reilly and Hynynen, 2012; Luo et al., 2014). PLA is a biodegradable and hydrophobic polymer that can be used as a carrier for hydrophobic chemical drugs in anti-tumor research. Monomethoxy poly(ethylene glycol)-block-poly(D,L-lactide) loaded with paclitaxel to form Genexol®-PM has been trialed clinically and is now commercially available for the treatment of breast cancer, ovarian cancer, and non-small cell lung cancer (Kim et al., 2007; Lee et al., 2008). Chen et al. used PEG-PLA as a paclitaxel delivery carrier. The NPs were coupled with the tLyp-1 peptide, which has a high affinity for neuropilin, to target both glioma cells and endothelial cells. The tLyp-1-conjugated NPs showed greater penetration into C6 glioma spheroids, enhanced drug access into solid tumors, and prolonged survival time to 37 days in intracranial C6 glioma mice, compared with approximately 20 days in controls. However, smart structural design and modification are required to achieve the proper degradation rate of these bioactive materials (Hu et al., 2013).

CONCLUSION

In summary, elucidating the biological nature of CSCs offers a new strategy for targeted cancer therapy. Interdisciplinary efforts to develop new nanocarriers that can bypass the BBB, protect the drug from being degraded, and remain specific for tumor cells or CSCs are ongoing. Some groups prefer to focus on developing new drugs that can efficiently kill CSCs, which are responsible for treatment resistance and a poor prognosis in glioblastoma, while other research groups are using modern and pioneering molecular biology tools, such as CRISPR/Cas9 and siRNA. To develop a novel treatment based on targeting CSCs, an effective strategy would use liposomes as nanocarriers, because of their ability to shield and carry molecules of different sizes and charges. These liposomes should have a shell coated with aptamers or antibodies specific for CSC markers such as CD133, and would carry antitumor antibiotics (doxorubicin) or genome editing tools that would modulate the expression of genes important for tumor survival, such as SOX2. Another possibility is the use of gold NPs targeting brain markers, such as glial fibrillary acidic protein, to facilitate brain penetration and deliver siRNA to knock down tumor survival and proliferation genes (Figure 3). Finally, some clinical trials have succeeded in testing new nanotechnologies that may become available to patients in the near future.

AUTHOR CONTRIBUTIONS

TG, IH, LW, and XZ summarized the literature and drafted the manuscript. TG, XZ, and LW revised and edited the manuscript. XZ and LW supervised the work. TG and XZ initiated, finalized, and submitted the manuscript.
Thermal and maturation history for Carboniferous source rocks in the Junggar Basin, Northwest China: implications for hydrocarbon exploration

The reconstruction of thermal history is an important component of basin evolution and hydrocarbon exploration. Based on vitrinite reflectance data, we integrate the paleo-temperature gradient and paleo-heat flow methods to reconstruct the thermal history of the Junggar Basin. Compared with the present thermal state, the Junggar Basin experienced a much higher heat flow of ca. 80-120 mW/m2 during the Carboniferous. This feature can be attributed to large-scale volcanic events and related thermal effects. The hydrocarbon maturation history of the Carboniferous source rocks indicates that the temperature rapidly reached the threshold of hydrocarbon generation during the Late Carboniferous and has never achieved such a high level since then. This characteristic resulted in the early maturation of hydrocarbons in Carboniferous source rocks. Meanwhile, the results reveal that hydrocarbon maturities differ among the various tectonic units of the Junggar Basin. The kerogen either rapidly broke through the dry gas period, so that cracking of gas occurred, or remained in the oil maturation window forming oil reservoirs, depending on the tectonic background and depositional environment. In this study, we present the thermal and hydrocarbon maturation history since the Carboniferous, which has important implications for further hydrocarbon exploration in the Junggar Basin.

Introduction

With the growing demand for hydrocarbon resources and the deepening of exploration activities, the target stratigraphy for present oil and gas exploration has shifted to the deep (>4500 m) and ultra-deep zones (Sun et al. 2013; Zhao et al. 2013), which are currently important strategic areas for hydrocarbon prospecting and exploitation. The Junggar Basin is one of the largest oil and gas basins in western China and is characterized by a thickened basement crust, a low geothermal gradient, and widespread deep-seated strata (Pan et al. 1997; Qiu et al. 2008; Rao et al. 2013; Wang et al. 2001; Zhang et al. 2007). During recent decades, several deep Carboniferous oil and gas fields, such as the Wucaiwan, Shixi, and Karameli gas fields, have been discovered and exploited, which bodes well for the deep zone of the Junggar Basin (He et al. 2010; Li et al. 2009). The thermal evolution of a basin is closely linked to the generation, migration, accumulation, and preservation of hydrocarbons in traps, in combination with the burial history (Mashhadi et al. 2015). Hence, the thermal evolution of the source rock is essential for evaluating the petroleum prospectivity of the basin. The Junggar Basin is a multi-cycle superimposed basin and has experienced multi-stage tectono-thermal evolution since the Late Carboniferous. Previous researchers have conducted many studies on the thermal regime of the Junggar Basin and the evolution of the Carboniferous source rocks. The results can be summarized as follows: (1) the high geothermal gradient gradually decreased from the Carboniferous to the Cenozoic (Pan et al. 1997; Qiu et al. 2005; Wang et al. 2001); (2) the source rocks evolved early due to the high geothermal effect during the Late Paleozoic (Cao et al. 2005; Qiu et al. 2005);
and (3) Carboniferous pyroclastic rocks formed by large-scale volcanic activity are not only effective source rocks but also provide favorable reservoir conditions for later oil and gas accumulation (He et al. 2010). However, previous studies have mainly focused on the thermal history since the Permian, and only an approximate estimate of the thermal state during the Carboniferous has been provided. Meanwhile, the main controlling factors of thermal evolution and their relations with regional tectonic events have not been properly evaluated. There is also little understanding of the hydrocarbon maturation period of the Carboniferous source rocks in the Junggar Basin. In this study, we carry out 1D reconstructions of the thermal history recorded by extensive vitrinite reflectance (Ro) data from wells LN-1, MS-1, CS-2 and Ca-6, utilizing the paleo-geothermal gradient and paleo-heat flow methods. Moreover, we simulate in detail the thermal history and hydrocarbon maturation history of the Junggar Basin since the Late Carboniferous by basin modeling, and the thermal maturity stage of the Carboniferous source rock is established under the constraints of the regional geodynamic background. We also discuss the genesis of the locally high heat flow in the Carboniferous and the potential of the Carboniferous hydrocarbon system. The thermal and maturation history will help in understanding the tectonic evolution during the Carboniferous and has important implications for further hydrocarbon exploration in the Junggar Basin.

Geological background

The Junggar Terrane, which lies near the northwestern margin of China and is tectonically a part of the Central Asian Orogenic Belt, consists of the West Junggar Terrane, the East Junggar Terrane and the Junggar Basin (Fig. 1a) (Han et al. 2010; Li et al. 2016; Şengör et al. 1993; Xiao et al. 2008). The Junggar Basin is bounded by the Tianshan Mountains on the south and the Altai Mountains on the north (Fig. 1b), with an area of approximately 13 × 10^4 km2 (Ma et al. 2018). The West Junggar Terrane and East Junggar Terrane are mainly fold-thrust orogenic belts composed of Paleozoic accretionary complexes, volcanic arcs and high-grade metamorphic ophiolite zones (Xiao et al. 2008, 2011; Zhang et al. 2009). The NE West Junggar folded orogenic belt is a suture zone where the Junggar Terrane collided with the Kazakhstan plate. The East Junggar Terrane was formed during the northward subduction of the southern Paleo-Asian Ocean, and the North Tianshan fold orogenic belt might be a collision product between the Junggar plate and the Tarim Block (Li et al. 2015, 2016; Xiao et al. 2011).

Basin evolution

The Junggar Basin is a superimposed basin that has developed since the Late Carboniferous. Xiao et al. (2008) concluded that the basement of the Junggar Basin is mainly composed of volcanic arcs, accretionary complexes and trapped residual oceanic crust from the Paleozoic. He et al. (2013) determined that the SHRIMP U-Pb age of a Carboniferous andesitic tuff in the MS-1 well is 331.7 ± 3.8 Ma; thus, based on Hf isotope analysis, the Early Carboniferous was a crucial period spanning the closure of the Junggar Ocean and the rapid growth of new continental crust. A number of unconformities and tectonic layers were formed after multiple periods of tectonic movement, corresponding to the rifting stage from the Late Carboniferous to the Permian (Carroll et al. 2010), the Triassic-Paleogene intracontinental depression stage (Jolivet et al. 2010),
and the rejuvenated foreland basin stage since the Neogene (Fig. 2). The Junggar Basin is generally subdivided into six first-order tectonic units according to its basic structural characteristics (Fig. 1c; I-VI, respectively): the Wulungu Depression, Luliang Uplift, Western Uplift, Central Depression, Eastern Uplift and Southern Depression.

Sedimentary strata

The sedimentary cover of the Junggar Basin is well developed, and the depositional thickness can locally reach up to 15,000 m (Fig. 3). The Carboniferous, Permian, Triassic, Jurassic, Cretaceous and Cenozoic systems, from bottom to top, overlie pre-Carboniferous volcanic and metamorphic rocks (Cai 2009; Yang et al. 2012b). A transitional-facies sedimentary system accompanied by intermittent volcanic eruptions is recorded by the Late Paleozoic sediments, whereas the clastic sediments of Mesozoic and Cenozoic age record continental facies. The Carboniferous was a critical period for the cratonization of the Junggar Terrane, when two continental passive margins developed on the southern and northern margins and thick volcaniclastic rocks (Fig. 3) filled the rifts (He et al. 2010). The Carboniferous strata in the Junggar Basin were mainly deposited as marine-terrestrial alternating facies (Li et al. 2009). Only low-grade metamorphic sandstone, slate, siliceous slate and intermediate-acid volcanics of the Lower Carboniferous (C1) occur in the Altai region, whereas abundant sandstone, siltstone, tuff and volcanic rocks of the Lower to Upper Carboniferous (C1-C2) are widespread in other regions (Han et al. 2010; He et al. 2013; Jin et al. 2008; Yang et al. 2012b). Meanwhile, many recent chronological studies on Carboniferous strata yield zircon U-Pb ages from 350.0 ± 6.3 to 299.8 ± 5.2 Ma for different lithologies, including rhyolite, basalt, tuff and andesite (Tan et al. 2009; Yang et al. 2012b).

Carboniferous source rocks

Multiple depocenters formed during the Carboniferous in the Junggar Basin, and the volcaniclastic rocks can reach 2.0-5.0 km in thickness (He et al. 2010). The Dishuiquan Formation (C1d) and Batamay Formation (C2b) occur within or around the basin, and the thickness of effective source rocks can reach 400-800 m. The Carboniferous source rocks include organic-rich gray mudstone, tuffaceous mudstone, silty mudstone and tuffs, which were primarily deposited in a marine-terrestrial environment and contain type II-III kerogen (Wang et al. 2013). The total organic carbon (TOC) values of the Carboniferous source rocks vary dramatically and indicate great oil generation potential. Typically, the mudstones have an average TOC of 1.45%, the carbonaceous mudstones 15.53%, the coals 43.78%, and the tuffs 1.56% (Li et al. 2009). The maturity of these source rocks will be described in the following sections; overall, the Carboniferous source rocks are mainly in the highly mature to post-mature stage. The oil-source correlation results for the Carboniferous suggest that the oil from volcanic reservoir rocks within the C2b Formation generally correlates with the C1d source rocks, based on molecular and carbon isotopic data (Yu et al. 2014). However, a small amount of younger source rocks from the Middle Permian and Lower Jurassic can also feed the Carboniferous reservoirs in some regions (Yu et al. 2014). Overall, these source rocks could provide effective supplies of oil and gas for the Carboniferous pyroclastic and the Permian clastic reservoirs (He et al. 2010; Li et al. 2009; Wang et al. 2010).
Carboniferous reservoir

The Carboniferous hydrocarbons discovered so far are mainly hosted in the volcanic rocks of the Batamay Formation (C2b), although the sandy conglomerates of the Lower Carboniferous could be potential reservoirs. The Batamay Formation (C2b) contains grayish-green basalts, andesites and tuffs with minor gray sandstones, dark gray mudstones, carbonaceous mudstones and coals (Wang et al. 2010). These volcanic rocks can form weathering-crust and interior-interval reservoirs with favorable porosity and permeability, owing to long periods of weathering and leaching and to dissolution by organic acids or deep thermal fluids (He et al. 2010). In addition to the regional mudstone seals of the Triassic and Permian, the thick source rocks of the Batamay Formation (C2b) can also act as good seals, forming self-generating and self-storing reservoirs.

Methodology and data

The thermal state of a basin plays an important role not only in regional tectonic evolution but also in controlling the generation, migration, accumulation and preservation of oil and gas (Hudson and Hanson 2010). The principles for the reconstruction of the thermal history of a sedimentary basin can be summarized as follows:

1. On the lithospheric scale, the heat flow history of the basin can be determined by a mathematical and geophysical model, depending on the geodynamic mechanism of the basin and the heat transfer pattern from the Earth's interior. By adjusting the model parameters, the modeled results are brought into accordance with the observed temperature and subsidence characteristics.

2. On the basin scale, the thermal history can be quantitatively recovered through various indicators that record paleo-temperature information (Chang et al. 2016, 2018; Sweeney and Burnham 1990). The modeling methods fall into three categories: the stochastic inversion of a single sample, the paleo-geothermal gradient method based on a vertical profile, and the paleo-heat flow method based on a suite of downhole samples (Lerche et al. 1984; Tang et al. 2014).

Inversion method

Considering the complex geological background of the Junggar Basin, the paleo-geothermal gradient and paleo-heat flow methods (Lerche et al. 1984; Wang et al. 2001) are integrated to reconstruct the thermal history. (Table: the thermal conductivity values (K) used for the thermal modeling are from Rao et al. 2013.) We use the paleo-geothermal gradient method to define the thermal history before reaching the maximum paleo-temperature and invert the uncertain thermal history through various heat flow paths coupled with the burial history. The best-fit path coinciding with the thermal indicator data is selected as the final result. Inversion modeling is performed with the software Thermodel (Hu et al. 2001), constrained by a heat conduction and compaction model. Present geothermal gradients, present heat flow, thermal conductivity and heat production data are from Rao et al. (2013), while other related parameters, such as the compaction factor and porosity, are adopted from the software default values.

Thermal indicators

Considering the lack of fission track data in the Junggar Basin (Qiu et al. 2005; Wang et al. 2001), we mainly utilize vitrinite reflectance as the thermal indicator. Vitrinite reflectance (Ro) is one of the most popular thermal maturity indices for estimating the maximum temperature experienced by organic matter.
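Under steady-state one-dimensional conduction, the two inversion approaches are linked by Fourier's law, Q = K × (dT/dz): a paleo-geothermal gradient together with a bulk thermal conductivity implies a paleo-heat flow. The following is a minimal sketch of this relation, using assumed, illustrative values only (the actual conductivity and heat production data used in this study are from Rao et al. 2013):

```python
# Fourier's law links the paleo-geothermal gradient and paleo-heat flow
# methods under steady-state 1D conduction: Q = K * dT/dz.
# Both inputs below are assumed, illustrative values, not the study's data.
K = 2.1               # bulk thermal conductivity, W/(m K), assumed
dT_dz = 45.0 / 1000   # paleo-geothermal gradient, K/m (i.e., 45 C/km)

Q = K * dT_dz         # conductive heat flow, W/m2
print(f"Q = {Q * 1e3:.1f} mW/m2")  # ~94 mW/m2, within the ca. 80-120 mW/m2
                                   # Carboniferous range reported above
```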
Each Ro value can be translated into a maximum paleo-temperature value using the EASY%Ro chemical kinetics model (Sweeney and Burnham 1990). For this paper, 633 Ro data points were systematically compiled from 30 wells, mainly provided by the PetroChina Xinjiang Oilfield Company. Histogram distributions of the Ro data are drawn for the Jurassic, Triassic, Permian and Carboniferous (Fig. 4), and the detailed Ro results are given in the supplementary material. The relatively low average Ro values of the Jurassic and Triassic Formations indicate low-maturity organic matter and are consistent with the low thermal background since the Mesozoic (Cao et al. 2005; Qiu et al. 2008). However, the Carboniferous Ro values show a wider distribution and relatively high thermal maturity. Meanwhile, the differences in the frequency distributions of Ro values between the Late Paleozoic and Meso-Cenozoic rocks indicate completely different thermal regimes. The difference in current maturity within the Carboniferous source rocks is clear and spans low to high maturity. For example, the Ro values of the Dishuiquan Formation (C1d) range from 0.56% to 0.63% in well Cai-28, from 1.69% to 1.76% in well LN-1 and from 1.13% to 4.15% in well MS-1 (Fig. 1c). The Batamay Formation samples (C2b) in the northwestern basin have moderate maturity, with Ro between 0.86% and 1.18%, mainly in the oil maturation window, whereas the formation reaches the overmature phase in other regions (He et al. 2010). Four representative vitrinite profiles show an Ro "jump" or "break" at disconformities, caused by uplift and erosion of the underlying strata or by a change in the geothermal gradient, which establishes the foundation for quantitative reconstruction of the thermal history of the region.

Thermal history of well LN-1

The simulated results of well LN-1, located in the Luliang Uplift (Fig. 1c, II), are presented to introduce the specific reconstruction procedure. The formations and lithologies in well LN-1 are shown in Table 1. The well was drilled from the Oligocene unit to the Carboniferous unit, with a total depth of 4905.5 m. First, the paleo-geothermal gradient method is utilized to acquire the maximum paleo-geothermal gradient and the erosion thickness at the main unconformities. Several unconformities or disconformities are observed in well LN-1, but two are major: the absence of the Upper Carboniferous to Middle Permian and of the Neogene to present units. Thus, the sedimentary strata (Table 1) can be approximately divided into two subsections: Paleogene to Permian (subsection I) and Carboniferous (subsection II). The paleo-temperature profiles are derived from the Ro data of the different layers (Fig. 5a). The paleo-temperature gradient (dT/dz) for each subsection can be obtained using a linear least-squares fit to the paleo-temperature profile (Fig. 5b). The eroded thickness is calculated by dividing the difference between the intercept of the paleo-temperature profile at the top unconformity (T_i) and the surface temperature (T_s) by the estimated paleo-temperature gradient, using the equation E_i = (T_i − T_s)/(dT/dz)_i, where i denotes the subsection. The results indicate that the paleo-temperature gradient and erosion thickness reach 44.9 °C/km and 2550 m, respectively, for subsection II, and 26.5 °C/km and 300 m, respectively, for subsection I (Fig. 5c, d).
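A minimal numerical sketch of this procedure follows. The depths and Ro-derived paleo-temperatures are illustrative values chosen to reproduce the subsection II result, not the measured LN-1 data, and the surface temperature is assumed:

```python
import numpy as np

# Sketch of the paleo-geothermal gradient method for one subsection.
# z is depth below the top unconformity; T_max are illustrative EASY%Ro-derived
# maximum paleo-temperatures (not the measured LN-1 data); T_s is assumed.
z_km = np.array([0.2, 0.4, 0.6, 0.8, 1.0])              # depth, km
T_max = np.array([138.5, 147.5, 156.4, 165.4, 174.4])   # deg C

# Linear least-squares fit: T(z) = (dT/dz) * z + T_i
grad, T_i = np.polyfit(z_km, T_max, 1)                  # deg C/km, deg C

# Eroded thickness at the top unconformity: E = (T_i - T_s) / (dT/dz)
T_s = 15.0                                              # deg C, assumed
E_km = (T_i - T_s) / grad

print(f"dT/dz = {grad:.1f} C/km, erosion = {E_km * 1e3:.0f} m")
# -> dT/dz ~ 44.9 C/km and erosion ~ 2550 m, matching subsection II
```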
Meanwhile, the paleo-heat flow of well LN-1 decreases from 95.8 mW/m2 in the Late Carboniferous to 54.2 mW/m2 at the end of the Late Paleogene (Fig. 5d). Next, the paleo-heat flow results and eroded thicknesses defined by the paleo-geothermal gradient method are imported into the software Thermodel. We can then screen the best-fit eroded thickness (He) and paleo-heat flow (Q) through multiple iterations. The detailed burial and thermal histories (Fig. 6a, c) show that both a high deposition rate (80 m/Ma) and a high heat flow (~100 mW/m2) occurred in the Early Carboniferous. The heat flow gradually decreased to approximately 80 mW/m2 in the Early Triassic, along with a rapid erosion event, followed by a relatively stable depositional period since the Late Triassic. The heat flow was approximately 50 mW/m2 in the Late Cretaceous, which is close to the present level. Figure 6b verifies the reliability of the inversion results by comparing the modeled Ro values with the measured Ro values.

Thermal characteristics of the Junggar Basin

For superimposed basins, the thermal histories recorded by various wells should be jointly analyzed to form a complete, integrated thermal sequence. Therefore, we reconstruct the thermal histories (Fig. 7) of three other wells in the Junggar Basin: MS-1 (III, Central Depression), CS-2 (VI, Eastern Uplift) and Ca-6 (V, Southern Depression) (Fig. 1c). Well MS-1 is in the Central Depression and has a drilling depth of 7500 m, making it one of the deepest wells in the Junggar Basin. The reconstructed thermal history of the Central Depression is characterized by high paleo-heat flow (>120 mW/m2) in the Carboniferous, followed by rapid uplift and erosion from 290 to 260 Ma (Fig. 6a). The eroded thickness is close to 4.0 km, and the heat flow decreased to approximately 85 mW/m2 during this stage. A similar pattern is recorded in the other modeled wells. In general, the Junggar Basin has experienced constantly decreasing heat flow since the Late Carboniferous. The reconstructed thermal histories of all modeled wells indicate paleo-heat flows of 80-100 mW/m2 from the Late Carboniferous to the Early Permian, decreasing to the present level of 40-50 mW/m2. The heat flow evolution of the tectonic units in the basin shows lateral heterogeneity. During the Carboniferous, the highest heat flow of the basin was in the Eastern Uplift, but during the Permian the Central Depression showed the highest heat flow, while the heat flow of the southern part of the basin always remained relatively low. The heat flow distribution of the entire basin has been consistent with the present distribution since the Triassic. Previous researchers mainly focused on the thermal history and erosion events after the Permian (Wang et al. 2013). Li et al. (2009) and Qiu et al. (2005) estimated the eroded thickness at about 0.5-1.5 km between the Late Carboniferous and the Early Permian in the Central Depression, and these results are reasonable for the Wucaiwan and Dongdaohaizi sags. Our burial history indicates that the eroded thickness since the Late Carboniferous could reach 2.0-4.0 km, because the modeled wells are located in uplifted regions (Fig. 1c). Well Ca-6 even reveals an unconformity between the Late Carboniferous and the Jurassic in the North Tianshan Thrust Belt.

Thermal maturity history

The thermal evolution of source rocks refers to the maturity state of source rocks in different geological periods and is an important part of evaluating hydrocarbon generation. Based on the reconstruction of the burial and heat flow history (Fig. 7),
The geothermal and maturity histories of well CS-2 (Fig. 8) show that although the burial depth of the Lower Carboniferous (C1) was only approximately 1200 m, the temperature had already exceeded 90 °C by 320 Ma, which broke through the oil maturation threshold and quickly reached the oil window peak. The C1 strata reached their highest temperature of approximately 150 °C in the Late Carboniferous and entered the late oil window period. Since then, the temperature of the C1 has never exceeded 150 °C, owing to later erosion and the decreasing thermal background. Hence, the Lower Carboniferous source rocks of well CS-2 matured rapidly during the Late Carboniferous-Early Permian and have remained in the late oil window period since ca. 250 Ma (Fig. 8). Meanwhile, the results indicate that the hydrocarbon maturation history differs greatly among the tectonic units of the Junggar Basin (Fig. 9). The Ro of the Carboniferous strata in well MS-1 has been close to 4.0% through geological time, corresponding to the overmature stage.

Genesis of high heat flow in the Carboniferous

The reconstructed thermal history of the Junggar Basin is closely related to the geodynamic background of the region. Previous research and our modeling (Fig. 7) indicate that the Junggar Basin had high heat flow in the Carboniferous, with an average greater than 80 mW/m² (Qiu et al. 2005). The heat flow in some regions (e.g., the Mosuowan swell) could even have reached 100-120 mW/m², similar to the heat flow at mid-oceanic ridges and in active volcanic areas. These values may be attributed to intense plate tectonics and simultaneous large-scale volcanic activity (Yang et al. 2012a). The Junggar Basin went through an important transition from subduction and accretion to collision and amalgamation in the Carboniferous (Li et al. 2015; Xiao et al. 2008). Because of the subduction of oceanic crust and the activation of continental crust (Fig. 10b), frequent and large-scale subduction-related volcanic rocks were well developed in the basin as the Junggar Ocean disappeared (Tan et al. 2009; Xiao et al. 2011; Zhang et al. 2009; Zheng et al. 2007). Figure 10a gives the distribution of the Carboniferous volcanic rocks, and our modeled wells are located in these volcanic fields (He et al. 2010). The age distribution of igneous rocks in the Junggar Basin during the Carboniferous (Fig. 10c) shows that the main peak of igneous activity was from 335 to 300 Ma, suggesting that these magmas released abundant heat and formed a locally high geothermal field. It is worth mentioning that the Tarim Basin neighbors the Junggar Basin (Fig. 1a) and had a similar thermal history in the Paleozoic (Chang et al. 2014, 2017). The Tarim Basin experienced intracratonic rifting from the Middle Carboniferous to the Permian, and its Early Permian Large Igneous Province has been systematically studied (Xu et al. 2014). Such intense magmatic activity could affect the thermal regime of these basins, and the thermal gradient could reach 30-40 °C/km in the Tarim Basin during this period (Qiu et al. 2012).
This situation is also similar to the thermal regime related to the eruption of the Emeishan Large Igneous Province at 259 Ma in the Sichuan Basin (Xu et al. 2001; Zhu et al. 2010). Furthermore, the rapid uplift and extensive erosion recorded by the Late Carboniferous unconformity are consistent with our modeling results (Fig. 7) and are closely related to the compressional tectonic system that formed in the basin during collision (Li et al. 2015, 2016). Hence, we suggest that the high heat flow of the Junggar Basin in the Carboniferous resulted from large-scale volcanic events and the related thermal effects.

Hydrocarbon maturation style of Carboniferous source rocks

According to the thermal maturation histories of the different wells (Fig. 9), we deduce the hydrocarbon maturation styles of the Carboniferous source rocks in the Junggar Basin (Fig. 11). The general hydrocarbon maturation styles can be grouped into two types. (1) In the first style (Fig. 10a), the kerogens quickly broke through the oil window and the gas window, reaching the overmature phase with Ro values of more than 3.0%. (2) Hydrocarbon maturation in the Wucaiwan sag (Eastern Uplift) and the Sikeshu sag (Southern Depression) also started early and simultaneously, but reached lower levels of thermal evolution, with moderate Ro values of about 1.2% and 0.9%, respectively, corresponding to the oil maturation window. These maturation styles mean that both oil and gas fields could have formed from Carboniferous source rocks. The differences in maturation style likely reflect the combined effects of magmatic activity during the Carboniferous and the later sedimentary environments of the various tectonic positions. As the lithosphere thickened and the supply of mantle heat diminished, the heat flow decreased rapidly from the Mesozoic onward. Even considerable later sedimentation could not subject the source rocks to higher temperatures (Fig. 2). The high Carboniferous thermal background thus produced rapid hydrocarbon maturation of the Carboniferous source rocks, with no further increase in maturity because of the continuous decrease in heat flow after this time. Meanwhile, the rapid deposition since the Mesozoic can lead to the development of stratigraphic overpressure, which favors the migration and accumulation of oil and gas (Luo et al. 2007). Hence, the Carboniferous Formations preserve the conditions for the development of large oil and gas fields in the Junggar Basin from the perspective of thermal evolution.

Conclusions

The thermal history of the Junggar Basin is quantitatively restored by integrating the paleo-geothermal gradient and paleo-heat flow methods, constrained by a large vitrinite reflectance dataset. Furthermore, the thermal maturation states of the Carboniferous source rocks in typical wells are identified based on 1D basin modeling. Two major conclusions can be drawn:

1. The paleo-heat flow of the Junggar Basin was relatively high, at ca. 80-120 mW/m², during the Carboniferous and Early Permian, which can be attributed to intense plate subduction and simultaneous large-scale volcanic activity. Meanwhile, considerable erosion is well documented in the Junggar Basin during this time. Later, the heat flow decreased rapidly during the Mesozoic and reached the present level of 40-60 mW/m² by the Cenozoic.

2. The Carboniferous source rocks started to generate hydrocarbons in the Late Carboniferous and are characterized by rapid hydrocarbon maturation.
Depending on the tectonic unit and the depositional environment, the kerogen could either rapidly break through to the dry gas stage, with gas cracking occurring, or remain in the oil maturation window and form oil reservoirs. Hence, from the perspective of thermal evolution, the Carboniferous source rocks have the potential to develop abundant hydrocarbon resources.
Direct DNA and RNA detection from large volumes of whole human blood

PCR inhibitors in clinical specimens negatively affect the sensitivity of diagnostic PCR and RT-PCR or may even cause false-negative results. To overcome PCR inhibition, increase assay sensitivity and simplify the detection protocols, simple methods based on quantitative nested real-time PCR and RT-PCR were developed to detect exogenous DNA and RNA directly from large volumes of whole human blood (WHB). Thermus thermophilus (Tth) polymerase is resistant to several common PCR inhibitors and exhibits reverse transcriptase activity in the presence of manganese ions. In combination with optimized concentrations of magnesium and manganese ions, Tth polymerase enabled efficient detection of DNA and RNA from large volumes of WHB treated with various anticoagulants. The applicability of these methods was further demonstrated by examining WHB specimens collected from different healthy individuals and stored under a variety of conditions. The detection limit of the methods was determined by detecting exogenous DNA, RNA and bacteria spiked into WHB. To the best of our knowledge, direct RNA detection from large volumes of WHB has not been reported previously. Results can be obtained within 4 hours, making the methods suitable for rapid and accurate detection of disease-causing agents from WHB.

In addition, mutant forms of Taq DNA polymerase have been developed for direct DNA detection from large volumes of WHB 28. Fewer studies have been reported concerning direct RNA detection from WHB, probably because of the fragility of RNA and the presence of high levels of RNases, which can degrade RNA and compromise its integrity. One method involving a sample pretreatment step to remove erythrocytes and lyse leukocytes has been reported 29; however, after pretreatment the blood specimen used in that work was no longer WHB. In other studies, direct nested RT-PCR has been reported for the detection of bovine viral diarrhoea virus from whole bovine blood 30,31, but the maximum volume of whole bovine blood that could be tested directly was still limited (2% of the total RT-PCR volume). To the best of our knowledge, direct RNA detection from large volumes of WHB has not been reported. The aim of this study is to develop methods to detect exogenous DNA and RNA directly from large volumes of WHB without the need for sample pretreatment, additives or specific PCR buffers. Tth polymerase, a thermostable enzyme isolated from the eubacterium Thermus thermophilus strain HB8 and expressed in Escherichia coli (E. coli), can tolerate high concentrations of inhibitory substances present in clinical specimens [32][33][34]. The enzyme also exhibits reverse transcriptase activity in the presence of manganese ions 35. The effects of magnesium and manganese ion concentrations on the ability of Tth polymerase to detect DNA and RNA from WHB were investigated. Additionally, the effects of four preanalytical factors on detection were examined: (a) WHB treated with different anticoagulants, (b) WHB collected from different individuals, (c) storage of WHB at different temperatures and (d) time delay in blood processing after collection. Finally, the ability of Tth polymerase to detect low amounts of exogenous DNA, RNA and bacteria spiked into WHB was also studied.
The developed methods were evaluated using quantitative nested real-time PCR and RT-PCR for the following reasons: (a) because real-time PCR and RT-PCR were used, inhibitory effects could be measured as an increase in quantification cycle (Cq) value; (b) the sensitivity of Tth polymerase for RNA detection has been reported to be two orders of magnitude lower than that of Taq polymerase 36, so nested RT-PCR provided higher analytical sensitivity; and (c) the formation of an opaque precipitate after WHB PCR and RT-PCR blocked fluorescence detection, and the mixtures became increasingly turbid as the blood concentration increased; nested real-time PCR using the first round PCR and RT-PCR amplicons as template circumvented this issue.

Results

Effects of magnesium and manganese ion concentrations on DNA and RNA detection from EDTA treated WHB. Magnesium and manganese ions are required co-factors for Tth polymerase, and adequate concentrations of these ions are crucial for successful DNA and RNA detection. Different concentrations of magnesium ions (from 1.5 mM to 8 mM) and manganese ions (from 2 mM to 5 mM) were used in the first round PCR and RT-PCR reactions to detect the CNRZ16 gene and tmRNA from varying amounts of EDTA treated WHB. The amplification curves of these experiments are shown in Figure S1, and the Cq values are summarized in line graphs in Fig. 1. We defined an ion concentration as adequate for a given WHB concentration when the Cq value no longer shifted to lower values (by less than 1 Cq) upon a further increase in the magnesium or manganese concentration. By this criterion, the adequate concentrations of magnesium ions for DNA detection from 10%, 20%, 30%, 40% and 50% WHB were 1.5, 3, 5, 6, and 6 mM, respectively (the mean Cq value for the NCC minus 3 Cq values was 28.68). The adequate concentrations of manganese ions for RNA detection from 10%, 20%, 30%, and 40% WHB were 3, 3, 4, and 4 mM, respectively. Because 6 mM magnesium ions and 4 mM manganese ions were adequate for all WHB concentrations tested, they were chosen for the subsequent DNA and RNA detection experiments. The specificity of Tth polymerase for DNA detection in the presence of 6 mM magnesium ions and for RNA detection in the presence of 4 mM manganese ions was demonstrated by gel electrophoresis of the first round PCR and RT-PCR products (Figure S2); no unspecific PCR or RT-PCR products were observed.
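The "adequate concentration" rule above is easy to encode. The following R sketch is illustrative only, with hypothetical Cq values: it walks a titration series and returns the lowest ion concentration beyond which a further increase improves the Cq by less than one cycle.

# Illustrative selection of the adequate ion concentration from a titration.
# conc: tested ion concentrations (mM); cq: mean Cq at each concentration
# (hypothetical values for one WHB concentration). Adequate = the first
# concentration after which further increases lower Cq by less than 1 cycle.
adequate_conc <- function(conc, cq) {
  gain <- -diff(cq)                 # Cq improvement from each increase
  idx <- which(gain < 1)            # increases that no longer help
  if (length(idx) == 0) return(NA)  # still improving at the highest dose
  conc[min(idx)]
}

conc <- c(1.5, 3, 5, 6, 7, 8)                 # mM magnesium, as in the titration
cq   <- c(36.2, 33.8, 31.0, 29.4, 29.1, 29.0) # hypothetical mean Cq values
adequate_conc(conc, cq)                        # returns 6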
Effects of citrate and heparin on DNA and RNA detection from WHB. EDTA, citrate, and heparin are the most commonly used anticoagulants in clinical hematology. The effects of citrate and heparin on the ability of Tth polymerase to detect DNA and RNA from varying amounts of WHB in the presence of adequate concentrations of magnesium and manganese ions were evaluated. The Cq values for these experiments are summarized in line graphs in Fig. 2. The results indicate that Tth polymerase enabled DNA and RNA detection from WHB treated with various anticoagulants, provided adequate concentrations of magnesium and manganese ions were present. We also found that the Cq values for DNA detection from 50% EDTA treated WHB were significantly smaller than those from 50% citrate and heparin treated WHB (p < 0.01; Fig. 2A), while the Cq values for RNA detection from 20-40% EDTA treated WHB were significantly smaller than those from 20-40% citrate and heparin treated WHB (p < 0.01; Fig. 2B). Therefore, EDTA would be the preferred anticoagulant for DNA and RNA detection from large volumes of WHB, and EDTA treated WHB was chosen for the subsequent DNA and RNA detection experiments.

Effects of WHB collected from different individuals on DNA and RNA detection. Clinical WHB specimens collected from different individuals contain a variety of substances at different concentrations. The effects of WHB collected from 10 individuals on the ability of Tth polymerase to detect DNA and RNA in the presence of adequate concentrations of magnesium or manganese ions were evaluated. The Cq values for these experiments are summarized in line graphs in Fig. 3. From these results, we tentatively conclude that the combination of Tth polymerase with adequate concentrations of magnesium and manganese ions reproducibly detects DNA and RNA from WHB specimens collected from different individuals. We also found that the Cq values for RNA detection from 10% and 20% EDTA treated WHB remained nearly unchanged (Fig. 3B), indicating that the combination of Tth polymerase with adequate concentrations of manganese ions enables RNA detection from 10-20% WHB without reduced detection capacity.

Effects of WHB stored under a variety of conditions on DNA and RNA detection. Depending on the aim of the laboratory analysis, WHB and blood fractions may be stored under a variety of conditions. The effects of WHB stored at room temperature, −20 °C and −80 °C for ten days on the ability of Tth polymerase to detect DNA and RNA in the presence of adequate concentrations of magnesium or manganese ions were evaluated. The Cq values for these experiments, together with those from fresh WHB, are shown in Fig. 4A and B. The results indicate that not only fresh blood but also WHB specimens stored at room temperature or frozen for at least 10 days can be analyzed by Tth polymerase in combination with adequate magnesium and manganese ions. We also found that the Cq values for DNA and RNA detection from WHB stored at room temperature, −20 °C, and −80 °C for ten days shifted to higher values compared with those from fresh WHB (p < 0.01; Fig. 4A and B). The effects of WHB stored at +4 °C for 1, 2 and 4 months were also evaluated. The Cq values for these experiments, together with those from fresh WHB, are shown in Fig. 4C and D. The results indicate that WHB specimens stored refrigerated for as long as 4 months can still be analyzed by Tth polymerase in combination with adequate magnesium and manganese ions. We also found that the Cq values for DNA and RNA detection from varying amounts of WHB stored at +4 °C shifted to higher values compared with those from fresh WHB (p < 0.01). Among the three storage durations, the Cq values for DNA detection from 20-50% WHB stored at 4 °C for 2 and 4 months were much higher than those from WHB stored for 1 month (p < 0.01; Fig. 4C), while the Cq values for RNA detection from 10%, 30%, and 40% WHB stored at 4 °C for 2 and 4 months were much higher than those from WHB stored for 1 month (p < 0.01; Fig. 4D).

Detection limit of DNA, RNA, and bacteria from varying amounts of WHB.
The ability of Tth polymerase to detect low concentrations of DNA and RNA from varying amounts of WHB in the presence of adequate concentrations of the corresponding ions was evaluated. A 10-fold dilution series of E. faecalis Symbioflor 1 genomic DNA in EDTA treated WHB was prepared to mimic a clinical specimen. Failure of a reaction to detect DNA is represented in the graph by a Cq value of 40. The detection limit of E. faecalis Symbioflor 1 genomic DNA from 10%, 20%, and 30% WHB was 5.8 copies/μL (Fig. 5A), whereas the detection limits from 40% and 50% WHB were poorer by 2 log units (5.8 × 10² copies/μL) and by at least 3 log units (no detection was observed with the given DNA dilutions; data not shown), respectively. Because the exogenous DNA was suspended in WHB, more template was available to Tth polymerase at higher WHB concentrations. However, the additional DNA template did not relieve the inhibitory effects caused by higher concentrations of WHB, which led to significantly increased Cq values from 40% WHB (p < 0.01; Fig. 5A). Owing to the high levels of RNases in WHB, which degrade RNA and compromise its integrity, tmRNA could not be suspended or diluted in WHB. To ensure that more tmRNA was introduced into RT-PCR mixtures containing higher concentrations of WHB, four 10-fold dilution series of tmRNA in DEPC-treated water, corresponding to 10%, 20%, 30%, and 40% WHB, were prepared. The equivalent concentrations of tmRNA in each concentration of WHB ranged from 6.8 × 10⁵ to 6.8 × 10² copies/μL. The detection limit of E. faecalis Symbioflor 1 tmRNA from 10%, 20%, 30%, and 40% WHB was 6.8 × 10³ copies/μL (Fig. 5B), and the Cq values for tmRNA detection were similar across the WHB concentrations tested. Since the developed methods should also serve the diagnosis of bacterial infections, the detection limit for bacteria was evaluated as well. The bacteria were washed three times with sterile saline to remove nucleic acids attached to the cells. A dilution series of E. faecalis Symbioflor 1 in EDTA treated WHB was prepared to mimic a clinical specimen. The detection limit of E. faecalis Symbioflor 1 targeting the CNRZ16 gene from 10%, 20%, 30%, and 40% WHB was 6.6 CFU/μL (Fig. 5C), whereas the detection limit from 50% WHB was poorer by 1 log unit (66 CFU/μL). The additional targets introduced into the PCR mixtures did not relieve the inhibitory effects caused by higher concentrations of WHB, which led to significantly increased Cq values from 40% and 50% WHB (p < 0.01). The detection limit of E. faecalis Symbioflor 1 targeting tmRNA from 10%, 20%, 30%, and 40% WHB was also 6.6 CFU/μL (Fig. 5D), and the Cq values for bacterial detection targeting tmRNA were similar across the WHB concentrations tested.

Discussion

PCR- and RT-PCR-based diagnostic assays of WHB may have low sensitivity or even give false-negative results because of PCR inhibitors. Tth polymerase has proven resistant to several common PCR inhibitors in clinical specimens [32][33][34], and exhibits reverse transcriptase activity in the presence of manganese ions 35. There is also evidence that RNA detection by Tth polymerase is more resistant to inhibitors present in nasopharyngeal swabs than detection by Taq polymerase 36. Therefore, we attempted to use Tth polymerase to directly detect DNA and RNA from large volumes of WHB without the need for sample pretreatment, additives or specific PCR buffers.
We found that when the supplied PCR buffer containing 1.5 mM magnesium ions was used in WHB PCR reactions, Tth polymerase enabled DNA detection only from 10% WHB (Fig. 1A). Moreover, when 2 mM manganese ions were used in WHB RT-PCR reactions, Tth polymerase enabled RNA detection from 40% WHB only with increased Cq values (Fig. 1B). EDTA is commonly used in WHB as an anticoagulant to prevent clotting and maintain specimen quality. It acts by binding the calcium ions in WHB that are necessary for a variety of enzyme reactions of the coagulation cascade 37. However, excessive EDTA can become a PCR inhibitor by binding DNA polymerase co-factors (magnesium and manganese ions), especially in reactions with high concentrations of WHB. Therefore, the concentrations of magnesium and manganese ions in WHB PCR and RT-PCR reactions should be increased to compensate for the inhibitory effect of EDTA. After adding adequate concentrations of magnesium and manganese ions, Tth polymerase enabled DNA detection from 50% WHB and RNA detection from 40% WHB with lower Cq values. The Cq values for DNA detection from varying amounts of WHB in the presence of 6, 7 and 8 mM magnesium ions were similar, as were those for RNA detection in the presence of 4 and 5 mM manganese ions (Fig. 1). Excessive magnesium and manganese ions may reduce the fidelity of Tth DNA polymerase and cause the generation of unwanted products. Therefore, 6 mM magnesium ions and 4 mM manganese ions were chosen for the DNA and RNA detection experiments. Laboratory tests are also performed on WHB treated with citrate and heparin anticoagulants. Citrate and heparin act by binding calcium ions and by inhibiting thrombin activity, respectively. Excessive citrate can also become a PCR inhibitor in the same way as EDTA. The inhibitory effect of heparin has been attributed to an interaction with DNA and DNA polymerase. Although both DNA and heparin are highly negatively charged and would not be expected to interact, binding between these two molecules could be mediated by divalent cations such as magnesium ions 38. Several studies have reported on the inhibitory effects of various anticoagulants in different PCR reactions. In the case of detecting Streptococcus pneumoniae from blood, Friedland et al. observed PCR inhibitory effects in the presence of EDTA and citrate rather than heparin 13. In the case of detecting Aspergillus fumigatus from plasma, Garcia et al. observed PCR inhibitory effects in the presence of citrate and heparin rather than EDTA 39. In our study, the combination of Tth polymerase with adequate concentrations of magnesium and manganese ions overcame the inhibitory effects of the various anticoagulants in WHB, and EDTA was likely to be the preferred anticoagulant for WHB DNA and RNA detection compared with citrate and heparin (Fig. 2). The higher concentration of citrate in WHB (9.6 mM) than of EDTA (3.9 mM), and the multiplex inhibitory effects of heparin on DNA, DNA polymerase and DNA polymerase co-factors, may explain the better tolerance of Tth polymerase for EDTA treated WHB in DNA and RNA detection. We observed variations in the Cq values for DNA and RNA detection from fresh WHB, frozen WHB and WHB stored refrigerated for different durations.
It is believed that a freezing-thawing cycle ruptures erythrocytes and leukocytes very efficiently, owing to intracellular ice formation and probably, to some extent, to hypertonicity 40. The rupture of erythrocytes and leukocytes also occurs in WHB specimens stored refrigerated and increases with the duration of storage. Meanwhile, the cells in WHB will eventually lyse completely anyway during the denaturation step of PCR or RT-PCR 24. We speculate that with prelysed blood, hemin, formed by heme oxidation in vitro after erythrocyte rupture and exposure to air, is mainly responsible for the observed variations in the Cq values for DNA and RNA detection from fresh WHB, frozen WHB and WHB stored refrigerated for different durations. Hemin has been reported to inhibit DNA polymerases from human neuroblastoma cells and erythroid hyperplastic bone marrow cells, as well as the reverse transcriptase of Rauscher murine leukemia virus [41][42][43]. tmRNA was discovered in E. coli and described as a small stable RNA present at ~1000 copies per cell 44. We found that the detection limit of bacteria targeting genomic DNA (1 copy per cell; detection limit 5.8 copies/μL) or tmRNA (~1000 copies per cell; detection limit 6.8 × 10³ copies/μL) from 10% to 30% WHB was the same, as shown in Fig. 5A and B. The detection limit of bacteria targeting tmRNA was 2 log units higher than that targeting genomic DNA from 40% WHB. The binding or degradation of low amounts of genomic DNA by inhibitory substances in high concentrations of WHB may explain this phenomenon, which was further supported by detecting intact bacteria targeting genomic DNA from 40% and 50% WHB (Fig. 5C). In summary, we developed simple methods to directly detect DNA and RNA from large volumes of WHB without the need for sample pretreatment, additives or specific PCR buffers. The results of these methods can be obtained within 4 hours, making them suitable for rapid and accurate detection of disease-causing agents from WHB.

Materials and Methods

Blood specimens. WHB specimens treated with tripotassium EDTA, trisodium citrate and sodium heparin were obtained from healthy individuals at the University Medical Center Freiburg, Germany. The WHB specimens were approved by the Research Ethics Committee of the University of Freiburg, and the study was performed in accordance with the relevant guidelines. Informed consent was obtained from all individuals. The citrate and heparin treated WHB specimens were stored at +4 °C until use. The EDTA treated WHB specimens were split in half: one half was stored at +4 °C, and the other half was divided into three fractions and stored at room temperature (25 °C), −20 °C and −80 °C for ten days, respectively.

Bacterial culture. Enterococcus faecalis Symbioflor 1 (E. faecalis Symbioflor 1) was cultured overnight in lysogeny broth Luria (LB-Luria). 1 mL of bacterial culture was centrifuged at 8000 × g for 5 min to pellet the bacteria. The supernatant was discarded and the pellet was washed three times with sterile saline (0.9% NaCl). 10-fold serial dilutions in sterile saline were prepared, and a volume of 100 μL of each dilution was plated onto LB-Luria agar plates and incubated overnight at 37 °C to count the number of bacteria (CFU). The concentration of E. faecalis Symbioflor 1 in sterile saline after washing was 6.6 × 10⁴ CFU/μL, and 200 μL aliquots of the bacterial suspension at this concentration were stored at −20 °C until use.
DNA and RNA template. E. faecalis Symbioflor 1 genomic DNA and total RNA were extracted using the QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) and the RNeasy Mini Kit (Qiagen, Hilden, Germany), respectively. All extraction procedures were performed according to the manufacturer's instructions. DNase I (Ambion, Kaufungen, Germany) treatment was used to remove trace amounts of genomic DNA from the total RNA. The final concentrations of genomic DNA and total RNA were determined spectrophotometrically using a NanoPhotometer (Implen, München, Germany) and were 17.5 ng/μL and 50.5 ng/μL, respectively.
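For orientation, the copy numbers quoted in the Results (e.g., 5.8 copies/μL after serial dilution) follow from the measured mass concentration and the genome size. The R fragment below is a generic back-of-the-envelope conversion; the E. faecalis Symbioflor 1 genome size used here (~2.8 Mb) is an assumption for illustration, not a value taken from the paper.

# Convert a genomic DNA mass concentration (ng/uL) to copies/uL.
# Assumes ~650 g/mol per base pair of double-stranded DNA; the genome size
# below is an illustrative assumption, not a value given in the article.
copies_per_ul <- function(ng_per_ul, genome_bp) {
  avogadro <- 6.022e23
  grams <- ng_per_ul * 1e-9                      # ng -> g
  mol   <- grams / (genome_bp * 650)             # moles of genomes
  mol * avogadro                                 # copies per microlitre
}

stock <- copies_per_ul(17.5, genome_bp = 2.8e6)  # approx 5.8e6 copies/uL
stock / 1e6                                      # six 10-fold dilutions -> ~5.8

Under this assumed genome size, six 10-fold dilutions of the 17.5 ng/μL stock land close to the 5.8 copies/μL detection limit reported above.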
Design of primer and probe. The primer pairs and hydrolysis probe used in this study were designed with the Primer3Plus online service to detect stretches of the CNRZ16 gene of E. faecalis Symbioflor 1, which encodes tmRNA. The primer pair used for the nested real-time PCR reactions was designed from the region within the sequence of the first round PCR and RT-PCR amplicons. The sequences of the primer pairs and hydrolysis probe, along with the sizes of the expected amplicons, are summarized in Table 1.

The reaction mixtures for the first round PCR and the subsequent nested real-time PCR of the quantitative nested real-time PCR contained 0.5 μM of each primer, 0.2 mM dNTP mix, 0.25 U/μL Tth polymerase (Bioron, Ludwigshafen, Germany) and 1× PCR buffer: 10 mM Tris-HCl, 0.1 M KCl, 50 μg/mL BSA, 0.05% (w/v) Tween 20, and 1.5 mM MgCl2. A separate solution of 100 mM MgCl2 (Bioron, Ludwigshafen, Germany) was used for titration in the first round PCR. 0.3 μM hydrolysis probe was used in the nested real-time PCR. Varying amounts of WHB specimens (4, 8, 12, 16 and 20 μL) were added directly to the first round PCR to make up 10-50% of the total reaction volume (the highest concentration of WHB was 56% based on the PCR protocol). 1 μL genomic DNA was added to the first round PCR as the last ingredient, and 1 μL of the first round PCR amplicons, after 100-fold dilution, was used as template in the nested real-time PCR. When varying amounts of blood-DNA mixtures or blood-bacteria mixtures (4, 8, 12, 16 and 20 μL) were used for detection limit determination, 1 μL of the first round PCR amplicons was used directly in the nested real-time PCR without dilution. The first round PCR and the nested real-time PCR were performed at 94 °C for 2 min, followed by 40 amplification cycles of 94 °C for 10 s, 53 °C for 20 s and 72 °C for 30 s, and a final extension at 72 °C for 7 min.

The reaction mixtures for the first round RT-PCR and the subsequent nested real-time PCR of the quantitative nested real-time RT-PCR contained 0.5 μM of each primer, 0.2 mM dNTP mix, and 0.25 U/μL Tth enzyme. The RT-PCR buffer used in the first round RT-PCR was as follows: 50 mM bicine/KOH, 115 mM K-acetate, and 8% glycerol (v/v). The PCR buffer used in the nested real-time PCR was as mentioned above. A separate solution of 100 mM MnCl2 (Bioron, München, Germany) was used for titration in the first round RT-PCR. 0.3 μM hydrolysis probe was used in the nested real-time PCR. Varying amounts of WHB specimens (4, 8, 12 and 16 μL) were added directly to the first round RT-PCR to make up 10-40% of the total reaction volume (the highest concentration of WHB was 46.5% based on the RT-PCR protocol). 50 U RNase inhibitor (Ambion, Kaufungen, Germany) was added to the first round RT-PCR mixture and mixed thoroughly by vigorous shaking to inhibit RNases. 1 μL total RNA was added to the first round RT-PCR as the last ingredient, and 1 μL of the first round RT-PCR amplicons, after 100-fold dilution, was used as template in the nested real-time PCR. When 1 μL of the four total RNA dilution series or varying amounts of blood-bacteria mixtures (4, 8, 12 and 16 μL) were used for detection limit determination, 1 μL of the first round RT-PCR amplicons was used directly in the nested real-time PCR without dilution. The RT step was performed at 61 °C for 30 min; the subsequent first round PCR and nested real-time PCR were performed as described above.

No template control 1 (NTC 1) contained neither WHB nor nucleic acid template. No template control 2 (NTC 2) contained 10% WHB and no nucleic acid template. NTC 3 was PCR detection of the total RNA after DNase I treatment. The exogenous DNA added to the first round PCR was ultimately carried into the nested real-time PCR after 100-fold dilution, independent of the success of the first round PCR, resulting in an amplification curve. Therefore, a no cycling control (NCC) was used to indicate the success of the first round PCR. The NCC for the quantitative nested real-time PCR contained 10% WHB and 1 μL DNA template, and the amplification protocol for the NCC consisted of the nested real-time PCR only. The PCR mixtures for the NCC were incubated at room temperature during the first round PCR, and 1 μL of these mixtures was used as template in the nested real-time PCR reactions after 100-fold dilution. The first round PCR was defined as successful when the Cq value was at least 3 Cq values lower than the mean Cq value for the NCC. The first round PCR and RT-PCR products were visualized on 1.5% agarose gels with GelGreen staining (Biotium, Hayward, USA). The gels were recorded with a gel documentation system (FujiFilm, Tokyo, Japan).

Data availability. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Linear Quantile Mixed Models: The lqmm Package for Laplace Quantile Regression

Inference in quantile analysis has received considerable attention in recent years. Linear quantile mixed models (Geraci and Bottai 2014) represent a flexible statistical tool to analyze data from sampling designs, such as multilevel, spatial, panel or longitudinal designs, which induce some form of clustering. In this paper, I show how to estimate conditional quantile functions with random effects using the R package lqmm. Modeling, estimation and inference are discussed in detail using a real data example. A thorough description of the optimization algorithms is also provided.

Introduction

In classical statistics, a common assumption is that sample observations are drawn independently from the same population. However, dependent data arise in many studies. For example, clinical observations, such as blood pressure or insulin measurements, taken repeatedly on the same individuals are likely to be more similar than observations from different individuals; environmental measurements (e.g., air pollution or rainfall measurements) taken in the same geographic area will show a substantial degree of spatial correlation. Groups of dependent observations are commonly called clusters. Mixed-effects models (or mixed models; Pinheiro and Bates 2000; Demidenko 2004) are highly popular and flexible regression models used to analyze the conditional mean of clustered outcome variables. The range of applications of mixed models is vast, from medical studies, in which responses to an exposure or a treatment show a strong degree of heterogeneity between subjects due to unobserved genetic factors, to agricultural field trials and environmental and wildlife ecology studies, to mention a few. Mixed models are based on the assumption that predictors affect the conditional distribution of the outcome only through its location parameter (i.e., the mean). Empirical evidence shows that this assumption is inappropriate in a number of real-world applications. For example, the negative effects of maternal smoking during pregnancy or lack of prenatal care have been amply documented. In particular, these factors have been shown to decrease the average weight of infants at birth. However, infants who rank lower in the distribution (i.e., low birthweight infants) are affected by smoking and lack of prenatal care to a greater extent than average-weight infants, and the latter to a greater extent than those who rank higher in the distribution (Abrevaya 2001; Koenker and Hallock 2001; Geraci 2013). In general, individuals who rank differently according to some outcome variable (e.g., blood pressure, body mass index, size of a tumor) might be affected by risk factors (e.g., age, gender, socioeconomic status) to a different extent or even in opposite ways. Quantile regression (QR) is a statistical tool that extends regression for the mean to the analysis of the entire conditional distribution of the outcome variable. Therefore, location, scale and shape of the distribution can be examined through the analysis of conditional quantile models to provide a complete picture of the distributional effects. The application of QR methods to clustered data is an emerging area of research in statistics. There have been several proposals of QR for dependent data, including Lipsitz, Fitzmaurice, Molenberghs, and Zhao (1997), Koenker (2004), Geraci and Bottai (2007), Reich, Bondell, and Wang (2010), and Canay (2011).
Recently, Geraci and Bottai (2014) developed a class of models, called linear quantile mixed models (LQMMs), which extends quantile regression models with random intercepts (Geraci 2005; Geraci and Bottai 2007) to include random slopes, and introduced new computational approaches. These are based on the asymmetric Laplace (AL) likelihood (Hinkley and Revankar 1977), which has a well-known relationship with the L1-norm objective function described by Koenker and Bassett (1978). Briefly, consider a sample of observations $(x_i, y_i)$, $i = 1, \dots, M$, drawn independently from a population with continuous distribution function $F_{y_i|x_i}$. The latter is assumed to be unknown, with quantile function given by its inverse $Q_{y_i|x_i} \equiv F^{-1}_{y_i|x_i}$. In linear QR problems, the goal is to estimate models of the type $Q_{y_i|x_i}(\tau) = x_i^{\top}\beta^{(\tau)}$, where $\tau$, $0 < \tau < 1$, denotes the quantile level of interest. The $\tau$th regression quantile is defined as any solution of (Koenker and Bassett 1978)

$$\hat{\beta}^{(\tau)} = \arg\min_{\beta} \sum_{i=1}^{M} \rho_{\tau}\left(y_i - x_i^{\top}\beta\right), \tag{1}$$

where $\rho_{\tau}(v) = \tau \max(v, 0) + (1 - \tau)\max(-v, 0)$ is the asymmetrically weighted L1 loss function. Nonlinear QR is discussed by Koenker (2005). Over the past few years, a number of QR methods based on the AL distribution have appeared in the literature (Koenker and Machado 1999; Yu and Moyeed 2001; Geraci and Bottai 2007; Lee and Neocleous 2010; Farcomeni 2012; Wang 2012). A continuous random variable $w \in \mathbb{R}$ is said to follow an AL density with parameters $(\mu, \sigma, \tau)$, written $w \sim \mathrm{AL}(\mu, \sigma, \tau)$, if its density can be expressed as

$$p(w \mid \mu, \sigma, \tau) = \frac{\tau(1-\tau)}{\sigma} \exp\left\{-\rho_{\tau}\left(\frac{w - \mu}{\sigma}\right)\right\},$$

where $\mu \in \mathbb{R}$ is the location parameter, $\sigma \in \mathbb{R}^{+}$ is the scale parameter, and $0 < \tau < 1$ is the skewness parameter. The mean and variance of this distribution are given by (Yu and Zhang 2005)

$$E(w) = \mu + \sigma\,\frac{1 - 2\tau}{\tau(1-\tau)}, \qquad \mathrm{VAR}(w) = \sigma^2\,\frac{1 - 2\tau + 2\tau^2}{\tau^2(1-\tau)^2}.$$

Note that $\rho_{\tau}(\cdot)$ is the loss function introduced in (1). It is easy to verify that the location parameter $\mu$ is the $\tau$th quantile of $w$, i.e., $P(w \leq \mu) = \tau$. Therefore, for a fixed $\tau$, it is convenient to estimate the $\tau$th regression quantile from the model

$$y_i \sim \mathrm{AL}\left(\mu_i^{(\tau)}, \sigma, \tau\right), \qquad \mu_i^{(\tau)} = x_i^{\top}\beta^{(\tau)}, \tag{2}$$

since maximum likelihood estimation (MLE) of $\beta^{(\tau)}$ from Equation 2 is equivalent to the solution of (1). Such computational equivalence provided Geraci (2005) with a (quasi-)likelihood framework within which to estimate the conditional quantiles of longitudinal outcomes. This paper's focus is on the package lqmm (Geraci 2014) for the R statistical computing environment (R Core Team 2013), which is available from the Comprehensive R Archive Network (CRAN) at http://CRAN.R-project.org/package=lqmm. In the next section, the dataset that will be used to illustrate LQMM methods is briefly described. The models and the estimation algorithms are introduced in Sections 3 and 4, respectively. Section 5 is dedicated to lqmm methods. Short examples and related R code are given throughout to illustrate individual commands. A more extensive data analysis using LQMMs is offered in Section 6. The notation used in this paper, as well as the labeling adopted in the lqmm package, follows closely that of Geraci and Bottai (2014). Vectors and matrices are denoted in bold, $I_k$ denotes the $k \times k$ identity matrix, and $\bigoplus_{i=1}^{n} A_i$ is the direct sum of the $n$ matrices $A_i$.
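To make the roles of the check loss and the AL density concrete, here is a small self-contained R illustration (mine, not from the article): it codes the two functions defined above and verifies numerically that the AL location parameter µ is indeed the τth quantile.

# Check (asymmetric L1) loss and AL density, as defined above.
rho <- function(v, tau) tau * pmax(v, 0) + (1 - tau) * pmax(-v, 0)
dal <- function(w, mu = 0, sigma = 1, tau = 0.5) {
  tau * (1 - tau) / sigma * exp(-rho((w - mu) / sigma, tau))
}

# Numerical check that P(w <= mu) = tau, here with mu = 0 and tau = 0.75.
integrate(dal, lower = -Inf, upper = 0, tau = 0.75)$value  # approx 0.75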
Orthodontic growth data

These data collect repeated measurements of the distance from the center of the pituitary to the pterygomaxillary fissure, two points that are identified on x-ray exposures of the side of the head, in 27 children (16 boys, 11 girls). Measurements were taken by researchers of the University of North Carolina Dental School at four different ages (8, 10, 12, 14 years), giving 108 observations in total, to study growth patterns by sex. The dataset was reported in Potthoff and Roy (1964) and used for illustration of mixed modeling methods by Pinheiro and Bates (2000). The dataset is available in the package nlme (Pinheiro, Bates, DebRoy, Sarkar, and R Core Team 2014) as well as in lqmm. I load the former, as it provides useful functions for objects of class 'groupedData', and then I obtain a summary of the dataset as follows:

R> library("nlme")
R> data("Orthodont", package = "nlme")
R> Orthodont$Subject <- as.character(Orthodont$Subject)
R> Orthodont <- update(Orthodont, units = list(x = "(years)", y = "(mm)"),
+    order.groups = FALSE)
R> summary(Orthodont)

A trellis plot (Sarkar 2008) of selected individual temporal trajectories in the pituitary-pterygomaxillary fissure distance is shown in Figure 1. This was obtained with the plot method for 'nfnGroupedData' objects provided by package nlme.

Figure 1: Trellis plot of pituitary-pterygomaxillary fissure distance conditional on age in 10 girls (subjects F1-F10) and 10 boys (subjects M01-M10). A loess smooth is shown in each panel.

To simplify some of the examples, only a subset (i.e., girls) is used (see also Pinheiro and Bates 2000, p. 35). The full dataset is analyzed in Section 6.

Linear mixed models

Consider clustered data in the form $(x_{ij}^{\top}, z_{ij}^{\top}, y_{ij})$, for $j = 1, \dots, n_i$ and $i = 1, \dots, M$, $N = \sum_i n_i$, where $x_{ij}^{\top}$ is the $j$th row of a known $n_i \times p$ matrix $X_i$, $z_{ij}^{\top}$ is the $j$th row of a known $n_i \times q$ matrix $Z_i$ and $y_{ij}$ is the $j$th observation of the $i$th response vector $y_i = (y_{i1}, \dots, y_{in_i})^{\top}$. Mixed effects models represent a common and well-known class of regression models used to analyze data coming from such designs. A typical formulation of a linear mixed model (LMM) for clustered data is given by

$$y_i = X_i\beta + Z_i u_i + \varepsilon_i, \tag{3}$$

where $\beta$ and $u_i$, $i = 1, \dots, M$, are, respectively, fixed and random effects associated with the $p$ and $q$ model covariates, and the response vector $y_i$ is assumed to follow a multivariate normal distribution characterized by some parameter $\theta$. The dependence among the observations within the $i$th cluster is induced by the random effect vector $u_i$, which is shared by all observations within the same cluster. However, the random effects and the within-cluster errors are assumed to be independent for different clusters and to be mutually independent for the same cluster (Pinheiro and Bates 2000). In matrix form, the model above can be written as

$$y = X\beta + Zu + \varepsilon,$$

where $y = (y_1^{\top}, \dots, y_M^{\top})^{\top}$, $X = \left[X_1^{\top} | \dots | X_M^{\top}\right]^{\top}$, $Z = \bigoplus_{i=1}^{M} Z_i$, and $u = (u_1^{\top}, \dots, u_M^{\top})^{\top}$. From a conditional point of view, the LMM is a location-shift model. That is, the predictors $X = \{x_1, \dots, x_p\}$ and $Z = \{z_1, \dots, z_q\}$, where often $X \supseteq Z$, are assumed to shift the conditional expectation $E(y|u) = X\beta + Zu$ only, without affecting the response distribution in any other respect. However, a particular marginal model can be derived from (3) (Lee and Nelder 2004). Suppose $X = Z$ and $\mathrm{COV}(\varepsilon) = \psi^2 I_N$. Equation 3 can be rewritten as $y = X\beta + \varepsilon^{*}$, where $\varepsilon^{*} = Zu + \varepsilon$, with $E(y) = X\beta$ and $\mathrm{COV}(\varepsilon^{*}) = Z\,\mathrm{COV}(u)\,Z^{\top} + \psi^2 I_N$. The within-cluster random term then confers a specific heteroscedastic structure on the model's error term $\varepsilon^{*}$, despite the iid assumptions on $\varepsilon$. For example, consider the orthodontic growth data described earlier. Using the package lme4 (Bates, Maechler, Bolker, and Walker 2014), a random-intercept model is fitted by restricted maximum likelihood (REML) to estimate the growth rate in girls. The random effects $u_i$, $i = 1, \dots, 11$, are therefore assumed $N(0, \psi_u^2)$.
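The lmer call itself is not reproduced above; a minimal sketch consistent with the description (girls-only subset, age centered at 11 years, random intercept per subject, REML) would be as follows, noting that the construction of Orthodont.sub and age.c is my assumption rather than code shown in the article:

# Sketch of the random-intercept fit described in the text (assumed setup).
library("lme4")
Orthodont.sub <- subset(Orthodont, Sex == "Female")
Orthodont.sub$age.c <- Orthodont.sub$age - 11
fit.lmer <- lmer(distance ~ age.c + (1 | Subject), data = Orthodont.sub,
                 REML = TRUE)
summary(fit.lmer)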
Note also that the variable age is centered at 11 years so that the cross-product between intercept and slope is zero. The estimated intraclass correlation coefficient (ICC) is about 0.87, which suggests that measurements within the same subject are strongly correlated at the mean of the marginal distribution of the response. In other words, 87% of the total variability in the response not explained by the population average and the population age effect is due to unobserved heterogeneity between subjects. These results, however informative, do not tell us what happens in the rest of the conditional distribution. Most importantly, the data show signs of non-normality (Figure 2), for which the assumptions of a location-shift model may prove inappropriate.

Linear quantile mixed models

As seen in Section 1, the convenience of using an AL distribution allows estimating the $\tau$th conditional quantile using maximum likelihood methods. Let us assume that the $y_i$'s, $i = 1, \dots, M$, conditionally on a $q \times 1$ vector of random effects $u_i$, are independently distributed according to an unknown continuous distribution $F_{y_i|u_i}$ (therefore the Gaussian assumption for $y_i|u_i$ of a LMM is abandoned). Independence is also assumed for the within-cluster errors, though in principle extensions to allow for within-cluster correlation can be considered. A joint AL model for $y_i|u_i$ is introduced, with location and scale parameters given by $\mu_{ij}^{(\tau)} = x_{ij}^{\top}\theta_x^{(\tau)} + z_{ij}^{\top}u_i$ and $\sigma^{(\tau)}$, respectively, where $\theta_x^{(\tau)} \in \mathbb{R}^p$ is a vector of unknown fixed effects. The $\tau$th LQMM is given by

$$Q_{y_{ij}|u_i}(\tau) = x_{ij}^{\top}\theta_x^{(\tau)} + z_{ij}^{\top}u_i, \tag{4}$$

which can be compactly written in matrix form as $\mu^{(\tau)} = X\theta_x^{(\tau)} + Zu$. The skewness parameter $\tau$ is set a priori and defines the quantile level to be estimated. Also, $u_i = (u_{i1}, \dots, u_{iq})^{\top}$, for $i = 1, \dots, M$, is assumed to be a zero-median random vector, independent from the model's error term, distributed with density $p\left(u_i \mid \Psi^{(\tau)}\right)$, where $\Psi^{(\tau)}$ is a $q \times q$ covariance matrix. Note that all the parameters are $\tau$-dependent; the random effects vector $u$ depends on $\tau$ through $\Psi^{(\tau)}$. The superscript $\tau$ will be omitted when this is not a source of confusion. From the LQMM in Equation 4, the joint density of $(y, u)$ based on $M$ clusters at the $\tau$th quantile is given by

$$p\left(y, u \mid \theta_x, \sigma, \Psi\right) = \prod_{i=1}^{M} p\left(y_i \mid u_i; \theta_x, \sigma\right)\, p\left(u_i \mid \Psi\right). \tag{5}$$

It is worth stressing that, although the conditional distribution $F_{y_{ij}|u_i}$ is assumed to be unknown, its $\tau$th quantile is conveniently estimated as the location parameter $\mu_{ij}^{(\tau)} = x_{ij}^{\top}\theta_x^{(\tau)} + z_{ij}^{\top}u_i$ of an AL distribution with scale $\sigma^{(\tau)}$ and (given) skewness $\tau$. Since the model's interpretation is conditional, one could also define the parameter $\theta_x^{(\tau)}$ as the $\tau$th quantile of $y_{ij}|u_i = 0$. Following the definition of the joint density in Equation 5, the next step consists in adopting an estimation strategy for the parameters of interest, which prompts considerations on how to deal with the unobserved random effects $u$. There is a rich literature on mixed models estimation. The typical approach is to integrate the random effects out of the joint distribution and then optimize the integrated (log-)likelihood. The marginal likelihood of a LMM, for example, has a closed-form expression (Pinheiro and Bates 2000), and (restricted) maximum likelihood estimation is carried out using iterative algorithms such as the EM (Dempster, Laird, and Rubin 1977) or Newton-Raphson (Laird and Ware 1982) algorithms. In models where the integrals do not have a closed-form solution, one needs to resort to approximate methods such as marginal and penalized quasi-likelihood methods, Markov Chain Monte Carlo methods, and numerical integration.
This is the case for generalized linear mixed models (Booth and Hobert 1999; Rabe-Hesketh, Skrondal, and Pickles 2002; Pinheiro and Chao 2006), nonlinear mixed models (Pinheiro and Bates 1995) and models for non-Gaussian continuous responses (Staudenmayer, Lake, and Wand 2009). The first attempt to fit quantile regression models with random intercepts led to a Monte Carlo EM procedure (Geraci and Bottai 2007), which, however, can be computationally intensive and inefficient. A different approach based on Gaussian quadrature has been proposed by Geraci and Bottai (2014) and implemented in the R package lqmm. Briefly, the log-likelihood for $M$ clusters integrated over $\mathbb{R}^q$ is (up to an additive constant) given by

$$\ell\left(\theta, \sigma \mid y\right) = \sum_{i=1}^{M} \log \int_{\mathbb{R}^q} p\left(y_i \mid u_i; \theta_x, \sigma\right)\, p\left(u_i \mid \Psi\right)\, du_i,$$

where, to simplify the notation, the superscript $(\tau)$ is dropped from $\sigma$. For fitting purposes, the covariance matrix of the random effects is parameterized in terms of an $m$-dimensional vector of non-redundant parameters $\theta_z$, i.e., $\Psi(\theta_z)$. The parameter of interest is defined as $\theta = \left(\theta_x^{\top}, \theta_z^{\top}\right)^{\top}$. It is immediate to verify that there is a strict relationship between the choice of a distribution for $u$ and the type of quadrature. The assumption of normal random effects, $u \sim N(0, \Psi)$, which is often introduced in mixed models, leads directly to a Gauss-Hermite quadrature for the approximate AL-based log-likelihood,

$$\ell_{\mathrm{app}}\left(\theta, \sigma \mid y\right) = \sum_{i=1}^{M} \log \sum_{k_1=1}^{K} \cdots \sum_{k_q=1}^{K} p\left(y_i \mid v_{k_1, \dots, k_q}; \theta_x, \sigma\right) \prod_{l=1}^{q} w_{k_l}, \tag{6}$$

with nodes $v_{k_1, \dots, k_q} = (v_{k_1}, \dots, v_{k_q})^{\top}$ and weights $w_{k_l}$, $l = 1, \dots, q$. The constant $K$ is an integer giving the number of points for each of the $q$ one-dimensional integrals over the real line. As a robust alternative to the Gaussian distribution, it is also possible to consider the symmetric Laplace (double-exponential) distribution, which leads to an approximation similar to Equation 6, but where the nodes and weights are those of a Gauss-Laguerre quadrature rule. In lqmm, the quadrature nodes and weights are obtained using the command gauss.quad from statmod (Smyth, Hu, Dunn, Phipson, and Chen 2013).
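As an aside, the quadrature machinery is easy to reproduce. The short R sketch below (illustrative, not package code) uses statmod::gauss.quad to approximate a one-dimensional integral of the kind appearing in the likelihood: the AL density of a single observation averaged over a standard normal random effect.

# Gauss-Hermite approximation of an integral of the form E[g(u)], u ~ N(0, 1).
# gauss.quad gives nodes/weights for the weight function exp(-x^2), so the
# change of variable u = sqrt(2) * x maps them to the standard normal.
library("statmod")

rho <- function(v, tau) tau * pmax(v, 0) + (1 - tau) * pmax(-v, 0)
dal <- function(w, mu, sigma, tau) tau * (1 - tau) / sigma *
  exp(-rho((w - mu) / sigma, tau))

g <- function(u) dal(2, mu = u, sigma = 1, tau = 0.5)  # AL density of y = 2 given u

gh <- gauss.quad(7, kind = "hermite")                  # K = 7, lqmm's default nK
approx_val <- sum(gh$weights * g(sqrt(2) * gh$nodes)) / sqrt(pi)
exact_val  <- integrate(function(u) g(u) * dnorm(u), -Inf, Inf)$value
c(approx_val, exact_val)                               # should be close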
The main call lqmm

I now illustrate the basic arguments of the main command lqmm. After loading the package,

R> library("lqmm")
Package lqmm (1.5) loaded.
Type citation("lqmm") on how to cite this package

the documentation for lqmm is accessed through help("lqmm"). The arguments of this function can also be displayed on the R console:

R> args(lqmm)
function (fixed, random, group, covariance = "pdDiag", tau = 0.5,
    nK = 7, type = "normal", rule = 1, data = sys.frame(sys.parent()),
    subset, weights, na.action = na.fail, control = list(),
    contrasts = NULL, fit = TRUE)

Let us start with a simple model for the conditional median of distance using the orthodontic growth data. As in Section 3.1, a random intercept is included in the linear predictor to account for the correlation of repeated observations within Subject. The first two arguments, fixed and random, are formula objects that define, respectively, the fixed and the random parts of the linear predictor $\mu_{ij}^{(\tau)}$, $i = 1, \dots, M$, while the clustering or grouping variable is defined in the argument group. All variables are taken from the optional data frame data. Finally, the quantile level $\tau$ is specified using the argument tau (by default, the median):

R> fit.lqmm <- lqmm(fixed = distance ~ age.c, random = ~1, group = Subject,
+    tau = 0.5, nK = 7, type = "normal", data = Orthodont.sub)
R> fit.lqmm

Call: lqmm(fixed = distance ~ age.c, random = ~1, group = Subject,
    tau = 0.5, nK = 7, type = "normal", data = Orthodont.sub)

Quantile 0.5

Before entering into the details of the estimation algorithms, let us read the output obtained from the above call to lqmm, which returns an R list of class 'lqmm'. The estimated fixed effects $\hat{\theta}_x = (22.94, 0.44)^{\top}$ show that the median distance at age 11 years in girls is 22.94 mm, while the median slope, or growth rate, is 0.44 mm per year. The random effects have an estimated variance of $\hat{\psi}_u^2 = 2.34$ mm². The ICC is given by $\hat{\psi}_u^2/(\hat{\psi}_u^2 + \hat{\psi}^2) = 2.34/(2.34 + 0.84^2) = 0.77$. One of course might ask where the number 0.84 comes from. Recall that a measure of the scale of the residuals obtained after solving (1) can be calculated as (Koenker 2005)

$$\hat{\sigma} = \frac{1}{M} \sum_{i=1}^{M} \rho_{\tau}\left(y_i - x_i^{\top}\hat{\beta}^{(\tau)}\right),$$

which is also the MLE of the scale parameter of an AL distribution (Yu and Zhang 2005). As seen previously, $\sigma$ is related to the variance of an AL, which provides a model-based 'residual variance' with which to derive the ICC above. Given $\hat{\sigma} = 0.2969$ and $\tau = 0.5$, one can calculate

R> sqrt(varAL(sigma = 0.2969, tau = 0.5))
[1] 0.83976

The estimated parameters are stored in the fitted object fit.lqmm as fit.lqmm$theta ($\theta$), fit.lqmm$theta_x ($\theta_x$), fit.lqmm$theta_z ($\theta_z$), and fit.lqmm$scale ($\sigma$). The generic function coefficients (or coef) can be used to obtain the fixed effects, while the function VarCorr extracts the matrix $\Psi$, as shown below. The use of VarCorr is recommended since, as mentioned before, $\Psi$ is parametrized in terms of $\theta_z$.

R> coef(fit.lqmm)

In general, the interpretation of the LQMM parameters is specific to the quantile being estimated. For example, the output below shows that the rate of growth in girls is 0.50 mm per year at the third quartile ($\tau = 0.75$). The random effects have an estimated variance of $\hat{\psi}_u^2 = 2.21$ mm², yielding an ICC equal to 0.71. The basic idea of LQMM is that the covariates might exert different effects at different quantiles of the outcome distribution, as assessed in standard quantile regression (Koenker and Bassett 1978), and that the degree of unobserved heterogeneity might also be characterized with $\tau$-specific variance parameters.

R> lqmm(fixed = distance ~ age.c, random = ~1, group = Subject,
+    tau = 0.75, nK = 7, type = "normal", data = Orthodont.sub)

Call: lqmm(fixed = distance ~ age.c, random = ~1, group = Subject,
    tau = 0.75, nK = 7, type = "normal", data = Orthodont.sub)

Quantile 0.75
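Beyond coef and VarCorr, fitted 'lqmm' objects can also be interrogated with ranef and predict methods; the sketch below shows how these would typically be used, though the exact argument names should be checked against help("predict.lqmm"):

# Predicted random effects (one intercept per girl) and fitted quantiles;
# level = 0 is assumed to give population predictions and level = 1 to add
# the subject-specific predicted random effects.
re <- ranef(fit.lqmm)                       # predicted random effects
yhat_pop  <- predict(fit.lqmm, level = 0)   # population-level median
yhat_subj <- predict(fit.lqmm, level = 1)   # subject-specific predictions
head(cbind(yhat_pop, yhat_subj))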
Let us now focus on the arguments type and covariance, which are relevant to the choice of the distribution of the random effects and, ultimately, to numerical integration. The number of quadrature nodes $K$ is specified with nK, which is set to 7 by default. Since guidance on how to choose $K$ in LQMMs is given by Geraci and Bottai (2014), I will not discuss this issue further here. However, a general recommendation is to start with low values of $K$ (say, $K < 10$), as the total size of the quadrature grid, $K^q$, grows exponentially with the number of random effects. The argument type takes the character string "normal" for Gaussian random effects (i.e., Gauss-Hermite quadrature) or "robust" for Laplace random effects (i.e., Gauss-Laguerre quadrature). However, while the Gauss-Hermite quadrature allows for all types of covariance matrix $\Psi$ implemented in lqmm, the Gauss-Laguerre quadrature can be used for uncorrelated random effects only (see Geraci and Bottai 2014 for further details). Table 1 gives a summary of the options currently available in the package. The types of covariance matrices specified in covariance are named as in nlme. The table also includes the number of unique parameters for each type of matrix, that is, the dimension $m$ of $\theta_z$. Another argument related to the choice of the quadrature rule is rule, which introduces integration on sparse grids (Heiss and Winschel 2008) and nested integration rules for Gaussian weights (Genz and Keister 1996), as implemented in the package SparseGrid (Heiss and Winschel 2008; Ypma 2013). This part of the code, which has not yet been extensively tested, aims at providing computational relief when the size of the quadrature grid is large. Further studies are needed to assess the performance of these approaches.

Estimation algorithms

4.1. Optimization control

In lqmm, there are currently two algorithms to minimize the negative integrated log-likelihood in Equation 6. The default is the gradient-search method described by Geraci and Bottai (2014), which makes use of Clarke's derivative of the objective function to find the path of steepest descent. The alternative is Nelder-Mead optimization, as implemented in optim, which belongs to the class of direct search methods.

Gradient-search optimization

The function lqmm.fit.gs executes the gradient-based estimation of $\theta$. It is a wrapper for the C function gradientSd_h and it performs pre- and post-estimation checks. The gradient-search algorithm works as follows: from the current parameter value, the algorithm searches the positive semi-line in the direction of the gradient for a new parameter value at which the likelihood is larger, and it stops when the change in the likelihood is less than a specified tolerance. At iteration $t$, let $s(\theta_t)$ denote Clarke's gradient of the negative approximate log-likelihood, written compactly as $\ell_{\mathrm{app}}(\theta_t, \sigma_0)$, given $\sigma_0$. In outline, the minimization steps for $\theta$ are:

1. Initialize $\theta_0$, $\delta_0$, $\sigma_0$ and set $t = 0$.
2. (i) If the objective decreases at the candidate $\theta_t - \delta_t\, s(\theta_t)$, accept it, setting $\theta_{t+1} = \theta_t - \delta_t\, s(\theta_t)$ and expanding the step, $\delta_{t+1} = b\,\delta_t$; stop if the decrease of the objective is smaller than $\omega_1$. (ii) Otherwise, keep $\theta_{t+1} = \theta_t$ and contract the step, $\delta_{t+1} = a\,\delta_t$. Set $t = t + 1$ and repeat.

A check on the convergence of the parameter $\theta$ can be introduced at Step (2.i) by verifying $\max |\delta_t\, s(\theta_t)| < \omega_2$, where $\omega_2 > 0$ controls the tolerance. By default, this check is not carried out in lqmm, but this can be changed by setting check_theta = TRUE in lqmmControl. The iterative loop for $\theta$ is the 'lower' level of the optimization. The 'upper' level of the algorithm consists in updating the scale parameter: once the algorithm finds a solution for $\theta$, given $\sigma_0$, the scale parameter is estimated residually to obtain $\sigma_1$. If the change in the parameter is sufficiently small, say $|\sigma_0 - \sigma_1| < \omega_3$, with $\omega_3 > 0$, the algorithm stops. Otherwise, another iterative loop for $\theta$ is initialized by setting $\sigma = \sigma_1$, and so forth until convergence is achieved for $\sigma$ as well. The starting values $\theta_0$ and $\sigma_0$ are specified in lqmm.fit.gs's arguments theta_0 and sigma_0, respectively. The starting values for $\theta_x$ and $\sigma$ are provided automatically by lqmm, either using ordinary least-squares estimates or, if startQR = TRUE, L1-norm estimates using lqm (see Section 7). The elements of theta_z are all set equal to one. The initial step $\delta_0 > 0$ is specified in the argument LP_step of lqmmControl. If not provided by the user, it is set equal to the standard deviation of the response variable.
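The accept-expand/reject-contract logic is easy to demonstrate on a toy problem. The sketch below is illustrative only (it is not the package's C implementation, and the step factors a = 0.5 and b = 1.5 are arbitrary choices): it minimizes a smooth one-dimensional function with the rule described in Step 2 above.

# Toy gradient search with step expansion (b) and contraction (a), mirroring
# the lower-loop logic described above on a smooth 1-d objective.
grad_search <- function(f, grad, theta0, delta0, a = 0.5, b = 1.5,
                        tol = 1e-8, max_iter = 1000) {
  theta <- theta0; delta <- delta0
  for (t in seq_len(max_iter)) {
    cand <- theta - delta * grad(theta)
    if (f(cand) < f(theta)) {               # accept: move and expand the step
      if (f(theta) - f(cand) < tol) return(cand)
      theta <- cand
      delta <- b * delta
    } else {                                # reject: contract the step
      delta <- a * delta
      if (delta < tol) return(theta)        # step has collapsed: converged
    }
  }
  theta
}

f    <- function(x) (x - 3)^2 + 1           # toy objective, minimum at x = 3
grad <- function(x) 2 * (x - 3)
grad_search(f, grad, theta0 = 0, delta0 = 1)  # approximately 3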
Additionally, it is possible to instruct the algorithm to reset δ to its initial value (reset_step = TRUE) at Step (2.ii) of the algorithm above, i.e., δ t+1 = δ 0 . The values for the tolerance parameters ω 1 , ω 2 , and ω 3 can be passed to the arguments LP_tol_ll, LP_tol_theta and UP_tol, respectively. The contraction step factor a ∈ (0, 1) and the expansion step factor b ≥ 1 are specified in the arguments beta and gamma, respectively. Finally, the maximum numbers of iterations for the lower and upper loops are given in LP_max_iter and UP_max_iter, respectively. It is possible to monitor the value of the objective function as the algorithm proceeds by setting verbose = TRUE. The output above shows that setting the starting value for theta_z (which in this case is the third element of theta) to 0.001 causes the algorithm to converge to a different optimum for θ z , in the vicinity of the starting point itself. Care, therefore, must be taken in defining the initial values. The output also reports the number of iterations at convergence for the upper loop (opt$upp_loop) and that for the last cycle of the lower loop (opt$low_loop). If the algorithm fails to converge, a warning will be produced. By way of example, let us set the maximum number of iterations for θ to a small value, say LP_max_iter = 10: R> fit.args$control$LP_max_iter <-10 R> do.call("lqmm.fit.gs", args = fit.args) The warning message will suggest using less restrictive convergence criteria, while reporting in parentheses those last used. Note that opt$low_loop is equal to −1, which is the value that the function errorHandling interprets as 'algorithm did not converge', as opposed to −2, which is the code for 'algorithm did not start'. Derivative-free optimization As mentioned before, the derivative-free algorithm is based on Nelder-Mead optimization routines. The function lqmm.fit.df is a wrapper for the command optim, which in turn minimizes the negative approximate log-likelihood as returned by the C function ll_h_R. It proceeds similarly to gradient-search by alternating a loop for θ and a step to update σ. The parameters LP_tol_ll, LP_max_iter and verbose in lqmmControl are passed to optim via the arguments abstol, maxit and trace, respectively. Two successive calls to fit.lqmm.df using the list fit.args are shown below: the first with theta_z left equal to 0.001, the second with theta_z changed back to 1. The maximum number of iterations is restored to 500. The summary and bootstrap functions Consider first a call to lmer using the orthodontic growth data, in which both random intercepts and slopes are specified in the linear predictor: To estimate a similar model for the quartiles of distance conditional on age.c (age centered on 11 years), the vector c(0.25, 0.5, 0.75) is passed to the argument tau. The resulting lqmm object will contain the estimates of the three quantile models, which are fitted sequentially and independently using the same formulas for fixed and random effects. The covariance model of the random effects "pdSymm" will have in this case m = 3 unique parameters (two variances, one for the intercept and one for the slope, and one intercept-slope covariance). 
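The lmer call referred to just above does not appear in the text as extracted; a plausible sketch, assuming the lme4 package and the centered age variable age.c used throughout, is:

R> library("lme4")
R> fit.lmer <- lmer(distance ~ age.c + (age.c | Subject), data = Orthodont.sub)

The lqmm call below then specifies the same fixed- and random-effects structure for the three quartiles.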
R> fit.lqmm <-lqmm(distance~age.c, random =~age.c, group = Subject, + covariance = "pdSymm", tau = c(0.25, 0.5, 0.75), nK = 7, + type = "normal", data = Orthodont.sub, + control = lqmmControl(method = "df")) The summary method for 'lqmm' objects produces a summary object that provides standard errors, (1 − α)% confidence intervals and p values for the coefficients and the scale parameter of each quantile model. Inference on parameters is based on block-bootstrap , which is currently the only method implemented in the package. The number of bootstrap replications (default 50) can be specified in the summary method using the additional argument R. The results of the likelihood ratio test and AIC values are also produced. R> system.time(print(fit.lqmm.s <-summary(fit.lqmm, R = 100, seed = 52))) Call: lqmm(fixed = distance~age.c, random =~age.c, group = Subject, covariance = "pdSymm", tau = c(0.25, 0.5, 0.75), nK = 7, type = "normal", data = Orthodont.sub, control = lqmmControl(method = "df")) tau = 0.25 It is interesting to note in the output above that the magnitude of the slope for age increases with increasing quantile level. The elapsed CPU time was about one minute to run 100 replications for three quantiles (approximately 0.04 seconds per sample). The 'non-convergence' warnings R> warnings() Warning messages: 1: In errorHandling(OPTIMIZATION$low_loop, "low", control$LP_max_iter, ... : Lower loop did not converge in: lqmm. Try increasing max number of iterations (500) or tolerance (1e-05) [...] may be of less concern if they occur during bootstrap. This may happen when the algorithm requires a certain number of data points to estimate the specified regression model but one or more bootstrap samples do not provide adequate information because of a particular configuration of their units. Clearly, this situation is more likely to happen when estimating tail quantiles with a number of parameters that is relatively high given the size and design of the dataset. In the first instance, one can assess the summary output by using less stringent optimization parameters. This is shown below by increasing the number of maximum iterations and the tolerance for the previous example. As a result, essentially the same estimates are obtained but with no warnings: R> fit.lqmm$control$LP_tol_ll <-1e-3 R> fit.lqmm$control$LP_max_iter <-1000 R> summary(fit.lqmm, R = 100, seed = 52) The function boot.lqmm executes the bootstrapping, producing an object of class 'boot.lqmm' that is stored within the summary output (e.g., fit.lqmm.s$B). The function boot.lqmm can also be applied directly to a fitted 'lqmm' object: R> fit.boot <-boot(fit.lqmm, R = 100, seed = 52, startQR = FALSE) Among the boot.lqmm's arguments, it is worth calling the attention on startQR. If set to TRUE, the fitted parameters in the 'lqmm' object are used as starting values for each bootstrap sample. On the one hand this may speed up the fitting process. However, it may also cause the algorithm to converge too often to a similar optimum, which would ultimately result in underestimated standard errors. Finally, all bootstrap estimates are stored in an array of dimension c(R, p + m, nt), where R represents the number of bootstrap replications, p + m the length of theta and nt the length of tau. 
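Given this array layout, bootstrap standard errors can be computed directly by taking standard deviations across replications. A minimal sketch, assuming the object fit.boot behaves as the array with the c(R, p + m, nt) layout just described:

R> se.boot <- apply(fit.boot, c(2, 3), sd)   # one column of standard errors per quantile
R> se.boot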
Estimated parameters for fixed (theta_x) and random (theta_z) effects can be extracted separately from a 'boot.lqmm' object using the generic function extractBoot, while a summary can be produced using the corresponding summary method:

R> extractBoot(fit.lqmm.s$B, "random")
R> extractBoot(fit.boot, "random")
R> summary(fit.lqmm.s$B)
R> summary(fit.boot)

Bootstrap estimates can also be used to perform inference on the difference between regression quantiles. A 95% bootstrap confidence interval for the interquartile regression coefficients can be computed as follows:

R> B <- extractBoot(fit.boot, "fixed")[, "age.c", c(1, 3)]
R> quantile(apply(B, 1, diff), probs = c(.025, 0.975))
2.5% 97.5%
-0.4141825 0.1731508

Prediction functions

Other important functions include ranef.lqmm and methods for predict and residuals for 'lqmm' objects. Here I introduce the former in detail and briefly describe the other two. The BLP in Equation 8 resembles that obtained from an LMM. It is therefore reasonable to expect that predicted random effects from a mean and a median mixed model should be comparable when the parameters' estimates are similar. The code to obtain predicted subject-specific intercepts and slopes of the mean and median models for the orthodontic growth data is shown below: The BLP of the random effects is implemented in the function ranef.lqmm, which takes a 'lqmm' object as the only argument. If more than one quantile model has been fitted, the output of ranef.lqmm will be a named list of predictions, with names given by tau. Prediction of the response can be carried out using the corresponding predict method. The argument level specifies whether predictions should be returned at the 'population' level (level = 0), that is ŷ^(τ) = X θ̂_x^(τ), or at the 'cluster' level (level = 1), that is ŷ^(τ) = X θ̂_x^(τ) + Z û^(τ), where û^(τ) is the BLP of the random effects. This is similar to the argument level in the predict method for 'lme' objects in package nlme (the reader is referred to Geraci and Bottai 2014 for a discussion on the interpretation of the regression coefficients at the population level in LQMM). Analogously, residuals can be calculated at one or the other level by using the corresponding residuals method as shown below:

R> predict(fit.lqmm, level = 0)
R> residuals(fit.lqmm, level = 0)

By way of example, Figure 3 shows the mean and quartile regression lines predicted for each of the 11 girls (that is, level = 1) in the orthodontic growth data (see Figure 1.14 in Pinheiro and Bates 2000, p. 38). While medians and means tally in magnitude and sign, for some subjects the distance between the first and third quartiles varies by age, which suggests slight heteroscedasticity in the conditional distribution y|u.

Modeling conditional quantiles

So far, individual lqmm commands have been described separately. In this section the focus is on performing a quantile analysis and discussing related modeling choices. The estimates from Model 1 suggest that at τ = 0.75 the population growth rate in boys is faster than the rate at lower quantiles. In other words, the effort required to stay on the same quantile level between any two time points is greater for those who rank higher in the outcome distribution than for those who rank lower in the distribution (clearly, this does not imply that a given subject who ranks, say, 75th at age 8 years will necessarily rank similarly at a later time).
However, the difference in slopes between the first and third quartiles amounts to a mere 0.66 mm over a 6-year time period, which may not be clinically relevant. Growth trajectories in girls start from lower levels and proceed at a slower pace than in boys at all quantiles. The estimated variance of the random effects, assumed to be the same for age, sex and age-sex interaction, is very small or null at τ = 0.25, but larger at higher quartiles. As compared to Models 2-4, Model 1 is the most parsimonious, but it provides the worst AIC values for τ = 0.25 and τ = 0.75. Model 2, which allows random effects to be correlated, yet imposes the same variance parameters, does not offer an improvement with respect to Model 1, except for τ = 0.25. Models 3 and 4, in contrast, show substantially lower AIC values, and these are similar between the two models at all quartiles. Note also that, as compared to the first two models, Models 3 and 4 provide a better AIC value for the mean. Furthermore, the AIC for Model 4 (429.52) is lower than that for Model 3 (450.60). This improvement is explained by the heteroscedastic component of the model. Without it, Model 4 provides an AIC value of 448.58. The estimates of the fixed slopes for age, in either boys or girls, are approximately constant across quartiles. The estimated variance parameters also indicate that the random intercepts vary considerably between subjects, slightly less for τ = 0.5 as compared to the other two quartiles and the mean. In contrast, there is little or no variation between subject-specific slopes. The conclusion that can be drawn based on the orthodontic growth data is that there is no evidence of unequal growth rates in terms of pituitary-pterygomaxillary fissure distance for either boys or girls ranking below and above the median. There seems to be, however, a reduced level of heterogeneity near the median. The analysis can be further extended to assess individual growth trajectories. Each panel in Figure 4 plots individual measurements for boys and girls at different ages, together with predicted marginal (that is, level = 0) quartiles using Model 4 (Table 2). Note that the slopes differ by sex but not by subject. This type of plot provides centile curves analogous to those typically used for screening purposes. For example, the growth paths of subjects 'F01' and 'F10' lie below the first quartile at all ages. A plot like the one in Figure 3, on the other hand, can be used for 'conditional' screening (Wei and He 2006). Note that in the conditional plot the measurements for subjects 'F01' and 'F10' rank within the first and third conditional quartiles or very close to them.

Table 2: Fitted models of pituitary-pterygomaxillary fissure distance conditional on age (baseline: 11 years) and sex (baseline: boys) for three quartiles and the mean (LMM). Standard errors are in parentheses.

Quantile regression for independent data

The package lqmm deals mainly with clustered data. However, some functions are also provided to estimate quantile regression models for independent data via maximization of the AL likelihood with a gradient-search algorithm (Bottai, Orsini, and Geraci 2014). See also the package quantreg (Koenker 2013) for estimation based on linear programming algorithms.
The main function lqm has the typical syntax of R regression fitting functions, with some arguments analogous to those in lqmm: Conclusion The lqmm package implements methods developed by Geraci and Bottai (2014) for conditional quantile estimation with clustered data, originally proposed by Geraci (2005). The R code is written in S3-style, while main fitting procedures are coded in C. Ongoing methodological work in LQMMs includes developing approaches with a reduced computational burden for both model's estimation and standard error calculation. Future extensions of the lqmm package will also provide functions for complex survey estimation and multiple imputation as proposed by Geraci (2013).
Assessing Convergence of the Markov Chain Monte Carlo Method in Multivariate Case

The formal convergence diagnosis of the Markov Chain Monte Carlo (MCMC) is made using univariate and multivariate criteria. In 1998, a multivariate extension of the univariate criterion of multiple sequences was proposed. However, due to some problems of that multivariate criterion, an alternative form of calculation is proposed here, in addition to two new alternative multivariate convergence criteria. In this study, two models were used, one related to a time series with two interventions and ARMA(2, 2) error and another related to a trivariate normal distribution, considering three different cases for the covariance matrix. In both cases, the Gibbs sampler and the proposed criteria to monitor the convergence were used. Results revealed the proposed criteria to be adequate, besides being easy to implement.

INTRODUCTION

Nowadays, Bayesian inference is a matter of extreme interest, despite having been developed long before frequentist statistics. In some cases, Bayesian inference requires Markov Chain Monte Carlo (MCMC) methods. The Gibbs sampler is one of the major classes of stochastic simulation schemes proposed, which is being used in many situations (Gamerman and Lopes, 2006). The quality of the simulation methods relies on good-quality uniform random number generators, an issue recently discussed by Luizi et al. (2010). However, a great difficulty is the empirical diagnosis of convergence to the stationary distribution. Several techniques in the literature help in identifying and monitoring convergence (Heidelberger and Welch, 1986; Geweke, 1992; Raftery and Lewis, 1992; Cowles and Carlin, 1996; Brooks and Roberts, 1998; Brooks and Giudici, 2000). Although Bayesian analysis is increasingly used, the results are often questioned because researchers do not use, or do not clearly report, the criteria implemented to check for convergence. Moreover, in cases of complicated models, Bayesian inference requires a great computational effort. This effort can be minimized by monitoring the chain convergence, thus avoiding unnecessary iterations. In the literature, there are univariate and multivariate criteria for monitoring the convergence of MCMC output to the stationary distribution, of which the Gelman and Rubin (1992) criterion is a univariate representative. This criterion uses parallel chains from different starting points, i.e., different arbitrary initial values, and formalizes the idea that the trajectories of the chains should be the same after convergence. The criterion is based on analysis of variance techniques, seeking to verify whether the dispersion within the chains is greater than that between the chains. By analogy, this process has been extended to the multivariate form by Brooks and Gelman (1998). When dealing with many parameters, the convergence of the distribution will only occur when all the parameters converge. This is a practical problem because monitoring every parameter separately becomes impractical when the number of parameters is large. The multivariate criterion is based on a single value for assessing the convergence of the MCMC output for all the parameters. The possible issues of the multivariate criteria and the cases where they are impossible to compute have been pointed out in Brooks and Gelman (1998).
Therefore, this study presents an alternative way of computing the Brooks and Gelman criterion, as well as two new alternative multivariate convergence criteria, and evaluates the performance of the three methods for convergence monitoring under two models. The study is outlined as follows: Section 2 presents the original convergence criterion (Brooks and Gelman, 1998), along with the research problem and motivation (2.1), an alternative computation (2.2) and two proposed criteria (2.3). Section 3 presents the methodology and the application of the criteria to the two models, and Section 4 compares them. Lastly, Section 5 presents the conclusions.

Convergence Criterion of Brooks and Gelman

The original criterion of Brooks and Gelman (1998) is an extension of the criterion of Gelman and Rubin (1992). According to Gelman and Rubin (1992), in many cases, the convergence of chains to the stationary distribution can be easily determined using multiple independent chains run in parallel, but cannot be diagnosed using the simulation output of any single chain. They proposed a method using multiple replications of chains to determine whether the stationary state was reached in the second half of each sample (chain). The method assumes that m chains have been simulated in parallel, each from a different starting point. After a starting point belonging to the parameter space of the posterior distribution has been obtained, the chains are run for 2n iterations, of which the first n are discarded to avoid the warm-up (burn-in) period and the influence of the initial values. The m chains yield m possible statistics. If those statistics are quite similar, it is an indication that convergence has been reached or is close. These authors also suggested comparing these statistics with those obtained from the union of the chains, i.e., the union of the nm values. The convergence indicator of Gelman and Rubin (1992) is named the Potential Scale Reduction Factor (PSRF). For the multivariate case, Brooks and Gelman (1998) proposed to replace the scalar estimators by p × p covariance matrices B and W (between and within chains, respectively) of the parameter vector θ, whose elements are θ_ji^(p), where θ_ji^(p) is the p-th element of the parameter vector of chain j at iteration i. For large dimension, one should estimate the covariance matrix of the posterior chains of the parameters by V = ((n − 1)/n) W + (1 + 1/m) B/n, where B and W are p-dimensional matrices estimated from the chains of the p parameters. Thus, researchers can monitor the convergence by using the covariance matrices V and W. The distance between V and W is summarized as a scalar measure that should be close to 1 when convergence is achieved. One way to do this is by seeking the maximum characteristic root λ of the product W⁻¹V, which is also the maximum of the PSRF over all linear projections of θ. The maximum is obtained by differentiating the ratio of quadratic forms λ = (aᵀVa)/(aᵀWa) with respect to the vector a, setting the derivative to zero, i.e., ∂λ/∂a = 0, and adopting the restriction aᵀWa = 1. Then (V − λW)a = 0. This homogeneous system of equations has a nontrivial solution if and only if |V − λW| = 0. The solution can be obtained by taking the eigenvalues of W⁻¹V, but in some cases it is not straightforward to obtain the inverse of W and the eigenvalues of the matrix W⁻¹V, because this new matrix is non-symmetric.
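To make the quantities involved concrete, the within- and between-chain matrices and the pooled estimate can be computed from an array of draws along the following lines. This is a sketch only; theta is a hypothetical n × p × m array holding n iterations of p parameters for m chains.

R> n <- dim(theta)[1]; p <- dim(theta)[2]; m <- dim(theta)[3]
R> chain.means <- apply(theta, c(2, 3), mean)       # p x m matrix of chain means
R> grand.mean <- rowMeans(chain.means)
R> W <- matrix(0, p, p); B <- matrix(0, p, p)
R> for (j in 1:m) {
+    W <- W + cov(theta[, , j]) / m                 # average within-chain covariance
+    d <- chain.means[, j] - grand.mean
+    B <- B + n * tcrossprod(d) / (m - 1)           # between-chain covariance
+  }
R> V <- (n - 1) / n * W + (1 + 1 / m) * B / n       # pooled posterior covariance estimate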
The convergence indicator of Brooks and Gelman (1998), named the Multivariate Potential Scale Reduction Factor (MPSRF), is based on the maximization of this ratio of quadratic forms and is given by Equation 1:

R^p = (n − 1)/n + ((m + 1)/m) λ₁,   (1)

where λ₁ is the maximum eigenvalue of W⁻¹B/n. Under equality of the chain means, λ₁ → 0, and R^p → 1 when n is large enough, where p is the number of parameters. According to Brooks and Gelman (1998), if W and B are both singular, the MPSRF cannot be calculated. This can occur if two or more parameters are correlated or one parameter is a linear combination of the others. If only W is singular, then this problem can be circumvented as the iterations go on. Another problem that can arise is the time elapsed to perform the inversion of W, because it could be large in many circumstances. The methods proposed in this study address such issues.

Alternative Computation for Maximizing Quadratic Forms

The maximization of quadratic forms is widely used in circumstances in which one wants a value that represents the direction of greatest variability of the system. Maximizing a quadratic form leads to a homogeneous system of equations given by (V − λ_i I)a_i = 0, while maximizing a ratio of quadratic forms yields a homogeneous system given by (V − λ_i W)a_i = 0, and obtaining the characteristic roots and characteristic vectors in the second case is not a trivial task. Therefore, this study proposes rotating the axes of the Cartesian system, seeking new directions of greatest variability, thereby reducing the system for the ratio of two quadratic forms to a system for a single quadratic form. Let the ratio of quadratic forms of the Brooks and Gelman criterion (1) be the one to be maximized. In the Bayesian literature, it is common to find the maximization carried out by obtaining the eigenvalues and eigenvectors of W⁻¹V. This study proposes an alternative that is described hereafter. For this, the matrix W is factored (Cholesky factor) as W = SSᵀ. Setting z as the linear transformation of the vector a by z = Sᵀa gives a = (S⁻¹)ᵀz because, in accordance with the properties of the Cholesky factor, SS⁻¹ = Sᵀ(S⁻¹)ᵀ = I. If the equation (V − λ_i W)a_i = 0 is premultiplied by S⁻¹, then S⁻¹W(S⁻¹)ᵀ = I and, setting H = S⁻¹V(S⁻¹)ᵀ, we obtain (H − λ_i I)z_i = 0; therefore, we arrive at the same solution as in the case of maximizing a single quadratic form, except that a = (S⁻¹)ᵀz must be recovered, because the eigenvectors z are changed by the non-singular transformation. The eigenvalues are invariant to the non-singular transformation performed. According to Brooks and Gelman (1998), the maximum eigenvalue λ₁ of H is R_p itself, where R_p is the p-dimensional PSRF given by (1). Thus, the transformation gives the system (H − λ_i I)z_i = 0, from which we can determine R_p and avoid the problems mentioned by Brooks and Gelman (1998), because there is no need to invert W to obtain the solution of the maximization of the ratio of two quadratic forms. The prerequisite is that W must be positive definite for the condition of existence of the Cholesky factor to be satisfied. Although the Cholesky factor can be computed for positive semi-definite matrices, it does not handle issues of null determinants. Therefore, two new criteria have been proposed, which are presented as follows. In the next section, it can be noted that the trace criterion handles null-determinant issues efficiently.
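The Cholesky-based computation just described translates directly into code; a sketch, reusing the hypothetical W and V computed above:

R> S <- t(chol(W))                       # chol() returns the upper factor, so W = S %*% t(S)
R> Sinv <- solve(S)                      # inverting a triangular factor, not W itself
R> H <- Sinv %*% V %*% t(Sinv)           # H = S^(-1) V (S^(-1))', symmetric
R> eig <- eigen(H, symmetric = TRUE)
R> Rp <- max(eig$values)                 # R_p, the maximum eigenvalue
R> a1 <- t(Sinv) %*% eig$vectors[, 1]    # recover the direction a = (S^(-1))' z

Because H is symmetric, eigen() can exploit symmetry, avoiding the numerical issues mentioned for the non-symmetric matrix W⁻¹V.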
Two Proposed Convergence Criteria

The original multivariate criterion (Brooks and Gelman, 1998) is theoretically considered efficient if there are high correlations between the parameters. The extreme case is the circumstance of perfect correlation between the parameters. In that case, the multivariate criterion would be equivalent to the univariate criterion of any of the p parameters, because monitoring one parameter reflects what happens in the other parameters. In the case of low correlations between the parameters, or between subgroups of them, this criterion may fail to monitor the convergence, because it considers only the direction of greatest variability in the p-dimensional hyperspace. In the extreme case of no correlation, this criterion will only monitor the parameter of greatest disturbance (variance) in the system. Such limitations invite further reflection on this method, and this reflection has led to two new multivariate alternatives. Both consider the variability in all directions of the hyperspace, obtained as linear orthogonal transformations (rotations) of the parameter axes. The first alternative proposed is based on replacing the scalars aᵀVa and aᵀWa from the original criterion by the new scalars trace(V) and trace(W), respectively, where W is the within-chain covariance matrix given by (1) and V is the estimated total covariance matrix given by (1). The second criterion proposed is the product of the nonzero eigenvalues of the matrix introduced above.

MATERIALS AND METHODS

For assessing the convergence criteria of the MCMC, two models were considered. For this, specific cases were simulated for the values of the parameters and the Gibbs sampler was used to generate values for the models. Samples of the joint posterior distribution and the marginal distributions of the parameters were obtained. The first model used was a time series model with two interventions and correlated errors. The fitted intervention model was an ARMA(2, 2) (Morettin and Toloi, 2008). The intervention model with an autoregressive moving average error of orders p and q, denoted by ARMA(p, q), is given by an equation in which a_t is the residual, considered white noise, i.e., a sequence of i.i.d. random variables ∼ N(0, τ⁻¹), where τ is the precision and τ⁻¹ = σ² is the variance; the intervention design matrix is an (n × w) matrix of binary variables in which each column is an indicator vector, w is the number of interventions, and βᵀ = [β₁ β₂ … β_w] is the vector of intervention parameters. This model was characterized by Diaz (1988). The Bayesian analysis with prior and full conditional posterior distributions for each parameter was developed by Milani (2000). The second model used was the trivariate normal distribution. The Gibbs sampler was used to generate Monte Carlo samples for the three variates. Furthermore, the property of the multivariate normal distribution stating that all subsets of X are also multivariate normally distributed was used. Partitioning X into X₁, of dimension q, and X₂, of dimension p − q, with the corresponding partitions of the mean vector, μ = (μ₁ᵀ, μ₂ᵀ)ᵀ, and of the covariance matrix,

Σ = [ Σ₁₁  Σ₁₂ ; Σ₂₁  Σ₂₂ ],

one can obtain X₁ ∼ N_q(μ₁, Σ₁₁) and X₂ ∼ N_(p−q)(μ₂, Σ₂₂) for q < p, where Σ₁₁ (q × q) and Σ₂₂ ((p − q) × (p − q)) are the covariance matrices of X₁ and X₂, respectively. Under such a partition, the conditional distribution of X₁|X₂ is given by:

X₁ | X₂ = x₂ ∼ N_q( μ₁ + Σ₁₂ Σ₂₂⁻¹ (x₂ − μ₂), Σ₁₁ − Σ₁₂ Σ₂₂⁻¹ Σ₂₁ ),   (2)

where μ₁ + Σ₁₂ Σ₂₂⁻¹ (x₂ − μ₂) is the conditional mean vector (Bock, 1985).
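A Gibbs sampler for the trivariate normal model can be written directly from the conditional distribution in (2). The sketch below updates one component at a time; the mean vector, covariance matrix and starting point are illustrative only.

R> Sigma <- diag(c(1, 10, 100))           # illustrative covariance matrix (null-correlation case)
R> mu <- c(0, 0, 0)
R> n.iter <- 5000
R> x <- c(50, 50, 50)                     # deliberately poor starting point
R> draws <- matrix(NA, n.iter, 3)
R> for (i in 1:n.iter) {
+    for (k in 1:3) {
+      S12 <- Sigma[k, -k, drop = FALSE]
+      S22.inv <- solve(Sigma[-k, -k])
+      cond.mean <- mu[k] + S12 %*% S22.inv %*% (x[-k] - mu[-k])
+      cond.var  <- Sigma[k, k] - S12 %*% S22.inv %*% t(S12)
+      x[k] <- rnorm(1, cond.mean, sqrt(cond.var))
+    }
+    draws[i, ] <- x
+  }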
The trivariate normal model was simulated by using the distribution given in (2), considering three cases of correlation between the variates with distinct variances (1, 10 and 100). The correlation matrices, ρ_i, and the resulting covariance matrices, Σ_i, that were adopted include, for the null-correlation case, ρ = I₃ (the 3 × 3 identity matrix) and Σ = diag(1, 10, 100). This distinction allows cases of high, medium and null correlation between the conditionals to be simulated, the last coinciding with the extreme case where the conditionals are the same as the marginals. The criterion of Brooks and Gelman (1998) was adopted to assess both the convergence of the seven parameters in the intervention model with ARMA(2, 2) error and that of the variates of the trivariate normal model. The criterion was computed iteratively along with the Gibbs sampler. The criterion was calculated for the first time at 20 iterations and then every two iterations, always considering a burn-in of 50%. For the time series model, 3,671 iterations were performed when considering m = 2 chains in parallel. For 3, 5 and 7 chains, 5,000 iterations were calculated and performed in parallel. For the trivariate normal model, 5,000 iterations were calculated for 2, 3, 5 and 7 chains in parallel. Additionally, the two new criteria proposed and described earlier were implemented.

Intervention Model with ARMA(2, 2) Error

The number of non-zero eigenvalues of the H matrix is r = min(m − 1; p), where m is the number of chains and p is the number of parameters. Hence, the number of non-zero eigenvalues is directly related to the number of chains used. Therefore, the cases with 2, 3, 5 and 7 chains in parallel were studied. In the subsequent section, only the evaluation of the performance of the convergence criteria is addressed; the estimation of the model parameters will not be discussed. The trace criterion presented somewhat smoother behavior and lower values than the others (m = 2). From Fig. 1, it can be observed that the determinant criterion agrees with the original Brooks and Gelman criterion, indicating convergence at R = 1.2 with 396 and 458 iterations, respectively, but shows a small fluctuation around R = 1.05 at 1,760 and 1,770 iterations, respectively. The trace criterion, in turn, characterizes the convergence at R = 1.2 with 384 iterations. The univariate criterion of Gelman and Rubin was also obtained, and convergence was reached at 380 iterations. It can be noted that the trace criterion detected the convergence as fast as the univariate criterion. The results for 7 chains are presented in Fig. 2. When considering more chains, there is greater precision in the estimation of the covariance matrices, thereby reducing the fluctuations. For 3 chains, convergence at R = 1.2 was achieved with 486 iterations for the trace criterion, 526 iterations for the determinant criterion and 524 iterations for the original criterion of Brooks and Gelman. When 5 chains were used in parallel, the convergence was detected with 352, 370 and 364 iterations, respectively. For 7 chains, the convergence was detected with 422, 880 and 850 iterations, respectively (Fig. 2). It can be observed that the trace criterion always characterized the convergence before the others. Another interesting point is that, for the trace criterion, the number of iterations that characterized the convergence did not vary much as the number of chains increased. The results for the determinant criterion were very close to the original Brooks and Gelman criterion in all circumstances.
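For reference, the monitoring schedule described above (first evaluation at iteration 20, then every two iterations, with a 50% burn-in) could be implemented along these lines, assuming a user-written function crit() that computes the chosen criterion from an iterations × parameters × chains array named chains (both names are hypothetical):

R> n.iter <- 5000
R> check.points <- seq(20, n.iter, by = 2)
R> R.monitor <- numeric(length(check.points))
R> for (j in seq_along(check.points)) {
+    t.now <- check.points[j]
+    keep <- (floor(t.now / 2) + 1):t.now            # discard the first half as burn-in
+    R.monitor[j] <- crit(chains[keep, , , drop = FALSE])
+  }
R> plot(check.points, R.monitor, type = "l"); abline(h = 1.2, lty = 2)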
The increase in the number of chains increased the number of eigenvalues. By evaluating the results of these eigenvalues, we can see that there is always an eigenvalue representing over 70% of the variation of the system due to the parameters of the intervention model being very much correlated. Trivariate Normal distribution By using a trivariate normal model and considering no correlation, there is, as commented earlier, only Monte Carlo simulation. Therefore, the process is expected to suffer only the influence of the arbitrary initial value. For this situation, there are results considering 2, 3, 5 and 7 chains in parallel. Figure 3 presents the criteria for 7 chains. The characterization of convergence used 2 chains for R = 1.2, when it occurred with 28 iterations for the original criterion of Brooks and Gelman, with 26 iterations for the determinant criterion and with less than 20 iterations to the trace criterion. When 7 chains were used, the convergence was characterized at very close values: 30, 30 and less than 20 iterations, respectively. In all cases, the convergence was achieved with fewer iterations using the trace criterion. It appears that the other two methods overestimated the convergence time because, as already reported, the chains genuinely originated from a Monte Carlo process and were serially uncorrelated. Except for the initial value, the sample was already in equilibrium from the second iteration. Similar to the time series model, one can notice that there are few differences when using a different number of chains. However, only for the trace criterion, such possible differences were not apparent, because this criterion showed the best results for the lack of a correlation situation. This can also be explained by the fact that the model has only three parameters and hence, there is no gain in increasing the number of chains from 3. Now, with regard to the intermediate correlation, a similar behavior of the criteria for 7 chains was observed, as shown in Fig. 4. The characterization of convergence when 2 chains were used for R = 1.2 occurred with 54 iterations for the original criterion of Brooks and Gelman, with 40 iterations for the determinant criterion and with 40 iterations for the trace criterion. The characterization of convergence when 7 chains were used showed little difference between the previous values, i.e., 40, 32 and less than 20 iterations, respectively. It must be noted that high correlation between the variables allows a single variable to explain the other two, i.e., the sampling process becomes slow due to the dependence of the full conditional, which exhibit the same behavior (Gamerman and Lopes, 2006). Therefore, it has an eigenvalue that explains virtually 100% of the variation. From Fig. 5 and 6, one can observe, as expected, the equality of the criteria even with the increase in the number of chains. The characterization of convergence when 2 chains were used for R = 1.2 occurred at 1,544 iterations for the original criterion of Brooks and Gelman, at 1,544 iterations for the determinant criterion and at 1,538 iterations for the trace criterion. Furthermore, the characterization of convergence when 7 chains were used showed the values of 1, 760, 1, 762 and 1,754 iterations, respectively. It is clear that in the presence of high correlation, the process mixes slowly, requiring more iterations. One way to accelerate the convergence would be reparametrization, but in some circumstances, this is not desired, requiring greater attention. 
DISCUSSION Multivariate methods are essential even in circumstances with few parameters, taking into account the variation and correlation in hyperspace. The criterion of Brooks and Gelman (1998) presented results consistent with the convergence of simulated models, but as a generalization of the criterion of Gelman and Rubin, it also has the feature of only monitoring of convergence rather than with the quality of the sample. The proposed alternative for computing such criterion was easily implemented and found to be numerically robust during the simulation. Two alternative criteria were proposed to cover the whole range of parameters in the chains. The trace criterion was easily implemented and showed consistent results and in some cases, was more consistent than the other competitors. In some circumstances, such as when one is interested in linear combinations of the parameters, the matrix of covariances within chain (W) will present linearly dependent columns. Its determinant is zero and hence it does not allow the use of the determinant criterion. The algorithms built in software such as the SAS and R allow obtaining the Cholesky factor of positive semi-definite matrices. In some cases, due to numerical problems, these algorithms result in negative eigenvalues, not allowing the computation of the criterion (Brooks and Gelman, 1998). As the trace criterion did not present any problem of this type of situation, it is considered more robust. Moreover, the case of lack of correlation allowed the conclusion that this criterion provides a more precise time of convergence, measured in the number of iterations. Thus, the two competing criteria tend to overestimate the number of iterations needed for convergence to equilibrium.
The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models

This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.

Introduction

Recent developments in artificial intelligence (AI) and machine learning, such as deep learning, have the potential to make medical decision-making more efficient and accurate. Deep learning technologies can improve how medical doctors gather and analyze patient data as a part of diagnostic procedures, prognoses and predictions, treatments, and prevention of disease (Becker, 2019; Ienca & Ignatiadis, 2020; Topol, 2019a, 2019b). However, applied artificial intelligence raises numerous ethical problems, such as the severe risk of error and bias (Ienca & Ignatiadis, 2020, p. 82; Marcus & Davis, 2019), lack of transparency (Müller, 2020), and disruption of accountability (De Laat, 2018). Describing the ethical challenges and concerns has so far been the main focus of the growing research literature in general AI ethics (Müller, 2020) and the ethics of medical AI (e.g., Char et al., 2018, 2020; Grote & Berens, 2019; McDougall, 2019; Vayena et al., 2018). Furthermore, if clinicians' decisions are to be substantially assisted, or even replaced, by AI and machine learning, shared decision-making, a central ethical ideal in medicine that protects patient autonomy by letting patients make informed choices about their healthcare in line with their values, is challenged. The opacity and dynamic nature of machine learning algorithms might undermine proper interaction between medical doctors and patients over the basis for a diagnosis and the choice of a treatment. This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making. Whether these premises should be reconfigured as AI develops is a separate ethical issue that we leave aside in this paper. The severe ethical concerns over applied AI and machine learning have led to numerous ethical initiatives from governments, industry, NGOs, and academia seeking to formulate ethical principles that ensure ethically acceptable and trustworthy AI (for overviews, see Jobin et al., 2019; Schiff et al., 2020). Recent policy documents such as the European Commission's Ethics Guidelines for Trustworthy AI and the research literature have often suggested that AI should be made ethically acceptable by increased collaboration between developers and other stakeholders (Char et al., 2020; Independent High-Level Expert Group on Artificial Intelligence, 2019; Reddy et al., 2020).
We agree that this is the preferred way to proceed. As AI technology moves forward, it has become urgent for relevant stakeholders to actively contribute to the translation of broadly acknowledged ethical principles throughout the process of design, implementation, and evaluation. Moreover, for transformative AI technology that reconfigures the conditions of medical practice and leads to abruption with shared normative ideals, such as shared decision-making, stakeholders include everyone affected by healthcare, now or in the future. Thus, in order to ensure AI design that serves patients best, a broad public debate beyond AI designers, bioethicists, and experts might be called upon (Baerøe & Gundersen, 2019;Baerøe et al., 2020). But how should this collaboration be structured and carried out? A deficit in the literature and policy documents on AI is that the main focus has so far been on the formulation of principles (Mittelstadt, 2019), and there has been less focus on how users and designers of AI can apply these principles to shape the use and development of AI (for an exception, see for instance Floridi, 2019). In particular, there have so far been few attempts at providing constructive proposals for the proper role of professionals, AI designers, and other stakeholders in applying these principles to the development and use of AI. In order to mend these deficits, the aim of this paper is to provide a systematic discussion of how medical doctors, AI designers, and other stakeholders might help realize ethically acceptable AI in medicine, based on four different models of integrating their input. We refer to these models as the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. Using the obligations of medical doctors derived from the shared decision-making ideal as a normative standard, we provide the basis for a more concrete discussion of why some approaches to ethical AI are insufficient. Role Obligations Derived from the Shared Decision-Making Ideal Rosalind McDougall has recently called for AI technologies in medicine that allow for and facilitate shared decision-making, and concludes that we "need greater dialogue between bioethicists and AI designers and experts to ensure that these technologies are designed in an ethical way in order to ultimately serve patients best" (McDougall, 2019, p. 159). Shared decision-making has been endorsed as the best way to promote trust and well-functioning relationships between doctors and patients since the middle of last century (Kaba & Sooriakumaran, 2007). While including AI within this relationship can improve the clinical outcome, patients can be deprived of their value judgments and treatment options if values become fixed within the AI design (McDougall, 2019). Moreover, introducing AI in the doctorpatient relation might have a negative impact on the trust dimension of the relation (Kerasidou, 2020). Thus, to avoid falling back into paternalistic care where patients are forced to trust blindly either the doctor, the AI system or both, deployment of AI should be aligned with protecting the ideal of shared decision-making. Shared decision-making can be understood and conceptualized in several ways, but some features are essential. Normative models for the physician-patient relationship highlight the responsibility of the professional to include patients in decision-making. 
This responsibility encompasses several conditions that must be satisfied for the relationship to be truly inclusive. The doctors must provide adequate information about risks and benefits of treatment options and make sure the patient understands. For example, the information must be based on explainable knowledge, presented in a way that promotes patient understanding. Also, the doctor must ensure that the patients' values and preferences are explored and taken into account when choosing treatment or preventive care (see, e.g., Emanuel & Emanuel, 1992;Veatch, 1972). For example, people might differ when trading off prospects of some months prolonged life versus burdensome side-effects from medication. From these general points, we derive four essential role obligations that professionals must accommodate to promote real interaction, communication, and shared discussion (patients must contribute to this process too (see, e.g., Eide & Baerøe, 2021), but to discuss these role obligations falls beyond the scope of this paper): Doctors must (a) understand the connection between patients' conditions and the need for potential interventions (this involves both technical and normative considerations) both in general and as translated into the particular contexts of individual patients to identify options, and (b) trust the source of evidence upon which the decisions are to be based (including the reasoning processes involved) to make sure the information is relevant and adequate. Moreover, for doctors to enable patients to participate in the process and share their relevant values and preferences, they must (c) understand all relevant information about benefits and harms and trade-offs between them, and (d) convey it to patients in a clear and accessible manner to ensure that the patients have understood the information and invite them to share their thoughts and deliberate together on the matter. The list is not necessarily exhaustive, but all conditions must, as a minimum, be in place for patients to be justified in trusting that the doctor is aiming for the best outcomes and involving them in doing so. Moreover, as professionals enjoying the discretionary power of being responsible for health care in society, doctors are responsible for ensuring that conditions (a) through (d) are satisfied. If AI systems are to mediate in this relationship between doctors and patients in an ethically acceptable way, they will have to be developed in ways that support shared decision-making. We will now discuss how four different models for medical AI fare in this regard (Table 1). Four Models for Medical AI In this section, we articulate and examine what we take to be four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. While all models carry significant normative insights, we argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decisionmaking. Before presenting each of the four models for ethically acceptable medical AI, let us briefly account for the role we attribute to them. First, these models do not primarily purport to represent the ways in which central actors in health care, industry, or academia matter-of-factly see the future ethics of AI in medicine. 
Rather, the purpose of articulating these four models is mainly to identify central approaches and be able to assess approaches and standards for ethically acceptable AI more clearly. That said, we do think that the four models capture central approaches to ethical AI, and similar ideas can be found tacitly in the literature or, at least, be inferred from what central authors have said about this issue. As we present them here, the models differ along three main dimensions: (1) the extent to which AI is viewed as a fundamentally transformative technology that calls for new principles, practice, regulation, or governance, (2) the required level of ethical attention among AI designers and users, and (3) the proper division of labor and interaction between AI designers and the medical doctors who use AI in medicine.

Table 1. The main components of the ideal of shared decision-making. For each component, the expanded description states what is required of doctors (these can be perceived as minimum standards), followed by an account of how AI can undermine the conditions for shared decision-making.

(a) Understanding the patient's condition. Doctors must understand the connection between patients' conditions and the need for potential interventions on a general, technical, and normative level and as translated into the particular contexts of individual patients. If the clinical outcome of AI is beyond what doctors are able to understand themselves, their clinical competence is undermined, and with it a crucial presupposition for why patients have reason to trust them in the first place (Kerasidou, 2020).

(b) Trust in evidence. Doctors must base their decisions on sources of evidence they trust to make sure the information is relevant and adequate. If doctors suggest treatments on the basis of AI sources of information they cannot fully account for, they force patients to place blind trust in their recommendations. This is just another version of paternalism.

(c) Due assessment of benefits and risks. Doctors must understand all relevant information on benefits and risks and the trade-offs between them. If doctors cannot fully understand how, and why, AI has reached an outcome, say, a classification of an x-ray, uncertainty regarding assessments of risks, benefits and trade-offs will follow. This, in turn, undermines patients' reasons to have confidence in their judgments and in their role as the expert in the relation.

(d) Accommodating the patient's understanding, communication, and deliberation. Doctors must convey assessments of risks and benefits to patients in a clear and accessible manner, ensure they have understood the information, and invite them to share their thoughts and deliberate together on the matter. If AI systems make it hard for doctors to understand how, and why, they reach their outcome, doctors cannot facilitate patients' understanding either. Rather, they will have to paternalistically require that the patient accept that the AI 'knows best'.

The Ordinary Evidence Model

The ordinary evidence model, as we construe it here, involves two central claims, namely (1) that the output generated by AI amounts to ordinary medical evidence, and (2) that the ethically acceptable use of AI requires that medical doctors (and other health professionals) apply it in a responsible manner using their judgment, medical expertise, and commitment to central principles of medical ethics. To take the former claim first, "medical evidence" is here understood in a rough-and-ready sense of factual claims based on observations, measurements, research literature, and systematic reviews that can be used to justify significant decisions concerning diagnosis, prognosis, treatment, and prevention of disease. Evidence must satisfy certain standards such as accuracy, reliability, and consistency (which tend to vary depending on the source of the evidence) that provide medical doctors with reasons for making a decision. According to the ordinary evidence model, the use of output from machine learning such as deep neural networks can be integrated into established ways of providing health care services. Medical doctors can apply the output given by the algorithm (e.g., about the probability of cancer in a patient's sample) the same way they treat other kinds of observations, measurements, and research results as evidence for medical decisions. From this point of view, there is nothing distinct or new about the use of AI in medicine, and its successful implementation is primarily conditioned on its efficiency, reliability, accuracy, and proven effect in clinical trials. The regulatory approval process can follow established procedures. Those AI methods that are applied by doctors in clinical practice must have been proven effective and accurate in clinical trials, approved by regulatory agencies, and introduced to medical practice via some form of training in the use of the method. Now, turning to the second claim of the ordinary evidence model that we mentioned above, the ethically acceptable use of AI in medicine requires that medical doctors (and other health professionals) apply AI in accordance with established standards of medical expertise, ethical guidelines, and laws and regulations. The ordinary evidence model thus fits well with widely held notions of the professional responsibility of medical doctors. A central view of professional decision-making is that medical doctors apply their expertise based on education, training, and clinical experience and act in accordance with their expertise and ethical principles to promote the health of the patients (see, for instance, Patel et al., 1999). Given that an algorithm has proven to be accurate and effective, the responsible use of AI in medicine is ensured by the medical doctors' expertise, judgments, and actions. In other words, algorithms are only assessed according to their epistemic accuracy and instrumental efficiency and not their standards of medical ethics, according to this model. The ordinary evidence model implies a clear division of labor between the designers of algorithms and medical doctors who are to apply the algorithms in clinical practice. After regulatory approval and successful clinical trials, the algorithms can be sold to health care providers around the world. This means that the design process of, say, an algorithm that detects eye disease or depression is not informed by the medical doctors who use the algorithm. The medical doctors need not have any proper knowledge about the AI methods they apply, the way in which they are designed, the choices made during that process, the expertise of the designers, or the context of the design. The design process might be culturally remote from the medical contexts in which it is applied. The designers need not have any broad medical expertise or familiarity with the doctors' code of conduct, national regulations, or patients.
That said, the ordinary evidence model does not preclude the participation of medical doctors in algorithmic design. The point is rather that the contribution of broad medical expertise is not required for proper design. There are several objections that can be raised to the ordinary evidence model. A first objection is that medical doctors cannot alone ensure the ethically acceptable use of medical AI. An obvious reason for this is that there might be unethical conduct in the process of research and development over which the medical doctors have no direct control; there may also be no due oversight over whether the sources of evidence should be trusted (cf. condition b) of the ideal of shared decision-making above). For instance, if the design of algorithms has violated the privacy of those patients whose data are used for training of algorithms, it will not suffice for doctors to apply them in a clinically adequate manner. However, the ordinary evidence model faces more fundamental objections. Indeed, most of the distinct ethical concerns over medical AI in the literature over such things as risk of error, discrimination due to algorithmic bias, problems with accountability, and lack of transparency can be formulated as objections to the ordinary evidence model. In short, the ordinary evidence model does not provide a reasonable normative response to the ethical challenges that applied AI raises for any professional practice. To see this, let us consider one of these concerns: an accountability problem that the use of AI can cause in medicine. The ordinary evidence model states that the ethical standards of medical AI are ensured by the responsible conduct of doctors according to the standards of medical expertise and ethical principles. This presupposes that medical doctors can be properly held to account when using AI in their clinical practice. There are two main ways in which medical AI poses a problem for accountability. One source of the accountability problem is structural and concerns the difficulties in ascertaining to whom praise and blame can be attributed. In the case of errors such as misdiagnosis based on false evidence generated by AI, it is not clear who should be blamed. Is it the medical doctor who made the diagnosis using AI, the institution in which the doctor works that decided to apply that method, the computer scientists who designed the algorithm, or the algorithm itself? Another problem for doctors' accountability stems from the opacity of AI. If medical doctors are unable to fully understand the processes behind the output of machine learning algorithms upon which they base their decisions in clinical practice-for instance, whether it involves relevant uncertainties, biases, and privacy threats-it will be difficult for them to give proper account to patients as essential to shared decisionmaking. Opacity challenges the professional role obligation of the ideal of shared decision-making involved in translating general medical knowledge into particular cases, i.e., as in (a) above, on inaccessible reasoning, since the relevant factors in the particular situation (which can become implicitly or explicitly known to the doctor through experiences) remain inaccessible if processed by machine learning. Moreover, opacity challenges the conditions of ensuring sources of evidence that can be trusted, making due assessment of risks and benefits, and even engaging in clear communication with patients-i.e., conditions (b) through (d)-too. 
In sum, the reliance on AI in medicine challenges and even disrupts the professional accountability upon which the ordinary evidence model rests. The Ethical Design Model Our discussion of the ordinary evidence model suggests that the use of AI in medicine raises ethical problems that cannot be solved by medical doctors' responsible use alone. A reasonable response to the fact that AI has ethically problematic consequences, then, is to improve the very process of AI design. According to the ethical design model, the ethical use of AI in medicine requires that algorithms be designed in an ethical way by encoding ethical values directly into them. Indeed, much work in AI research and machine ethics currently revolves around this approach (for an overview, see Misselhorn, 2018). In a recent book, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (2020), computer scientists Michael Kearns and Aaron Roth argue that the most promising approach to avoiding harm to people as a result of the use of machine learning algorithms is found in "the emerging science of designing social constraints directly into the algorithms, and the consequences and trade-offs that emerge" (p. 16). In their view, the "science of ethical design" avoids the problems of traditional approaches of new laws and regulations (such as the General Data Protection Regulation of the EU) "to enforce still-vague social values such as 'accountability' and 'interpretability' on algorithmic behavior" (p. 15). Their approach is rather understood as "the new science underlying algorithms that internalize precise definitions of things such as fairness and privacy - specified by humans - and make sure they are obeyed. Instead of people regulating and monitoring algorithms from the outside, the idea is to fix them from the inside" (pp. 16-17). To apply this principle to medicine, AI designers could, then, implement widely shared ethical principles in the machine learning algorithms, which would then ensure that their output is ethically acceptable in medical contexts. This model conveys some reasonable claims. Above all, this approach takes the distinct ethical challenges of applying machine learning algorithms seriously. The ethical design model is a reasonable response to the lack of ethical attention in AI design. Moreover, some of the ethical problems that are raised by AI in medicine require that AI designers play an active role. A case in point is respect for the privacy of patients who have generated the data or the securing of access to the data; this must be handled in the design process. If the design of algorithms disregards the right to privacy of patients, using algorithms so developed cannot be ethically defensible. Moreover, algorithmic bias is a recurring problem. Algorithmic bias can be caused by skewed data that result from the fact that some groups are underrepresented in the available data or that the designers are biased when selecting data (for a detailed taxonomy of algorithmic bias, see Danks & London, 2017; see also Suresh & Guttag, 2021). Algorithmic bias can lead to discrimination against certain social groups due to their gender, ethnicity, and sexual preference. While it is difficult to remove vectors that contain information that will yield such biases, AI designers have developed concrete techniques for approximating fairness in design (for examples, see Kearns & Roth, 2020, chapter 2).
Even though the ethical design model might generate more ethically attentive design, it faces several challenges. It entails a problematic division of labor between designers and medical doctors, which generates a set of problems pertaining to a lack of fit between design and medical practice. Let us point to some problems that the strict division of labor between AI designers and medical doctors can lead to. First, according to the strong interpretation of the ethical design model, ethical design is sufficient for the ethically acceptable use of AI. It implies that AI can be ethically acceptable independently of how it is used (cf. the concept of "the ethical algorithm"). This means that in so far as values such as privacy and fairness are encoded directly into the algorithm, ethically acceptable medical practice is ensured. Indeed, both Kearns and Roth and the field of machine ethics focus on algorithms as ethical subjects with the capability of making ethical choices to avoid harm. While this view does not necessarily mean that the moral judgment of medical doctors will become superfluous, it goes as far as to imply a technocratic view in which ethical choices are made by experts (see Jasanoff, 2016 for an interesting discussion of technocracy in the context of technological innovation). Moreover, the choices made in the design process about which values to encode into the algorithms might make it difficult for medical doctors to bypass or overturn these choices. For instance, if the algorithms are designed to detect disease in its early stages or in less clear cases, this might lead to an overly high incidence of false positives that cannot be counteracted and corrected by medical doctors. In our view, while some ethical problems can be addressed in the design process, this cannot alone make AI ethically acceptable in medicine. By applying algorithms that have undergone the "science of algorithmic design," practitioners might be led to think that further ethical reasoning and deliberation are superfluous. By implication, the goal of designing ethical algorithms removes a central part of ethically relevant reasoning among doctors, for instance about such things as the distribution of false positives and false negatives as part of the harm assessments, whether the observed symptoms warrant further examination due to uncertainty, whether the patient should receive this or that treatment, or whether other factors in the patients' lives besides the analyzed data are relevant to further treatment. It could create the misconception that once the ethical design of AI is in place, then the implementation of that technology can proceed seamlessly in a responsible manner. If so, ethical design might entail the outsourcing of ethical deliberations by medical practice to AI designers; in such a case, the ethical structural condition (c) of due assessment of risks and benefits and consequently (d) of accommodating patients' understanding, communication, and deliberation may not be satisfied. When this happens, it undermines the conditions for realizing the shared decision-making ideal. Second, ethical design involves the formalization of the ethical values encoded into the algorithms. Therefore, to the extent that algorithms can be made ethically acceptable, values such as privacy, fairness, veracity, and accuracy must be formalized. Such formalization raises several difficult problems, some of which Kearns and Roth are aware of, but they do not offer a promising approach for solving them.
In our view, the ethical design model downplays the need for specification and translation of ethical values in concrete cases. While most people are committed to fairness in public health care, it is open to debate exactly how fairness should be understood (this goes for both substantive versions of distributive fairness and procedural fairness) and how it applies to concrete cases where medical doctors make crucial decisions concerning their patients. Values and principles such as veracity, accuracy, transparency, and accountability are partly constituted by humans interpreting them and balancing them in concrete cases. Given the fact that values must be interpreted to make sense in a concrete case, it seems misguided to claim that ethical designs made at a distance from the context of application are comprehensively justified. The ethical structural condition (a), which underscores the doctor's ability to translate general knowledge of normative concerns into the specific circumstances of individual patients, is not satisfied. Thus, shared decision-making is undermined by the formalization of ethical values encoded into algorithms. We draw two important lessons from our discussion of the ordinary evidence model and the ethical design model. First, the discussion so far points to the need for ethical deliberation in design and use. While it is reasonable for designers to take ethical considerations carefully into account, this should not exempt doctors from critically assessing the design process and the algorithms' appropriateness in use. Second, in both models, the division of labor between algorithmic designers and medical doctors who apply the algorithms becomes too strict. Most important, in addition to being procedurally legitimate in terms of respecting privacy and enabling professionals to give account, medical AI must be substantially informed by the code of conduct of health professionals who have direct experience with ethical problems. The collaborative model provides us with reasonable ways to deal with these two problems. The Collaborative Model The collaborative model states that collaboration and mutual engagement between medical doctors and AI designers are required in order to align algorithms with medical expertise, bioethics, and medical ethics. Indeed, this model aims to bridge the gaps between AI designers and medical doctors in terms of their expertise and their commitment to ethical principles. The collaboration model comprises two main claims. First, it states that there must be collaboration between designers and doctors, as well as expertise in ethics, in both the design and use of medical AI. Second, AI designers, bioethicists, and medical doctors must have the capacity to communicate meaningfully about the way algorithms work, their limitations, and the algorithmic risks that arise in clinical decision-making. In order to clarify the collaborative model, we shall here explicate the nature and scope of collaboration. Moreover, we shall argue that fruitful collaboration is conditioned on a set of competencies. Let us here suggest three ways in which such collaborations could be realized. First, medical doctors can be an active part of the design of medical AI. In fact, there seems to be a de facto commitment to such collaboration in ongoing research and development in medical AI. Both in academic research institutions and industry, the design process is often informed by medical expertise. 
Doctors and designers can collaborate in the initial stage of research and development by identifying what medical specialties or tasks might benefit from AI assistance. Existing studies examine the accuracy and efficiency of deep learning by testing how well algorithms perform a specific task, such as identifying cancer in pictures, in comparison to the performance of clinicians, without examining whether the clinical use of the algorithms leads to better health care services and improved health for patients (Topol, 2019a). Since there are few studies examining the effect on clinical practice, AI designers and doctors could set up proper clinical trials and studies. Based on their medical expertise and experience of communicating with patients about patients' needs and values, medical doctors can, together with bioethicists with training in ethical theory and analytical discrimination of normative concerns, inform algorithmic designers about what parts of decision-making require patient involvement and individual trade-offs. Medical doctors could communicate to designers what levels of accuracy are needed for specific tasks and the trade-offs between principles and standards in real-time decision-making, for instance between accuracy and urgency in emergency situations. Moreover, medical doctors could play a vital role in properly calibrating the algorithms' rates of false positives and false negatives. Given the fact that some algorithms have proven to have unacceptably high rates of false positives, doctors could provide useful input to the design process about the importance of reducing the algorithm's "eagerness" to detect signs of disease in some cases. Second, AI designers can engage with medical doctors to better understand and interpret AI output in a reasonable manner in clinical practice. As we have shown above, given the lack of understanding of how deep learning algorithms work, these limitations should be taken into consideration when applying algorithms in decisionmaking. When deep learning algorithms provide an analysis of data, for instance by classifying a patient's data as an indication of pneumonia, medical doctors must be able to properly interpret the algorithm's output-for instance, what it means that the algorithm states that there is a 70% probability of the patient having pneumonia, the algorithm's distribution of false positives and false negatives, and the reliability of the algorithms in the face of outliers and novel phenomena. Regarding the second claim of the collaboration model, fruitful collaboration between designers and doctors is conditioned on their capacity to communicate across their domain of expertise. On the one hand, algorithmic designers must be aware of the ethical aspects of the algorithms they develop and be well informed about medical expertise and the ethical guidelines that regulate medical practice. On the other hand, medical doctors who use AI in medicine must be well informed about how the algorithms work and the uncertainties and limitations of AI output. They should also be able to explain to patients how an AI analysis has been performed and how it has informed their decision about a diagnosis or treatment plan. In sum, medical doctors must play a role in the design process in order to enhance both the medical literacy of AI developers and the AI literacy of medical doctors (for a relevant discussion of the notion of expert literacy, see Eriksen, 2020). 
Finally, a third way in which collaboration between designers and other experts can be realized is through evaluating the impact of AI on clinical practice. If AI is going to inform crucial parts of medical decision-making in the future, it is vital that medical doctors who apply AI as a part of their clinical practice evaluate the impact of AI on decision-making and share their assessment with colleagues and AI designers. We shall not go into detail here about how such evaluation should be performed and governed. Our main point is merely that there must be established avenues for criticism, objections, and suggestions from clinicians and bioethicists to designers. Now, how is the collaborative model able to avoid some of the ethical problems that we have discussed so far? Collaboration and a mutual capacity for communication between AI designers, bioethicists, and medical doctors avoid some of the problems that stem from viewing AI outputs as standard medical evidence. If medical doctors understand the way the output is generated and its reliability, both the input from AI and doctors' understanding and assessments will become more transparent, alleviating part of the accountability problem and satisfying conditions (a) through (c) for shared decision-making. This applies particularly to the ability of medical doctors to give proper accounts to patients in the case of error by explaining why AI is being used, how it works, its known limitations, and its possible causes of errors. A crucial issue here concerns how medical doctors can gain the required competence in AI. While we cannot go into detail on this issue here, the use of AI in medical decision-making should be taken into consideration by higher education institutions when designing the curricula for medical doctors and other health care professionals. Moreover, since this technology is new and evolving, it seems reasonable for universities, public health institutions, industry, and medical associations to collaborate on developing courses for medical doctors (for a very interesting discussion of this issue, see Quinn et al., 2021). The medical literacy of AI developers and the AI literacy of medical doctors can enable doctors to promote shared decision-making's emphasis on sources of evidence deemed to be trusted, due harm assessment, and proper communication. While we find the collaborative model promising, we will now briefly point to the need for a fourth model in light of the high risk that AI in healthcare reduces inter-human encounters and communication. The Public Deliberation Model The use of AI involves what we call meta-ethical risks that arise from a lack of inter-human encounters, experiences with human vulnerability, and deliberation. By "meta-ethical risks," we refer here to circumstances that may pose a challenge to conditions for practical ethics within a human-intelligence-centered worldview (as we are familiar with today). To the extent that AI technologies lead to a decrease in required communication and exchanges of information between doctors and patients, we face the risk of undermining the ideal of shared decision-making. Moreover, while leaving some of the communicative examination work to algorithms may produce more effective health care, it can also undermine professionals' engagement with patients' social, emotional, and existential challenges (Baerøe & Gundersen, 2019).
Compassion, empathy, solidarity, and recognition of injustice may arise in such encounters and in turn influence motivation, actions, practical ethics, and political ideology. Such meta-ethical conditions for ethics in practice may be fundamentally changed if the social conditions for interaction and shared decision-making in healthcare are increasingly replaced by AI technology. In our view, both the threats of undermining the ethical conditions for shared decision-making and meta-ethical conditions driving ethics in medicine in general "as we know it" might be considered unavoidable risks of applying AI in medicine. However, deciding on designing and employing technology with such disruptive, transformative impacts should not be left to AI designers, bioethicists, and medical expertise alone; it calls for broad public debate about whether the costs and risk of AI are outweighed by its potential benefits for patients and society at large. The public deliberation model involves more stakeholders than AI designers, bioethicists, and medical experts. It includes policymakers and the general public, too. When agents in the cooperation model screen for and identify the fundamentally transformative impact of a new AI technology, the public deliberation model is required. It is beyond the scope of this paper to discuss the details of how this deliberation should be organized. We will therefore simply point out that a reasonable, general expectation is that it should be carried out in correspondence with conditions of democratic governance. This view is also compatible with the EU report on how trustworthy AI requires public deliberation (Independent High-Level Expert Group on Artificial Intelligence, 2019), but much more work is required to protect the ethics of-and within-such broad, public, shared decision-making processes (Table 2). Conclusion In this article, we have argued that the ordinary evidence model and the ethical design model downplay the fact that AI involves ethical value judgments in both design and application. The clear division of labor between designers and doctors, which both models imply, has problematic consequences in terms of not aligning AI with medical expertise and medical ethics and by not enabling medical doctors to properly understand the way in which the algorithm is designed and its limitations. The collaborative model alleviates these problems by emphasizing the need for including medical doctors and bioethicists in algorithmic design and improving their AI literacy in the context of application. However, this does not mean that the collaborative model solves all the central ethical challenges raised by the use of AI in medicine. AI technology that can increase effectiveness and precision but may disrupt conditions for human-intelligence-centered ethics and undermine ethical ideals, like shared decision-making, calls for broader deliberation over value trade-offs involved in the development and use of AI in health. The public deliberation model captures the broader social processes of including the public beyond AI designers, medical experts, and bioethicists. Further work is clearly required to carve out the distinct roles of AI designers, medical and ethical experts, policymakers, and the general public in developing AI for health. Our contribution in this paper is both systematic and substantial. 
In regard to systematicity, we provide an account of the central ways in which medical doctors, bioethicists, and designers can make AI ethically acceptable that we find to be lacking in the current AI ethics literature more generally and in the case of medical AI. By articulating four models, we enable a more systematic discussion of how different kinds of ethical concerns can be approached in medicine by central actors. In regard to our substantial contribution, our discussions of the distinct models purport to provide the future of medical AI not only with principles, but also a proposal (Estlund, 2019, p. 10) for how central actors can contribute to making medical AI ethically acceptable by interaction, mutual engagement, and their competencies. Funding Open access funding provided by OsloMet - Oslo Metropolitan University. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,314.2
2022-04-01T00:00:00.000
[ "Medicine", "Computer Science", "Philosophy" ]
Planning Minimum Interurban Fast Charging Infrastructure for Electric Vehicles: Methodology and Application to Spain The goal of the research is to assess the minimum requirement of fast charging infrastructure to allow country-wide interurban electric vehicle (EV) mobility. Charging times comparable to fueling times in conventional internal combustion vehicles are nowadays feasible, given the current availability of fast charging technologies. The main contribution of this paper is the analysis of the planning method and the investment requirements for the necessary infrastructure, including the definition of the Maximum Distance between Fast Charge (MDFC) and the Basic Highway Charging Infrastructure (BHCI) concepts. According to the calculations, the distance between stations will be region-dependent, influenced primarily by weather conditions. The study considers that the initial investment should be sufficient to promote EV adoption, proposing an initial state-financed public infrastructure; once the adoption rate for EVs increases, additional infrastructure will likely be developed through private investment. The Spanish network of state highways is used as a case study to demonstrate the methodology and calculate the investment required. Further, the results are discussed and quantitatively compared to other incentives and policies supporting EV technology adoption in the light-vehicle sector. Introduction Road transport is one of the main contributors to CO 2 emissions and other pollutants; hence, reducing the use of fossil fuels is a major sustainability objective for the European Union (EU) [1]. The shift to EV technology represents an opportunity for sustainable transport to achieve the EU goals, due to higher energy efficiency and renewable energy content. Still, some drawbacks hinder this transition, such as charging limitations and the so-called "range anxiety". Range anxiety can be defined as the fear of not being able to reach a charging point for an EV. In this regard, the lack of available fast charging (level III, direct current (DC) or alternating current (AC) charge) stations along highways influences the use of electric vehicles (EVs), up to now restricted mainly to in-city mobility. However, fast charging technology, together with battery swapping, already available on the market, allows EV interurban mobility with charging times comparable to refueling times for internal combustion vehicles. Additionally, although range anxiety may have other psychological reasons [2], the availability of fast chargers or battery exchange for interurban mobility is essential to prevent the appearance of this fear, since slow charging would extend the travel times beyond what is acceptable for any user, with more time spent at the charging station than on the travel itself. Thus, according to the technology currently available, fast charging allows a 15-20 min recharge for approximately 200 km of range, while slow charging at 3 kW would take approximately 3 h for 100 km of range. Hence, with fast charging, a 500 km trip could imply two charging stops of 15-20 min each, compared to an internal combustion vehicle stopping only once for 10 min.
Slow charging can be done by using conventional electric sockets; therefore, to a large extent, the availability of a basic charging infrastructure already exists. However, the power requirements could imply the necessary adaptation of existing low voltage infrastructure. The use of specific charging connectors, additionally, makes it necessary to adapt the normal sockets/plugs for EV charging. Furthermore, in order to complement the in-house charging points, public charging stations and parking charging points would have to be increased [3]. On the other hand, little investment in the necessary charging infrastructure has been detected in the private sector; this fact seems to be due to the difficulty of estimating the rate of EV adoption and its subsequent demand growth. Nevertheless, EV sales are also influenced by the availability of charging infrastructures, since the development of these facilities accelerates EV adoption [4]. Thus, the dilemma of charging infrastructure versus EV, in terms of what to promote first, is part of a wider technological controversy [5,6], with social and cultural implications [7]. Experience with the operation of EVs has been analyzed in various studies, as is the case of German households [8], concluding that the normal commute range does not require other than private home slow charging devices, although the development of fast charging points would increase the use of EVs, also affecting their usage patterns. Additionally, research conducted by the Tokyo Electric Power Company in Tokyo, Japan, in 2007-2008 [9] has revealed an increase in the distance driven by EV users if fast chargers were available. It has also been argued that public fast charging infrastructure had a usage encouragement effect in the early stages of EV deployment [10]. Regarding the economics of EV charging, Schroeder and Traber [11] concluded that fast charging is unlikely to be profitable without increased EV adoption, and that the main risks for the investment in fast charging infrastructures were the EV adoption rates, local use rates, and competition between public and private charging devices. The results of the research suggest that the initial high-risk investment in public fast chargers could be incentivized or directly state-owned. Additionally, in the case of interurban, highway fast chargers, the competition between private and public chargers is not present; therefore, the key to profitability will be the EV adoption rates and the usage patterns for interurban mobility. Policy support is analyzed in depth in the specialized literature [12], as is the case of the assessment of the public support for electric mobility in Japan. The relevance of standards has been highlighted for infrastructure deployment in the US [13], whereas San Román et al. [14] frame the regulatory and associated business models for stakeholders involved in the electric mobility sector. Developments in batteries, carbon reduction policy, new business propositions, and the image of EVs have also been identified as a pathway to vehicle electrification [6]. Other policy implications in regard to electric mobility, such as the effects on the energy market and emissions, have been considered for the case of Portugal [15,16]. Likewise, relevant research output exists on the synergies among EV charging, renewable energy integration and power system management [17][18][19].
Although the planning of the EV charging point infrastructure has not received much attention in practice, relevant studies have been performed in this field. For instance, in order to plan strategic public charging stations, an agent-based tool built on user patterns was developed [20]. While focused on city mobility, Liu [3] established a charging infrastructure for fast and slow charging stations for the city of Beijing. Another perspective, simulating planned charging posts versus free entry, has been researched for the city of Barcelona [21]. Results showed that the planned charging station equilibrium makes financially sound operation difficult and may discourage necessary infrastructure. The proposal of this paper is to obtain a minimized EV charging infrastructure in the planning period that is sufficient to accelerate the subsequent free entry of private investors. Optimal placement of charging stations has already been considered with multivariable optimization methods, to minimize the total travelled distance between centralized charging stations. Recent models focus on large-scale integration, with distances between charging stations below 30 km [22]. A direct reference to the distance between fast chargers on highways is found in the report by the Zero Emission Resource Organization (ZERO) [23] on Norway's EV infrastructure; fast chargers "would probably have to be available every 40-60 km for maximum effect", where factors influencing range are also detailed. This paper scrutinizes that statement. In summary, previous literature has not resolved the definition or methodology to calculate the minimum highway fast charge infrastructure, although real countrywide decisions have been initiated to build this kind of infrastructure around the world. The work presented here focuses on the deployment of highway fast charging, by examining in detail the distance between stations, and proposes a methodology for planning infrastructure deployment. The study has focused on passenger vehicles, as we consider there to be sufficient availability of models, and taking into account that public transportation high-load models are currently not suitable for interurban transportation, due to weight and range limitations. However, the considered fast-charging stations are valid for any vehicle complying with the charging standards. Although the issues related to battery performance and life cycle, as analyzed in other research, or the grid implications and grid services of fast charging, are not considered here in depth [24,25], it is interesting to mention that there are potential services to be offered also by fast charging points, such as power quality services that only slightly discharge the battery [26]. Other services and applications, as highlighted in the available research, may not be feasible for highway fast charging, due to the need of the user to charge as fast as possible and continue the highway trip [25], unless the charging stations are paired with energy storage. The article is divided into five sections. The introduction is followed by the methodology description, the Maximum Distance between Fast Charge (MDFC) and the Basic Highway Charging Infrastructure (BHCI); the latter includes the case study of Spain. The article closes with a conclusion, including policy recommendations.
Methodology The goal of the research is to assess the minimum requirement of fast charging infrastructure to allow country-wide highway EV mobility. In-city mobility is already possible given home charging possibilities, but country-wide mobility would need a minimum infrastructure of fast charging posts to further promote the use of EVs for longer journeys. The basic figure for the infrastructure planning is defined as the Maximum Distance between Fast Charge (MDFC), and is the number of km between fast charging stations that would allow every EV to reach the station, including a margin of security. This concept, developed as part of the research, is considered essential to plan interurban EV infrastructure. The calculation of the distance between two consecutive fast charges along the highway infrastructure has been identified as a priority for the infrastructure, since this value makes the optimal position of the charging stations easily locatable. Additionally, compared to other methods focusing on the number of charging stations per vehicle, calculating the distance with the proposed method is more practical, as it better enables the location of the charging stations, as well as the calculation of the necessary investment. To analyze range and range evolution, the available EVs in the market are selected, discarding city vehicles with short ranges or vehicles with no fast charging capability. As the infrastructure deployment may take several years, the distance and infrastructure method takes into account progressive range extension, or is calculated based on the minimum range in the target vehicle category. Complementary inputs for the MDFC calculation are the weather conditions and flexibility margins, both factors being country-specific and explained in Section 3. The strategic location of charging posts utilizes the maximum distance value and the main highway infrastructure, complemented with the most transited areas. The method is presented as the assessment of a BHCI, defined as the minimum number of charging points needed on a country's highways to allow country-wide transportation with any single EV. The process followed is simplified in Figure 1. Input Data The main input data for calculating the MDFC consist of the information obtained from EV manufacturers. The available and planned EV models provided with fast charging capability are shown in Table 1. Any of these vehicles could be used for transportation distances above their range, given the availability of fast charging stations along the route. Other variables influencing the calculation of the MDFC are the weather conditions, the road and traffic status, and the driving and charging behavioural patterns in the selected area. The MDFC will therefore be territory-dependent, as the input variables mentioned vary with the conditions of the region or area where the distance is to be calculated. Classification In order to simplify the modeling, focusing on the relevant vehicles, we can classify the available EVs in three categories. First, the high-range Category I comprises exclusive models with limited serial production and a range equal to or above 300 km. Second, the fast-charge-enabled city cars belong to Category II; these have a range of 100 km or less and, although capable of long interurban routes, will probably not be used for journeys requiring more than two charging stops (roughly 300 km or more). Third and last, the main group of EV models belongs to Category III, with the highest number of serially produced units and user adoption.
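To make this classification rule explicit, the following short Python sketch (an illustration, not part of the original paper) assigns an EV model to one of the three categories using the thresholds stated above: fast-charge capability is required, ranges of 300 km or more fall into Category I, fast-charge city cars of 100 km or less into Category II, and the remaining fast-charge models into Category III. The first three ranges below are values quoted elsewhere in the text; the last two entries are hypothetical placeholders.

```python
# Illustrative sketch: EV category assignment based on the thresholds in the text.
# Category I: range >= 300 km (exclusive, limited-production models)
# Category II: fast-charge city cars with range <= 100 km
# Category III: remaining fast-charge-enabled models (basis for the MDFC)

def categorize(range_km, fast_charge=True):
    """Return the category of an EV model, or None if it is excluded."""
    if not fast_charge:
        return None          # vehicles without fast charging are discarded
    if range_km >= 300:
        return "I"
    if range_km <= 100:
        return "II"
    return "III"

# Ranges quoted in the text; the last two entries are hypothetical examples.
models = {
    "Chevrolet Spark": 132,
    "MiEV": 160,
    "Nissan Leaf": 200,
    "hypothetical premium model": 400,
    "hypothetical city car": 85,
}

cat_iii = [r for r in models.values() if categorize(r) == "III"]
print("Category III minimum range:", min(cat_iii), "km")          # 132 km for this subset
print("Category III mean range: %.1f km" % (sum(cat_iii) / len(cat_iii)))
```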
Figure 2 shows the different categories proposed, where the size of the circles represents the manufacturing scale of the models. The MDFC will be calculated from the Category III models, the most serially produced, highway- and fast-charge-enabled vehicles. MDFC and Range Trend Given the selected categories, the evolution of the minimum range and the mean range can be analyzed. As has been proposed, the MDFC is based on the Category III vehicles, where the shortest range in the category is 123 km and the average is 182 km. The analysis of the category evolution shows that the mean range has not grown with time, as models planned for 2014 have ranges closer to 150 km than to 200 km. Additionally, Figure 3 represents the linear trend in range for the vehicles included in Category III, according to manufacturers' production plans for 2014 and 2016. This trend is represented to show the available range for the existing models. As can be seen in the graphic, range has not increased in the most recent EV models; conversely, its average value has slightly decreased over the last years, although still remaining in the range of 150-200 km. Battery cost reductions or breakthrough innovations are possible factors that might increase the commercially available range of future EVs in the following years, but unless the vehicles are used for interurban transportation, there is no practical sense in increasing the capacity, other than the use of the battery as energy storage for other purposes. The calculation can be weighted by the number of existing Category III EVs, in order to calculate the average range of the vehicles actually in use. Based on the accumulated units sold (estimated values are included in Table 2), the average range of fast-charge-capable EVs estimated to be operational is 200.2 km. Although the average range of the Category III vehicles is 182 km, it would be higher (200 km) if only the most sold vehicles were considered. In order to calculate the MDFC, a conservative approach should allow every model to have a range above the MDFC. By contrast, a method inducing the least infrastructure investment would consider the average. The vehicle that sets the minimum range in the category at the point of the research is the Chevrolet Spark, with a 132 km range. Battery capacity losses in the EV fleet on the roads would also be a limiting factor for increasing the MDFC, even if the average range were increased. In order to calculate the MDFC considering a safety margin, both for weather and for other motives, including road and traffic conditions, we propose the calculation formula as in Equation (1):

MDFC = R_m × (1 − M_w − k × M_f), (1)

where "R m " represents the value of the EV range for the calculation of the MDFC. The "R m " to be used is the minimum range corresponding either to the lowest value for existing and planned category vehicles (conservative "R m ") or to the average range of the accumulated vehicle stock in a country (real "R m "). The second option, at the moment of the research, implies a higher MDFC, implying a lower investment for the BHCI. "M w " is the weather margin, "k" is the simultaneity factor, and "M f " is the flexibility margin. These terms of the equation are explained in the following paragraphs. Weather conditions, both cold (increased vehicle drag/resistance, reduced battery performance and the use of heating) and hot (primarily air conditioning), affect EV performance. Although further operational data will be required to model the range influence, current simulations show that air conditioning/heating is the single
largest auxiliary load on any vehicle by an order of magnitude [28]; the worst-case simulation resulted in a range reduction of 40%. The paper presented solutions for reducing these auxiliary loads by means of advanced glazing and ventilation. Other research found that weather conditions affect not only battery performance but also the choice of transportation, supporting the idea of a simultaneity factor for calculating the MDFC [29]. Real operational fleet performance data could be retrieved from Fleetcarma [30], with ranges being reduced by up to 40% (at −7 °C) and not reaching 40 km at the lowest temperatures (−20 °C), influenced mainly by cabin heating use but also by acceleration patterns. For example, fleet supervision data show that smooth driving and an unheated cabin provided 2.4 times the range compared with aggressive driving and a heated cabin. With hot temperatures, range is also reduced due to air conditioning use, but component efficiency is not as negatively affected as at cold temperatures. Testing of the MiEV EV [31] showed that the maximum range reduction was 46% for maximum air conditioning and 68% for maximum heating. It is relevant to highlight that conventional heaters have a coefficient of performance (COP) of 1, whereas using heat pumps would reduce the burden on efficiency, also reducing the effect of the heating load on the range decrease. We propose the values for M w summarized in Table 3, based on the current findings, for the purpose of the research. Thus, we have selected a 40% reduction when winter temperatures are frequently below 0 °C and a 50% reduction when temperatures are below −10 °C, following the results from the literature [28,30]. Although the research [31] mentioned previously found higher values (68%), this has been considered an extreme value corresponding to constant maximum heating. Concerning air conditioning, the maximum has been selected as 30% for the hottest climates, following the values from the mentioned experiments and experience, but lower than the maximum found (46%), which corresponds to constant maximum air conditioning. Additionally, improvements in battery and power train efficiency with temperature protection, as well as in vehicle glazing, ventilation and comfort, are expected, meaning the weather margin would have to be reviewed and updated. In order to consider the driving conditions during the whole year, both the heating and the air conditioning requirements have to be taken into account. Hence, the worst-case scenario would be a region with extreme cold temperatures in winter and high temperatures in summer, which affects the range (implying a 50% weather margin) and also the battery life cycle.
The flexibility margin has to include the road and traffic conditions, the driving style and the charging patterns. Another input in the margin is the highway influence, as the EVs' official range (for example, according to the US Environmental Protection Agency (EPA) or the New European Driving Cycle (NEDC)) is higher than the range for full highway trips. Highway range will be lower than in the mentioned official cycles because of the speed. M f will therefore need to be at least the range reduction due to highway speed. Supported by the manufacturers' highway range data under ideal conditions, a highway speed of 89 km/h will imply less than a 10% range decrease, but 105 km/h reduces the range by approximately 20%. The flexibility margin is set here so that about 80% of the official range remains usable (i.e., a reduction of roughly 20%) to account for this effect, and it could be increased by road, traffic, driving style and charging patterns. In order for the infrastructure to be of use for at least 10 years of lifetime, possible increases in speed limits could also be considered. Table 4 summarizes the estimation for these factors. The simultaneity factor "k" considers the coincidence of the weather conditions and the flexibility margin. If behavioural factors defer EV driving when weather conditions are cold, or if most highway trips take place when weather conditions are more favourable (for example, city-to-coast trips in summer), the simultaneity factor will have a reduced value. As a suggested value, when the weather margin is small (null or 10%), all of the flexibility margin has to be taken into account and k is set to 1. With high weather margins, above 40%, the simultaneity factor has to reflect the fact that, under harsh conditions, the average speed is limited by user precaution. For an estimated worst-case scenario, with a minimum range of 123 km, a 50% weather margin, a 120 km/h average highway speed, aggressive driving patterns, high traffic, no full-charging habits, and a simultaneity factor of 1, the resulting MDFC is 12.3 km. Limit conditions and behaviour such as these are not reasonable and would entail an inefficient infrastructure at present. However, the modularity of fast charging stations, which do not require economies of scale as gas stations do, allows a more distributed deployment once the basic infrastructure is in place. Accordingly, with widespread adoption of EVs, an MDFC of 12.3 km is not an unreasonable scenario in years to come. Current research and real testing data do not allow a more precise calculation of the underlying factors influencing the MDFC, so they need to be estimated, and the selection of their values should not be arbitrary. These estimations are based on available research, experimental data, and real road experience. On the other hand, the model does not allow a particular desired value of the MDFC to be obtained. Additional experimental data are required to base the flexibility margin on improved evidence, but this framework for a systematic approach already allows the calculation.
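As a numerical check of the worst-case figure above, the following Python sketch evaluates the MDFC. It assumes the reconstructed form of Equation (1), MDFC = R_m × (1 − M_w − k × M_f), and uses only margin values stated in the text; the 40% worst-case flexibility margin is the value implied by the quoted result of 12.3 km.

```python
# Sketch of the MDFC calculation, assuming Equation (1) has the form
#   MDFC = R_m * (1 - M_w - k * M_f)
# R_m: reference EV range (km), M_w: weather margin, M_f: flexibility margin,
# k: simultaneity factor.

def mdfc(r_m_km, m_w, m_f, k):
    return r_m_km * (1.0 - m_w - k * m_f)

# Worst-case scenario from the text: R_m = 123 km, 50% weather margin,
# aggressive driving / high traffic / partial charging (taken here as a 40%
# flexibility margin, consistent with the quoted 12.3 km result), k = 1.
print(mdfc(123, m_w=0.50, m_f=0.40, k=1.0))   # -> 12.3 km

# Milder conditions for comparison: no weather margin, 20% flexibility margin
# for 105-120 km/h highway speeds, k = 1.
print(mdfc(123, m_w=0.0, m_f=0.20, k=1.0))    # -> 98.4 km
```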
BHCI The BHCI is defined as the number of fast charging stations necessary for any single EV to travel through a country from any given location to any other within the country, making use of these charging stations. This original concept is proposed and defined as a planning tool for EV-enabling infrastructure. According to the previous explanation, the distance between two consecutive charging stations must be less than the MDFC along the existing national road infrastructure. As shown in Figure 4, fast charging stations provide highway coverage, but secondary coverage by stations away from the main state highways would also be necessary to satisfy the MDFC requirements. The whole economic investment required for a BHCI to be deployed has to take into account the initial investment, as well as the operation and maintenance costs, during the lifetime of the infrastructure. The return on investment from the charging infrastructure will mainly depend on the facility utilization, as has already been concluded [11]. The investment costs for the infrastructure are simplified in Table 5. Prices will probably decrease with time, associated with technological evolution and economies of scale. Specifically considering a country-wide deployment, the scaling factor reduces the global spending compared to investments for single units. The costs estimated in this work are those considered by Schroeder and Traber [11], since they seem conservative and sufficient for the purpose of this research. However, the market price of a fast charging station quoted by manufacturers during the research was around 25,000 € for material costs. The current price is therefore 40% lower than the conservative value used in the estimations (only the maintenance cost for super-fast charging has been corrected to 10% of the material cost). In order to value the investment, it is necessary to consider the operational expenses, but also the operational revenues. Operational revenues depend on the usage of the chargers and the selling price, and would require further estimations of the usage patterns. Here, Level III DC charging, regardless of the particular charging standard, is the solution chosen for the investment calculations. The charging time achieved with this fast charge solution allows 80% battery charging in less than 20 min, which is comparable to conventional combustion engine cars. The cost-benefit analysis for fast charging stations, as mentioned in the introduction, is mainly dependent on the EV adoption rate; however, other factors such as the usage rates have also been identified as influencing this analysis [11]. Besides, for the infrastructure as a whole, and disregarding the economic benefits for the fast charging station operator, the added value of the fast charging infrastructure includes the reduction of polluting emissions from the vehicles, the reduction in energy imports, and the reduction in transportation costs for the users of the vehicles.
Because a grid connection is needed, the fast charging stations may be located at existing gas stations; they can be deployed close to a grid connection, or otherwise be built as off-grid renewable charging stations, or charging vehicles can be used (Figure 5 illustrates both concepts). Charging vehicles would service vehicles in case of loss of charge. Besides, when seasonal overload of certain infrastructure is expected, fast charging vehicles may act as extra fast charging stations, so the operational costs are kept at reasonable levels in periods of high charging demand. This would be possible because no physical station has to be built just for those periods of high demand, with the associated power costs and other fixed expenditure. A consideration for synergistically locating the fast charging stations is to use electrified rail infrastructure, utilizing the installed grid infrastructure. In most cases the highway infrastructure runs parallel to the railway infrastructure; therefore, electrified highway mobility could rely on existing electrified transport, as is the case of the train. The use of a Geographic Information System (GIS) would be a next step to locate the charging stations definitively, next to existing electric infrastructure. Real siting would bring the BHCI figures to a deployment phase, but it is not of additional relevance for the investment calculation presented here. Examples of EV Infrastructure Development The deployment of fast charging infrastructure within a countrywide plan is already a fact in various countries, with different estimates of the necessary distance between fast charging stations. Examples of these deployments are summarized below: • Estonia will install, during 2014, 165 fast chargers, in a government-led infrastructure plan for country-wide deployment. The MDFC on highways has been set to 60 km. Estonia has 1.3 million inhabitants and a surface of 45,227 km 2 ; therefore, the ratios of chargers are one point of fast charge per 7878 people and per 274.1 km 2 . A map with the planned infrastructure is shown in Figure 6. From the MDFC calculations, if we consider frequent cold temperatures and moderate summer temperatures, the weather margin is 40% according to the proposed model. With speeds of 90 km/h on interurban roads and 110 km/h on highways, the flexibility margin is estimated at 15%. No additional flexibility factors are considered, and a simultaneity factor of 0.9 accounts for speed reductions under bad weather conditions. The resulting MDFC is 57.19 km when using the minimum range of 123 km, similar to the distance planned for the country. Furthermore, if the standard for the fast chargers were the "Chademo", the minimum EV range would be 160 km, corresponding to the MiEV model, the model with the lowest range in the standard, with a resulting MDFC of 74.4 km in this case. • In the USA, the "West Coast Electric Highway" consists of fast charging stations located every 40-80 km along the Interstate 5 and other major roadways. In this case, the calculated MDFC is between 67.6 km, for areas with winter temperatures close to 0 °C, high traffic, and a high speed limit, and 111.9 km, in areas with a low speed limit and mild weather. • In Germany, since 2011, there have been 13 fast charging stations placed at highway gas stations between Hamburg and Dortmund.
• In France, the project "Energy Corridor Alsace" starts with six charging stations, and is part of the wider CROME cross-border project with Germany, proving that vehicles may travel across the border and recharge in both countries [33]. • In Norway, the company Ishavskraft has deployed fast charging stations between Oslo and the Swedish border, where the distance between chargers does not exceed 50 km [34]. The calculated MDFC, taking into account that the speed limit is 80 km/h and that temperatures below −20 °C are not infrequent, is 61.5 km. With the MDFC of 50 km that was chosen, EVs could still reach the charging stations with range reductions of up to 60% caused by low temperatures. • Japan is the country with the highest density of fast charging stations, with over 1500 units, and in the case of highway stations, the "Japan Charging network Lim" services these stations [35]. • In The Netherlands, Fastned has decided to install fast chargers with an MDFC of 50 km, 200 units that will serve 16 million people [36]. The number of chargers per capita is 10 times lower than in the Estonian project; nevertheless, the density of chargers will be higher, one every 207 km 2 versus one per 274 km 2 . The calculated MDFC for this case would have the same values as for the Estonian highways, with a resulting value of 57 km. From these examples it can be noticed that the MDFC remains close to 50 km in most cases, although the weather margin would differ between these countries, for example with a difference of up to 100% between Norway and the areas with the least weather margin on the western coast of the USA. One consequence of a small MDFC is that even city cars with fast charge would be able to use the infrastructure; another is that the initial investment could be reduced if the conditions point to a higher MDFC. By using the calculation presented in Section 3, the charging stations that would be necessary can be planned precisely in order to promote the use of EVs for interurban trips. Application on the Spanish Highway Infrastructure Spain has a strong reliance on road transportation and a dependence on energy imports above the EU average. The transport sector consumes the largest share of primary energy (26%) and causes 28% of the CO 2 emissions [37]. Meanwhile, the electric system accommodates above 30% renewable content; therefore, electrifying transport would imply higher efficiency and lower emissions. In order to reduce its emissions, increase its renewable share of energy consumption, improve its energy efficiency and reduce its energy import bill, the electrification of transport in Spain is essential, as declared in the "Movele Plan" to promote EV adoption and the "Electric Vehicle Impulse Strategy" defined by the government [38]. One of the reasons for the low adoption of EVs in Spain is the use of personal cars for interurban and centre-to-coast journeys involving long distances, as can be inferred from the mean daily indices of vehicle intensity calculated for the Spanish "State Highways Network" [39]. Additionally, fast charging points located in cities will probably be used only for emergency charging, when slow charging services at home, at the office or in parking lots would not be able to allow the desired mobility. An additional use of intra-urban fast charging points could be the provision of charging services for EVs when needed during the day. The calculation of the BHCI aims at enabling interurban electric mobility, where fast charging (or battery swapping) is the only viable possibility.
Considering the electrification of the Spanish territory and the availability of gas stations (Figure 7), it is evident that the potential number of slow charging points is more than two orders of magnitude higher than the number of gas stations. The number of gas stations in Spain is 9425, while there are 27.5 million electricity meters, 2920 times more, according to the National Statistics Institute of Spain. Spatial density is one gas station per 18 km 2 , compared to 147 connections for electricity supply per km 2 . Per capita ratios reach figures of one gas station for 3765 people compared to one electricity meter for every 1.56 people. The relevance of these ratios is clear for the availability of possible slow chargers, but this does not ensure mobility, primarily for interurban transport, as mentioned in the introduction of this paper. Paradoxically, until 2001 the minimum distance between gas stations on highways in Spain was set by regulation at 20 km, but this was later eliminated in order to allow new investors to install more facilities. What is essential at the current stage of EV adoption is the availability of a minimum number of fast charging stations to ensure there is enough charging service for EV users in the country, that is, to develop the necessary BHCI. In order to achieve this goal, the MDFC could be made mandatory in the regulation, with the stations supplied either by the state or by any private entrant. If Spain were to install fast chargers with ratios of density per square kilometer or per capita similar to The Netherlands and Estonia, between 1517 (considering Spain's extension) and 590 (considering per capita figures) chargers would be required. The necessary capital costs of the infrastructure would be between 83.5 million Euros and 32.4 million Euros, with a yearly operating cost of between 6 million and 2.3 million Euros. To compare these figures, the Movele plan [38] allocated 10 million Euros, of which 8 million were for vehicle acquisition and 1.5 million for slow charge infrastructure deployment. The current PIVE plan, intended to promote the purchase of efficient vehicles, involved 75 million Euros in its first edition, 150 million Euros were allocated for the second edition, and the third edition, in development at the time of this research, has a budget of 70 million Euros. However, such an amount of money is not considered necessary for the initial investment in the infrastructure; therefore, we will calculate the minimum BHCI by means of more appropriate methods (using the MDFC). Based on the MDFC, the location of the fast charging points has to be designed to allow country-wide electrical transportation. As for Spain, the MDFC components are estimated for the whole country as: R m : 123 km (minimum range in Category III); M w : 25%-30% (high temperatures are common in summer, but temperatures below 0 °C occur only in some areas); M f : 20% for the speed limit, as there is an extensive 120 km/h network, plus an additional 5% margin for driving patterns and traffic; k: 0.8 (the speed limit is lower in areas with non-ideal road conditions and weather, and major interurban mobility takes place in good weather conditions). The resulting value as per Equation (1) is between 61.5 km and 67.65 km, the latter value corresponding to mild weather regions. The temperature margin has to be considered higher for regions with more severe winters (Castilla y León in Spain, for example) to obtain a more precise MDFC.
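The Spanish figures just quoted can be reproduced with the same reconstructed form of Equation (1); the short snippet below (an illustration, not part of the original paper) uses only the parameter values listed above.

```python
# Spain: R_m = 123 km, M_f = 20% + 5% = 25%, k = 0.8,
# weather margin between 25% (milder regions) and 30%.
r_m, m_f, k = 123.0, 0.25, 0.8
for m_w in (0.30, 0.25):
    print(m_w, round(r_m * (1.0 - m_w - k * m_f), 2), "km")
# -> 61.5 km for M_w = 0.30 and 67.65 km for M_w = 0.25,
#    matching the 61.5-67.65 km range quoted in the text
```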
The links among all the province capitals through the state highway network have a minimum distance of 44 km (between Valladolid and Palencia) and a maximum distance of 297 km (from Cáceres to Madrid). Having a fast charger in every province capital could be an option, resulting in 50 fast charging stations, plus additional chargers between the capitals whose separation exceeds the MDFC. Calculating these intercity charging stations, 44 additional chargers are required. Such an investment would constitute the "major city fast charging network" (Figure 8), which could be used for interurban as well as intra-urban mobility, and would require a capital expenditure (CAPEX) of approximately 5.1 million Euros, with an operational expenditure (OPEX) of 376 thousand Euros per year. The drawback is the non-linearity of the routes and the detours this would entail, even if the stations were placed beside the highway infrastructure. This is the reason for calculating the BHCI using the topology of the main state highways, to maximize the usable range. Building such an infrastructure (by public initiative, for example) does not, however, preclude the deployment of a parallel, privately owned group of charging stations in the cities.
Another basic calculation of the BHCI, without taking into account the topology of the existing highways, is to derive the needed infrastructure from the total state highway network length, as in Equation (2). This approach gives an estimate of the number of fast charging points, where T_n is the total network length in km and the resulting BHCI is the number of required stations (a numerical sketch is given below). For the case of Spain, with a 26,037 km State Highway Network, the resulting values are between 384 and 423 charging stations. This quantity is lower than the ratio-based estimates extrapolated from smaller countries, but it still amounts to an infrastructure worth 21-23 million Euros, with an annual operating cost of between 1.5 million and 1.69 million Euros.
Another proposal for planning the deployment is a highway-topology calculation of the BHCI, as mentioned above. Figure 9 represents the state highway network with the higher MDFC value of 67.65 km, using a radial approach that starts in Madrid and follows the different state highways. This selection would ensure the possibility of routes from Madrid to any destination on the state highways. The minimum infrastructure allowing mobility over the whole highway network of Spain following the radial MDFC method requires the deployment of 76 fast charging stations. The investment in this case would be 4.18 million Euros, with a yearly maintenance cost of 304 thousand Euros.
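As a rough check on these figures, the sketch below applies the network-length estimate of Equation (2), assumed here to be simply the total network length divided by the MDFC (which reproduces the 384-423 station range), together with per-station costs back-calculated from the budgets quoted in this section, roughly 55 thousand Euros of CAPEX and 4 thousand Euros per year of OPEX per station. Neither the rounding convention nor these per-station values are stated explicitly by the authors.

```python
T_N = 26_037.0                 # km, Spanish state highway network
CAPEX_PER_STATION = 55_000     # EUR, back-calculated from the quoted budgets (assumption)
OPEX_PER_STATION = 4_000       # EUR/year, back-calculated (assumption)

for mdfc_km in (67.65, 61.5):
    stations = int(T_N / mdfc_km)        # assumed form of Eq. (2), rounded down
    capex = stations * CAPEX_PER_STATION / 1e6
    opex = stations * OPEX_PER_STATION / 1e6
    print(f"MDFC {mdfc_km:5.2f} km -> {stations} stations, "
          f"~{capex:.1f} MEUR CAPEX, ~{opex:.2f} MEUR/yr OPEX")
```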
If the topology follows the network with individual ranges, the locations are determined more precisely and the number of charging stations is higher, as seen in Figure 10. The figure also gives an impression of the secondary network covered by the range of the charging stations. Only a small portion of the country away from the state highways, estimated at less than 10%, would be out of range of a fast charging station. Ninety-nine fast charging stations result from the individual-range calculation, as the distance between stations placed by the radial MDFC method may be larger on the national highway network. The investment using this method is 5.44 million Euros, with an operational cost of 396 thousand Euros per year. The different infrastructure calculations are summarized in Table 6, where the approaches can be compared.
An example of infrastructure utilization can be considered using a Nissan Leaf to cover the distance between Madrid and La Coruña (591 km); a simple numerical check of this trip is sketched below. With fast charging stations every 67 km, a range of 200 km, normal weather conditions, and charging up to 80%, three stops would be necessary, each under 20 minutes, for a total travel time close to seven hours. A conventional combustion vehicle could complete the same route with typically two stops (respecting the rest periods required for safety), but would probably not save more than half an hour. The difference between the two technologies is therefore no longer substantial, underlining the importance of a highway fast charging infrastructure for enabling more sustainable transport.
The previous results, excluding the total state highway calculation and the ratio-based one, highlight the fact that country-wide e-mobility would be available for a budget close to 5 million Euros. Furthermore, if we compare these results with other methodologies, such as those based on the number of fast charging stations per vehicle, the necessary investment could be even lower when considering the actual vehicle stock, but it turns out to be higher when the calculations are based on the number of units expected to use the infrastructure in the following years. If the calculation is made using methods such as the one described by Xu [22], the MDFC would be minimized and the investment would be higher, because that model minimizes the distance traveled by the EVs (as in inter-urban charging infrastructure). The budget resulting from our research represents a small fraction of the state-supported vehicle acquisition programs currently in place in Spain. It is also worth noting that, given sufficient use, the infrastructure can become profitable. Moreover, not only the positive consequences of emission reduction, reduced energy dependence, and energy efficiency are to be considered, but also the possibility of direct profitability of the fast charging service.
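A back-of-the-envelope check of the Madrid-La Coruña example is sketched below. The 100 km/h average speed and the 20-minute charge time are illustrative assumptions, and each stop is assumed to restore 80% of the nominal range; the figures quoted above come from the authors' own estimate.

```python
import math

trip_km     = 591.0   # Madrid - La Coruna
range_km    = 200.0   # nominal EV range, battery full at departure
charge_frac = 0.80    # each fast-charge stop refills up to 80%
avg_speed   = 100.0   # km/h, assumed average highway speed
charge_min  = 20.0    # minutes per stop, assumed

usable_per_stop = charge_frac * range_km
stops = max(0, math.ceil((trip_km - range_km) / usable_per_stop))
total_hours = trip_km / avg_speed + stops * charge_min / 60.0

print(stops)                    # 3 stops
print(round(total_hours, 1))    # ~6.9 h, close to seven hours
```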
An example of the cost-benefit analysis for the proposed infrastructure can be calculated from one of the BHCI estimates. If we select 99 charging stations, with an investment of 5.44 M€ and an operating cost of 396 k€ per year, and consider an electricity cost at the station of 0.13 €/kWh and a sales price of 0.3 €/kWh, the number of charges per year and per station needed to cover operating costs, or to achieve a 10-year return on the capital investment, can be calculated. These calculations are shown in Table 7. For this number of charges to be possible, at least eight vehicles per day would have to charge at every station. Considering that most of the state highways have a mean daily index of more than 10,000 vehicles per day, these figures could be reached even with a small percentage of the vehicles being electric. The benefit for vehicle users in terms of transport cost is evident, given a consumption of 8 kWh/100 km compared with, for example, 7 L/100 km for a combustion vehicle. With an electricity sales price of 0.3 €/kWh and a fuel price of 1.4 €/L (based on current market prices in Spain), the cost per km is about 75% lower for the EV (2.4 €/100 km versus 9.8 €/100 km).
The deployment of charging stations should follow a prioritization based on the mean daily index of vehicles measured on the state highways. The prioritization methodology requires further research and modeling to complement the methodology presented here.
Taking into account the cost reduction curve for charging stations, the cost of building the infrastructure will evolve along a learning curve. At the date of this publication, the investment costs implied by material prices are already 40% lower than the estimates from 2012 [11]. Once demand and vehicle use have ramped up sufficiently, state-owned or state-supported infrastructure can move to privately managed infrastructure.
Conclusions
This paper estimates a country-wide infrastructure enabling full interurban EV mobility by calculating the MDFC. The values influencing the MDFC are presented and calculated. From the analysis undertaken we can extract a simple and clear conclusion: the required fast charging infrastructure does not imply a high upfront investment compared to other EV incentive plans. For the case of Spain, less than 5 million Euros would provide a country-wide fast charging electrical infrastructure, requiring fewer than 100 fast charging stations. Although the calculations focus on Spain, the methodology is generally applicable. The paper also compares the MDFC values with existing country-wide deployment projects of fast charging infrastructure, obtaining similar, if slightly higher, values (>10%). Those projects use a more conservative MDFC, which minimizes the risk of EVs not reaching the next fast charging station, but the investment could have been reduced by selecting the calculated MDFC.
The BHCI is proposed to be promoted as an enabling infrastructure, state-owned or subsidized by the government through charging management retailers. The return on investment will not come in the short run, unless EV adoption ramps up above expectations, but it would complement the benefits of promoting interurban electric mobility. Reduction in energy dependence and in the energy trade imbalance, and consequently a smaller trade deficit to be financed, are the main gains for the country. Together with fiscal levers as proposed by Turcksi et al.
[24], financial support, and emission restrictions aimed at making road transport more sustainable, an additional energy-policy proposal that follows from this research is to favor highway electric mobility by reducing highway tolls, fostering e-mobility on interurban routes.
The proposed model aims to dimension the infrastructure objectively, despite the current uncertainty about how weather, flexibility, and the simultaneity of these factors influence range. Further research on these factors, in parallel with the implementation of technology enhancements for EVs, may improve the methodology presented in the paper. In this respect, increased EV adoption, and the consequent increase in real-world data on interurban transportation, will prove valuable.
Figure 1. Process for highway fast charging infrastructure planning. EV: electric vehicle; MDFC: Maximum Distance between Fast Charge; BHCI: Basic Highway Charging Infrastructure.
Figure 2. Range categories for fast charge enabled EVs.
Figure 4. Range on highway infrastructure and coverage.
Figure 5. Off-grid charging station and fast charging service vehicle concepts.
Figure 7. Map of gas station density in Spain [40]. Reprinted/Reproduced with permission from [40]. Copyright 2013 Ministry of Industry, Commerce and Tourism of Spain.
Figure 8. Major cities fast charging network and distances.
Figure 9. State highway network and MDFC range, radial topology model.
Figure 10. State highway network and MDFC range, individual range model.
Table 1. Fast charge enabled EV models, year and range. Based on manufacturers' public information.
Table 2. Vehicle units, range and global accumulated sales. Based on International Energy Agency (IEA) 2013 [27], manufacturer statistics and international import statistics.
Table 3. Weather margin (M_w) factor values for weather conditions.
Table 4. Flexibility margin estimated values and multipliers.
Table 7. Charges per station and year needed to cover operating costs or to achieve a 10-year return on capital investment (2,793 charges per station/year for the 10-year ROI case).
Effects of forward disorder on quasi-1D superconductors We study the competition between disorder and singlet superconductivity in a quasi-1d system. We investigate the applicability of the Anderson theorem, namely that time-reversal conserving (non-magnetic) disorder does not impact the critical temperature, by opposition to time-reversal breaking disorder (magnetic). To do so we examine a quasi-1d system of spin 1/2 fermions with attractive interactions and forward scattering disorder using field theory (bosonization). By computing the superconducting critical temperature ($T_c$), we find that for non-magnetic disorder the Anderson theorem also holds in the quasi-1D geometry. On the contrary, magnetic disorder has an impact on the critical temperature, that we investigate by deriving renormalization group (RG) equations describing the competition between the disorder and the interactions. Computing the critical temperature as a function of disorder strength, we see that different regimes arise depending on the strength of interactions. We discuss possible platforms where to observe this in cold atoms and condensed matter. I. INTRODUCTION Competition between superconductivity and disorder is a very important and fundamental problem.By nature, superconductivity would naively be expected to be robust to disorder.One important question, addressed from the early days of superconductivity [1,2], is whether the presence of disorder in the normal phase can impede the occurrence of the superconducting phase transition or drastically change its critical temperature.The result, known under the name of Anderson's theorem [2] is that for a singlet superconductor obeying the Bardeen-Cooper-Schrieffer (BCS) mechanism, non-magnetic impurities have no effect on the critical temperature, while magnetic impurities decrease the critical temperature, effectively "destroying" superconductivity.This behaviour was explained very clearly by Anderson through a symmetry argument.As long as the disorder still respects time-reversal symmetry (as non-magnetic disorder does), it is possible to form a pair of eigenstates that are the timereversal partners of each other, instead of the usual k and −k states forming a Cooper pair in the pure system.However, as soon as the disorder breaks time-reversal symmetry (as magnetic disorder does), there is no way of forming an equivalent of the Cooper pair, and superconductivity is destroyed.For magnetic disorder, formulas giving the reduction of Tc with the concentration of disorder were given [1]. This dramatic difference in behaviour extends to other types and pairing natures of superconductivity, where both kind of disorder can significantly decrease the critical temperature.The robustness of the superconducting transition to disorder has thus been seen as a probe of the nature and pairing symmetry of the superconducting order parameter in materials as varied as heavy fermions [3], organic superconductors [4], high Tc superconductors [5] and multiband superconductors [6].The combined problem of disorder and interactions has also been studied theoretically in different settings including the 3D Hubbard model framework (both repulsive and attractive) to compare the competition between different orders in the presence of disorder [7,8], or 2D superconductors [9,10]. 
However, the situation was realized to be more compli-cated than predicted by simple applications of the BCS meanfield equation to disordered systems.This is particularly the case when dimensionality allows the disorder to have a strong effect, such as e.g.leading to Anderson localization [11].In such cases, since the very nature of the eigenstates is affected by the disorder, the symmetry argument does not suffice, and even non-magnetic disorder can potentially affect T c .It was indeed shown to be the case for systems made of coupled 1D chains with attractive interactions.In such a case non-magnetic disorder is able to destroy even s-wave pairing [12], or in some regime even enhance it [13]. One could argue that such results are the direct consequence of the existence of the rather drastic phenomenon of Anderson localization, quite efficient in one dimension, but much easier to reduce strongly in higher dimensional cases.Even in one dimension, since Anderson localization is intimately linked to the presence of backscattering due to disorder [14][15][16], one could expect to recover an Anderson theorem if such backscattering is suppressed and if only forward scattering exists from the disorder. Such questions have become timely since recently cold atomic systems have provided excellent experimental realizations in which both disorder and interactions could be controlled [17,18].In such systems, localization of one particle in quasiperiodic or speckle potentials [19][20][21][22], disordered interacting bosons [23,24] and fermions [25,26] have been realized.More generally, cold atoms have proven to be excellent systems to probe or think of combined effects of interactions and disorder or quasiperiodicity in a large variety of situations [27][28][29]. In this paper, we thus examine the effects of non-magnetic and magnetic disorder on a system made of coupled one dimensional fermionic chains with attractive interactions.To avoid or minimize effects due to Anderson localisation, we restrict ourselves only to long wavelength disorder having Fourier components much smaller than 2k F , where k F is the Fermi wavevector of the chains.In Sec.II, we present the model, the bosonized formalism and the observables that we will look at in order to compute the critical temperature.In Sec.III, we examine non-magnetic disorder and find that forward non-magnetic disorder has no impact on the FIG. 1. Fermionic tubes of spin 1/2 particles which can tunnel between the tubes.The particles experience a contact attractive interaction U (see text), leading to a singlet superconducting ground state.The tubes can be continuous, or have their own lattice.In which case the model is the attractive Hubbard model with anisotropic hopping t ∥ along the chains and t ⊥ between the chains.We are in a situation where t ⊥ ≪ t ∥ .Additionally, along the chains, there is either a random chemical potential µ i or a random magnetic field h z i . 
critical temperature.In Sec.IV, we instead study the case of magnetic disorder.First, we look at the case where we neglect the spin gap in the spin sector of the Hamiltonian, which simplifies the problem and allows us to treat it almost completely analytically.We find that disorder here weakens the superconductivity until it destroys it.Secondly, we treat the case of a finite spin gap.To deal with the corresponding "sine-Gordon" term in the Hamiltonian, we use a renormalization group (RG) technique.In this case, while we still find that disorder will ultimately lead to the destruction of superconductivity, we also see that depending on the strength of interactions we get two very distinct regimes arising from the competition of the disorder with the spin gap due to the interactions.In Sec.V, we discuss several aspects of our results and also how they could be practically implemented in a cold atom or condensed matter realization.Finally, a conclusion can be found in Sec.VI and more technical details can be found in the appendices. A. General microscopic model We consider a fermionic spin 1/2 model with attractive contact interaction U < 0, made of several 1d tubes arranged in a 3D lattice.The system is schematically shown in Fig. 1.The ground state of the pure system is thus a singlet superconductor with, in particular, a gap in the spin excitations [30].The 1D tubes can be continuous or with a lattice, both cases can be treated similarly. We consider two types of disorder.One is non-magnetic disorder where a random chemical potential couples to the total density where c † σ,n (x) creates a fermion with spin σ at point x on chain n.The corresponding disorder term is The second type is a random magnetic field coupling to the spin density along z leading to Both the chemical potential µ n (x) and random magnetic field h z µ (x) are taken to be uncorrelated from chain to chain.For the correlation of the disorder along the chains, we wish to avoid the dominant effects of Anderson localization, which is produced by the backscattering on the disorder, and which is anomalously strong in one dimension.As discussed in Ref. 12 and 13, this has a drastic effect even on plain vanilla singlet superconductors.We thus restrict the disorder to have only Fourier components much smaller than 2k F , where k F is Fermi wavevector of one chain.We will discuss the consequences of such a restriction on disorder in more details in Sec.V C, but we just note here that such a limitation of the spectrum of disorder is quite natural with speckle disorder [31]. B. Bosonized representation To deal with the interactions in each chain, we use the bosonization technique [30] and introduce collective variables φ ρ,n (resp.φ σ,n ) linked to the fluctuations of charge (resp.spin) density on chain n by These variables are conjugate to variables θ ν,n (with ν = ρ, σ) linked to the phase fluctuations and which obey the canonical commutation relations After bosonizing each chain individually, we obtain the following Hamiltonian: where 〈n, l〉 denotes chains that are nearest neighbors on a lattice.The Hamiltonian of a single chain is The parameters K ρ , K σ , u ρ , and u σ are the Luttinger parameters and contain the effects of the interactions and the kinetic energy.α is linked to the ultraviolet cut-off of our bosonic theory. 
The forward disorder Hamiltonian reads in bosonized form where γ z and η are 2 random Gaussian fields with corre- characterizing the forward magnetic and non-magnetic disorders, Finally, we have to consider the interchain coupling.The elementary coupling between the chains should be produced by single particle hopping.However, we consider in this paper a different coupling between the chains, namely we retain only the tunnelling of pairs by Josephson coupling.The corresponding part of the Hamiltonian is thus where n, l are the 1D chains indexes, and J the Josephson coupling between the chains.We take here J as an independent parameter.We will come back in Sec.V B on this point.Note that retaining only the Josephson coupling and discarding the single particle tunnelling is usually justified by the presence of a gap in the spin sector (∆ σ ) for attractive interactions [30].We will take (10) as the interchain Hamiltonian for our study, while still noting that in the case of a magnetic disorder, which can potentially break the spin gap the situation is more delicate.For large negative U, one has the usual estimate of the Josephson coupling J ∼ t [32].By taking J constant, we potentially ignore possible effects of having a Josephson coupling reinforced by the disorder if the spin gap decreases. The physical properties of the system are thus controlled by the above Hamiltonian and thus depend crucially on the Luttinger liquid parameters.In order to work with a specific example, we focus in the following to the parameters corresponding to the case of tubes with a lattice that realizes the fermionic spin-1/2 Hubbard model with the Hamiltonian (11) where t i, j is the hopping amplitude from site i to site j, c i,σ , c † i,σ are the destruction/creation operators at site i and spin σ, U is the on-site interaction, and 〈i, j〉 denotes nearest neighbors on a cubic lattice.By quasi 1d system, we mean that the hopping t i, j is small in all but one direction t z = t y ≪ t x .H dis encodes the disorder part of the Hamiltonian. For a clean 1D attractive Hubbard model, the spin sector is gapped and the parameters of the charge sector can be computed perturbatively in U [30,33] and lead to: where the upper sign is for ρ (the lower sign is for σ) and v F is the Fermi velocity v F = 2t ∥ sin(k F ). C. Observables Since our main goal is to compute the effect of the disorder on the superconducting critical temperature, a central part of our calculations is the pair correlation function. We treat the interchain Hamiltonian (10) in mean-field and retain only the terms linear in the fluctuation part around the mean value.Note that here, the order parameter 〈c † ↑,n (x)c † ↓,n (x)〉 is dependent on space for a single realisation over the disorder.However, once we do the average over the different disorder realizations, since the disorder is decoupled between the different chains, we recover an order parameter which will be space invariant. 
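The perturbative expressions (12) for the Luttinger parameters are not reproduced above. As a minimal sketch, the code below uses a common weak-coupling form for the Hubbard chain, K_ν = [1 ± U/(π v_F)]^(-1/2) and u_ν = v_F [1 ± U/(π v_F)]^(1/2) with the upper sign for ρ and the lower sign for σ, which we assume here is what Eq. (12) refers to; prefactor conventions vary between references. It illustrates that an attractive U gives K_ρ > 1 and K_σ < 1, the regime used throughout the paper.

```python
import numpy as np

def luttinger_params(U, t_par, k_F):
    """Assumed weak-coupling expressions for the 1D Hubbard chain (taken here to be
    what Eq. (12) refers to; prefactor conventions vary between references)."""
    v_F = 2.0 * t_par * np.sin(k_F)          # Fermi velocity of one chain
    x = U / (np.pi * v_F)
    K_rho, u_rho = (1.0 + x) ** -0.5, v_F * (1.0 + x) ** 0.5
    K_sig, u_sig = (1.0 - x) ** -0.5, v_F * (1.0 - x) ** 0.5
    return K_rho, u_rho, K_sig, u_sig

# Attractive Hubbard chain at quarter filling (k_F = pi/4), U = -2 t_par
K_rho, u_rho, K_sig, u_sig = luttinger_params(U=-2.0, t_par=1.0, k_F=np.pi / 4)
print(K_rho, K_sig)   # K_rho > 1 and K_sigma < 1, so the spin-sector cosine is relevant
```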
The mean field approximation leads to where z is the number of neighboring chains and Where we will choose the gauge where ∆ is real.The critical temperature is given by the divergence of the pair susceptibility χ(T ), which is given in the mean-field (RPA) approximation by where χ 0 (β) is the uniform and static susceptibility in the absence of interchain coupling at the temperature T = 1/β.The superconducting critical temperature T c is thus given by the condition where Since the Hamiltonian (7) is separated between the charge and spin sectors we obtain where we have used the fact that for distances which are larger than u σ β, the correlations decay exponentially in 1D.Therefore we can neglect this part and integrate on a disk of radius u σ β denoted Γ (β) .Note that this mean-field solution is linked to the quasione dimensional nature of the problem and is different from the usual BCS mean-field calculation that was used to establish the Anderson theorem [1,2].In the latter case one computes the pair susceptibility with the full kinetic energy (thus including the transverse hopping as well) but in an RPA approximation in the interaction U.At T c , the gap is thus automatically zero in the BCS approximation.In the quasione dimensional situation we consider here, the situation is different since even for decoupled chains, for which the T c is zero even in the absence of disorder, due to the quantum fluctuations, a strong spin gap can exist for the spin sector.We discuss more these differences in the Sec.V A. III. NON-MAGNETIC DISORDER Let us first consider the case of non-magnetic disorder.The Hamiltonian to consider at the single chain level, in particular to solve (17) giving T c , is (7) with the disorder (8). Since the charge sector part of the Hamiltonian is purely quadratic, the disorder can be absorbed by a simple redefinition of the field φ(x) [16] This leads to an Hamiltonian for the charge sector of the form: Note that this transformation does not affect the field θ (x), which remains conjugate to the field φ(x).The calculations of the susceptibility, which depends on 〈T τ cos( 2θ ρ (x, uτ)) cos( 2θ ρ (0, 0))〉 H ρ , and the spin part are thus not affected by the disorder either, and yield an identical result with or without forward disorder. An analogous result to the Anderson theorem, namely that a non-magnetic forward disorder has no impact on the critical temperature of superconductivity (T c (D f ,e )/T c (0) = 1) is recovered.This result can also be viewed directly in the fermion language since the transformation (20) correspond to a redefinition (22) where R (resp.L) denotes the right and left movers and in (22) the upper sign refers to R. Similarly to the Anderson theorem, one thus see that it is possible to create new objects that are still related by time reveral symmetry, even in the presence of disorder.When pairing these objects, the disorder totally disappears, leading to the invariance of the critical temperature.However, the forward scattering disorder will still affect other correlations in this model, for example the density-density ones (basically anything which involves the field φ σ ). 
Note that this result is modified if the backward scattering is present [12].The Anderson localization that it induces leads to an exponential decay of the pair correlation functions and thus compete with the superconductivity.Therefore, one can expect drastically different effects of nonmagnetic disorder on the superconductivity depending on which Fourier components are present.This can, in principle, be tested by changing k F with respect to an upper cutoff in the disorder spectrum. IV. MAGNETIC DISORDER We now turn to the case of magnetic disorder.For this, we employ (7) with the disorder (9).Two significant differences are immediately noticeable compared to the case of non-magnetic disorder.Firstly, the spin sector of the Hamiltonian (7) is not simply quadratic but has a sine-Gordon form.Thus, an analogous transformation to (20) done for the spin sector does not allow to get rid of the magnetic disorder in the Hamiltonian.This reflects the competition between the random magnetic field and the cosine term that creates the spin gap.A corresponding term would exist in the charge sector only if the system is in a Mott state with a commensurate filling [34,35]. Secondly, the pair susceptibility (19) dependson the field φ σ for the spin sector.This contrasts with the charge sector where the dual field θ ρ appears.Thus, performing the afore mentioned transformation introduces a disorder dependence in the pair susceptibility, regardless of the presence of the cos( 8φ σ ) term in the Hamiltonian.This indicates from the start that magnetic disorder affects the correlations and consequently the critical temperature. In the calculation of the pair susceptibility, the charge sector Hamiltonian is quadratic.Therefore, the charge part of the correlations is where r is given by (x, u ρ/σ τ) depending on where we are computing correlation functions (charge or spin sector) and Due to the sine-Gordon form of the spin part of the Hamiltonian (7), the full calculation of the spin sector is more involved, and we analyze it in the next two sections. A. Spin sector with g = 0 Let us start in this section by setting g = 0 in the Hamiltonian.This amounts to a scenario where the spin gap opened by the presence of the cosine term is so small that it can be neglected.This corresponds typically to the case of attractive interactions very small compared to the kinetic energy in the chain since, in such cases, it is well known that the spin gap is exponentially small in the ratio t ∥ /|U|.This simplified model allows to disentangle the effects produced by the magnetic disorder on the pair susceptibility from the robustness of the finite pairing gap. It is important to note that setting g = 0 does not necessarily imply that T c itself is small, since Tc is given by equation (17) and χ o depends mostly on K ρ and K σ .Although in this case we can formally take any value for K σ , we restrict ourselves to K σ = 1, which corresponds to a spin rotation invariant Hamiltonian with g → 0 [30].Our study can be readily extended to any value of K σ when g = 0. Furthermore, as mentioned in the previous section, we still consider, even if we take a zero spin gap, that the interchain coupling is of the Josephson form, with a fixed J. We come back on this approximation in Sec.V B. 
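Before turning to the g = 0 computation, it may help to make the structure of the T_c condition concrete. The explicit form of (17) is not reproduced in this text; in the RPA form used for χ(T), the critical temperature corresponds to the interchain coupling compensating the single-chain susceptibility, which we take here, as an assumption, to read zJ χ_0(T_c) = 1 (possibly up to a numerical prefactor). The sketch below solves such a condition by root finding for a user-supplied χ_0(T); the power-law χ_0 in the example is purely illustrative and is not the disordered χ_0 computed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def solve_tc(chi0, zJ, t_lo=1e-6, t_hi=10.0):
    """Solve the assumed RPA condition zJ * chi0(T_c) = 1 by root finding.
    chi0 must decrease with T so that the bracket [t_lo, t_hi] contains the root."""
    return brentq(lambda T: zJ * chi0(T) - 1.0, t_lo, t_hi)

# Illustrative single-chain susceptibility: a low-temperature power-law divergence,
# chi0(T) = A * T**(-gamma), as expected for an isolated attractive chain.
A, gamma = 1.0, 0.7
chi0 = lambda T: A * T ** (-gamma)

print(solve_tc(chi0, zJ=0.05))   # T_c = (zJ * A)**(1/gamma) ~ 0.014
```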
The dephasing induced by the disorder (9) can be readily computed in the case g = 0. It affects the spin part of the correlations and consequently T_c. For g = 0, a change of variables similar to the one performed in the charge sector is made, Eq. (24); this transformation removes the disorder from the Hamiltonian. Unlike the case of non-magnetic disorder, however, the change of variables (24) modifies the susceptibility (19). After performing the ensemble average over disorder, the resulting correlations (25) show that the forward magnetic disorder leads to an exponential decay of the spin part of the correlation function, with a characteristic lengthscale set by the disorder strength. Consequently, one can expect a strong impact of magnetic impurities on T_c, roughly amounting to a suppression of the superconducting critical temperature when the thermal length associated with T_c becomes larger than the length associated with the disorder. More quantitatively, to determine T_c we solve (17), which for g = 0 reduces to (26), an expression involving a modified Bessel function of the first kind and a modified Struve function. For simplicity we have assumed in this formula that the charge velocity u_ρ and the spin velocity u_σ are identical, so that the same r can be used for the spin and charge sectors. These two velocities are in general different [30]. The generalization of (26) to two different velocities is straightforward and does not change the results fundamentally, albeit at the cost of losing analytically closed expressions; we discuss it in Appendix C.
To compute the integrals (26) numerically, one has to be especially cautious. While the difference of the Bessel and Struve functions is well behaved, the two functions individually diverge exponentially. One has to implement a series expansion of the difference at large argument to compute the integrals accurately. To achieve good numerical convergence, we first perform the integral over r and only afterwards the integral over the angle θ, using polar coordinates in (26). It is also worth noting that although the disorder leads to an exponential decay in space, its time independence preserves a good temporal coherence; after integration over the polar angle this ultimately yields a power-law decay of the correlation at large distances, with the asymptotic behaviour F(x) ≃ 2/(πx). The decay induced by the disorder is thus less dramatic than one could have expected.
To understand the results, let us introduce several characteristic lengths. On one hand, the disorder length ξ_D, fixed by 2D_{f,m}, controls the exponential decay rate in (25). On the other hand, ξ_J = v_F/T_c(0) is a length associated with the superconducting state in the absence of disorder. We also define D_c as the critical disorder, the disorder strength beyond which there is no finite T_c associated with the superconducting phase transition. Fig. 2 shows the ratio T_c/T_c(0), the critical temperature normalized by its value without disorder, as a function of ξ_J/ξ_D for different values of the attractive interaction U, showing the destruction of the superconducting state by the magnetic disorder. The curves depend on the interaction, with the more attractive cases being slightly more robust to disorder. The inset shows the same effect as a function of D/D_c. While the curves do not perfectly collapse onto each other, they show an excellent scaling in these variables.
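The numerical caution described above can be illustrated on the combination I_0(x) − L_0(x), a modified Bessel function of the first kind minus a modified Struve function, whose large-x behaviour 2/(πx) matches the asymptotics quoted for F(x). The sketch below is not the actual integrand of (26), which is not reproduced in this text; it only shows one way to evaluate such a difference without catastrophic cancellation, by using the integral representation I_0(x) − L_0(x) = (2/π) ∫_0^{π/2} exp(−x sin t) dt instead of subtracting two exponentially large numbers.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, modstruve

def bessel_minus_struve(x):
    """I_0(x) - L_0(x) via the integral representation
    (2/pi) * int_0^{pi/2} exp(-x sin t) dt, which avoids the cancellation
    between two exponentially large terms."""
    val, _ = quad(lambda t: np.exp(-x * np.sin(t)), 0.0, np.pi / 2)
    return 2.0 * val / np.pi

for x in (1.0, 10.0, 50.0, 200.0):
    naive = i0(x) - modstruve(0, x)       # catastrophic cancellation at large x
    stable = bessel_minus_struve(x)
    print(f"x={x:6.1f}  naive={naive: .6e}  stable={stable: .6e}  "
          f"2/(pi x)={2.0 / (np.pi * x): .6e}")
```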
The limit of g = 0 is thus a simple limit showing clearly the analogy for the quasi-1D situation of the Anderson theorem.Although the non-magnetic (forward) disorder has no effect on T c , the magnetic one (time reversal breaking) impacts the critical temperature.Note that these effects are not connected to Anderson localization since they are produced by forward scattering on the disorder. B. The general case g ̸ = 0 The calculation of the previous section actually overestimates the effects of magnetic disorder, because the full Hamiltonian with a finite attractive interaction creates a spin gap via the cos( 8φ σ ) term.This spin gap, which locks the particles in singlet states, prevents the magnetic field from acting.Note that in the quasi-1D geometry that we consider, the critical temperature is controlled by the interchain Josephson coupling, while the spin gap essentially depends on the ratio t ∥ /|U|.Thus, it is perfectly possible to have a large spin gap and a small T c , contrarily to what happens if one computes the T c in the BCS approximation, for which the spin gap is essentially zero close to T c .This situation is very similar to the case of the attractive higher dimensional Hubbard model, for which a large regime of pseudogap can exist above T c when |U| is large. Determining the effect on T c is more involved in the case g ̸ = 0. We describe in this section a renormalization group method allowing the calculation of the correlations in the spin sector. To renormalize the susceptibility χ o , we follow a similar procedure than the one used in [36].The correlation function R σ is given for g = 0 by For r ≫ α, we have x 2 + (u σ |τ| + α) 2 ≈ r 2 .We consider the function with g = 0, H σ = 1/2.For g ̸ = 0, we renormalize the cutoff α until it reaches r in the perturbative expansion of (28) in powers of g.This multiplicative renormalization procedure allows to define a function I σ (dl, g(l)) such that RG flow The algebra can be found in Appendix A. The renormalization equations for the parameters K σ , g, D f ,m read: where I o , I 1 and I 2 are modified Bessel functions of the 1st kind, and L o , L 1 and L 2 are modified Struve functions.Note that for simplicity, we neglect the renormalization of the u σ parameter in our calculation.Indeed we don't expect the renormalization of the speeds to be large, and we also don't expect it to lead to new physical phenomena. In the limit of no-disorder D f ,m → 0, we recover the usual RG equations for K σ , g because F (0) = 1.Here, we directly see that the disorder and the g term are competing against each other.The g term controls the decrease of the disorder (32).On the other hand, the effect of the disorder on g is more subtle.The RG equation for g (31) depends only on K σ , whereas if K σ is smaller than 1, g is relevant, and irrelevant otherwise.However, the disorder acts on the RG equation for K σ (30) and slows down its decrease (F (x) is always smaller than 1) !It therefore indirectly opposes itself to the parameter reaching a regime where g would be relevant (or at least slow down the g divergence if g is relevant).More details on the interpretation of the RG equations are given in the Appendix B. Below the separatrix, as seen in Fig. 
3.c, the term in g diverges first.This can be interpreted as the system succeeding in having a spin gap due to the attractive interaction, with the disorder being too weak to destroy this gap.Furthermore, in this regime, the disorder is suppressed by the interactions, and D f ,m goes towards 0 (and D f ,m α doesn't diverge).The same is true for K σ , which also goes towards 0. Since the RG equations are derived perturbatively in g, and similarly to what is done for the case of the simple sine-Gordon flow [30], we stop the flow at g = 1, beyond which the RG becomes unreliable. On the other hand, if we are above the separatrix, the disorder wins and manages to destroy the spin gap.Both D f ,m and K σ go to constants.Even if g diverges, it has no effect on the other parameters of the flow.This regime becomes similar to the study of the previous section Sec.IV A where there was no g term and thus no gap. Correlation functions We compute using the same renormalization procedure, the correlation function R σ (r).This correlation is given by: If g dominates, we have to stop the RG flow when g gets of order 1.Beyond this point, all the correlations related to the spin degree of freedom are frozen to a constant.So our correlation function starts as a power law like function whose exponent is renormalized as we go to larger scales, until the correlation freezes to a constant. If the disorder dominates, we do not stop the RG flow, since the diverging g has no effect anymore on the correlation function and the other parameters.Neglecting g is fully justified beyond the point for which D f ,m α ∼ O(1).Note, however that there is an intermediate regime, due to the term linear in g in the renormalization of R σ (r) that is difficult to control perturbatively, since the renormalized g is large, but the exponential decay that sets in when D f ,m α ∼ O (1) has not yet started.This case is similar to the perturbative treatment of the commensurateincommensurate phase transition [37], and ultimately does not affect the physics of the problem.In this regime we thus qualitatively have a correlation function which start as power law, gets corrected a bit (until g stops "resisting" the disorder) and finally decays exponentially, similarly to its behavior in the model without g. Critical disorder The calculation of the correlation function allows us to determine D f c , the critical disorder at which superconductivity is killed.To practically evaluate it, we take in this section the definition that when T min = 10 −8 in units of α o /u σ we can consider that this is equivalent to having completely killed the superconductivity.Furthermore, in this case, given the sudden changes in the values of χ o (0, 0) as we transition from one regime to the other with large interactions, the best that we can numerically do is approach D f c from below. 
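To make the separatrix criterion concrete, the sketch below integrates a schematic flow and reports which of |g| or D_{f,m}α reaches strong coupling (taken as 1) first, which is exactly how the two regimes are distinguished above. The beta functions are placeholders: they only reproduce the qualitative structure described in the text (g relevant for K_σ < 1, a factor F < 1 slowing the decrease of K_σ, and the growth of the disorder suppressed by g). They are not Eqs. (30)-(32), the function F is likewise a stand-in with the stated properties F(0) = 1 and F(x) → 2/(πx), and the initial conditions are arbitrary.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

def F(x):
    """Stand-in for the function F of Eq. (33): F(0) = 1, F < 1, F -> 2/(pi x).
    Implemented as I_0(x) - L_0(x) through its integral representation."""
    val, _ = quad(lambda t: np.exp(-x * np.sin(t)), 0.0, np.pi / 2)
    return 2.0 * val / np.pi

def beta(l, y):
    """Schematic beta functions standing in for Eqs. (30)-(32); prefactors are
    illustrative, only the qualitative competition described in the text is kept."""
    K, g, D = y                        # D stands for D_{f,m} * alpha
    dK = -0.5 * g**2 * K**2 * F(D)     # F < 1: disorder slows the decrease of K_sigma
    dg = 2.0 * (1.0 - K) * g           # g relevant for K_sigma < 1
    dD = D * (1.0 - g**2 * F(D))       # naive growth of D, suppressed by g
    return [dK, dg, dD]

def classify(K0=0.8, g0=0.5, D0=1e-4, l_max=20.0):
    """Integrate the flow and report which of |g| or D reaches 1 first
    (the separatrix criterion used in the text)."""
    hit_g = lambda l, y: abs(y[1]) - 1.0
    hit_D = lambda l, y: y[2] - 1.0
    hit_g.terminal = hit_D.terminal = True
    sol = solve_ivp(beta, (0.0, l_max), [K0, g0, D0],
                    events=[hit_g, hit_D], rtol=1e-8, atol=1e-10)
    if sol.t_events[0].size:
        return "g reaches 1 first: spin gap wins, disorder suppressed"
    if sol.t_events[1].size:
        return "D_{f,m} alpha reaches 1 first: disorder wins"
    return "neither coupling reached strong coupling before l_max"

print(classify(D0=1e-4))   # weak initial disorder
print(classify(D0=0.6))    # strong initial disorder
```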
In Figure 4, we compare the separatrix of our RG flow to the critical disorder for different interactions U. While for small interactions there exists a region where the system remains superconducting even though disorder dominates the RG flow, for large interactions the critical disorder coincides with the separatrix, i.e., with the change of regimes. This can be interpreted as follows: when the spin gap is very small, the competition between disorder and superconductivity is primarily governed by the competition between the random magnetic field, which acts as a random chemical potential for each spin species separately, and the Josephson term that favors a q = 0 pairing in a singlet state. When the gap is large, the random magnetic field must first destroy the gap to be effective, but once this happens it very efficiently outcompetes the (small) Josephson coupling term.
Critical temperature
The susceptibility of one chain χ_o(q = 0, ω = 0) is given by an integral over space and imaginary time, with r^2 = x^2 + (u_σ τ)^2, where we neglect the factors of α inside r. Using polar coordinates with u_σ τ = y and integrating over the angles, we obtain (38), where the parameters K_σ and D_{f,m} appearing outside the integrals over l are the parameters at the beginning of the RG procedure. This expression is similar to the one for the g = 0 model, the main differences being the appearance of a term linear in g, due to the specific correlation function we are considering, and the fact that the parameters K_σ and D_{f,m} are here renormalized by the RG flow.
The behaviour of χ_o(0, 0) varies depending on the scale and on the regime (interaction or disorder dominated) we are looking at. At short scales, in both cases, the integrand of (38) is proportional to the power law r^(1 − 1/K_ρ − K_σ), since F(r) goes to 1 for small r. At large r, χ_o(0, 0) behaves differently depending on which parameter dominates. If g dominates, since R_σ(r) is now a constant, the integrand of (38) becomes proportional to r^(1 − 1/K_ρ), a power law that always increases with r; therefore, there is always a finite critical temperature in this regime. On the other hand, if disorder dominates, the integrand of (38) behaves as r^(−1/K_ρ − K_σ), which decays rapidly enough to ensure convergence for the interactions considered. This implies that there may not always be a finite T_c in this regime. This abrupt change in behaviour impacts the critical temperature. One should note that this sudden change is at least partly due to our treating, in the RG procedure, the two regimes above and below the separatrix as completely different regimes. This approach allows for a simple estimation of the complicated integral while retaining the main features of the solution; a more complete treatment would smoothen the curve.
Finally, we compute the critical temperature by solving (17). Fig. 5 shows two different regimes in the critical temperature. To compute these integrals numerically, it is useful to apply different treatments depending on the RG regime. Unlike the case g = 0, it is better to perform the integration over the angle θ first and then integrate over r. For small enough interactions, we recover a behavior very similar to the g = 0 case. However, for large interactions, the decrease of T_c is dramatically slowed by the presence of the finite gap, and then T_c drops rapidly to zero, as explained qualitatively above. Fig.
6 plots the same quantity as a function of the ratio of the two characteristic lengthscales ξ J (characterizing the superconducting phase in the pure case) and ξ D (the lengthscale of the exponential decay for the disordered case).As FIG. 6. T c as a function of the strength of magnetic forward disorder for different values of U. The lenghts ξ J (resp.ξ D ) characterize the superconducting order coming from the transverse coupling J (resp. the exponential decay due to disorder).This distinguishes two regimes depending on whether the spin gap of an isolated chain is large or small. shown in the figure, when the gap is small (small U) the superconductivity is suppressed when these two lengths are approximately equal.In contrast, when interactions are large and there is a well formed spin gap, the disorder must first overcome the spin gap, independently of the scale ξ J which characterize the transverse coupling.This leads to the slow and somewhat linear decrease of T c .Once the gap is gone the disorder is at that point large enough to also overcome the contribution coming from the transverse Josephson coupling. A. Comparison with the isotropic BCS solution From the previous section, we see that for the quasi-1D situation, we obtain the equivalent of an Anderson theorem, originally derived close to the T c in an isotropic situation with a solution of the BCS equations.A weak random chemical potential does not affect T c or the pairing correlation functions below T c .However, a disorder breaking the timereversal symmetry, such as a random magnetic disorder, has a more complex effect.Essentially, such a disorder appears in the pair correlation functions and thus will have an effect on the superconductivity as indicated in Fig. 5 and Fig. 6. We however consider a situation where T c and pairing are essentially controlled by the strength of the interchain (Josephson) coupling, with the interactions inside a chain being essentially arbitrary.Thus, it is perfectly possible to have a strong spin gap ∆ σ while having at the same time a small T c .In this case, as shown in Fig. 6, the magnetic disorder must first destroy the spin gap before it can influence T c or the pair correlations.For an isotropic case, such a situation would also occur on a lattice, with e.g. an attractive Hubbard model at large |U|.In such a situation, the spin gap scale is essentially |U|, while the kinetic energy of the pairs becomes 4t 2 /|U|, leading to a small condensation temperature and thus to a small T c .It would be interesting to study the effect of magnetic disorder in such a system to see if similar effects than the ones observed here would be found. B. Interchain hopping and Josephson coupling In the model considered in the previous sections, we assumed that the chains were coupled by a Josephson coupling allowing for the hopping of singlet pairs across the chains.As discussed in Sec.II B, most microscopic realizations actually contain single particle hopping between the chains.In the presence of a spin gap, due to the attractive interaction, the single particle hopping is suppressed and replaced by the Josephson coupling we have considered in this paper. For the non-magnetic disorder, keeping only the Josephson coupling poses no problem since the spin gap is preserved by the disorder.The results derived in the previous section are thus directly applicable to systems with single particle hopping, and we do not expect any important difference between the two models. 
For the magnetic disorder, on the contrary, the spin gap is first destroyed by the disorder and it is thus a challenging and important question to know how the results we obtained would apply when the system has single particle hopping to start with.A detailed solution, particularly by renormalization techniques that have been used to tackle the competition between the single particle hopping and the particle-particle or particle-hole hopping [30], is clearly beyond the scope of the present paper and will be left for a future publication. However, one can expect the general results derived here with the Josephson coupling to be largely valid.Indeed, the main additional effect for magnetic disorder will be the destruction of the Josephson coupling, leading to singlet pair hopping.This should naively make the destruction of the singlet superconductivity even more efficient than in a model where the Josephson coupling is kept constant.We can thus expect naively an even stronger effect of the magnetic disorder, making the contrast between the magnetic and non-magnetic disorder even more marked.Another interesting possibility when looking at a model containing single-particle tunneling is that there might also appear some intermediate region in disorder, where the reduction of the spin-gap due to the disorder could lead to an increase in the coupling J.This increase would "oppose" the decrease of the pair-susceptibility and therefore might increase T c . An interesting possibility for the case of single particle hopping is the potential to stabilize other phases in presence of the magnetic disorder.One order parameter that would be robust to the magnetic disorder is or the equivalent one with a sine.This order parameter corresponds to the x or y component of a triplet order parameter. A corresponding pair-hopping term is also generated by the single particle hopping but is subdominant for an attractive interaction.Indeed, since the field φ σ orders, the correlations of the field θ σ decrease exponentially fast.However, in presence of the magnetic disorder, the cos( 8φ σ ) term in the single chain Hamiltonian ( 7) is essentially killed, and the θ σ are the only correlations without exponential decay.Since K ρ > 1 due to the attractive interaction, the θ ρ correlations are favored in the charge sector.This would lead to the interesting possibility of replacing the singlet superconducting phase with a triplet one when the magnetic disorder becomes large enough.Of course, this phase will have a lower T c but should survive even at relatively large magnetic disorder.This could be a practical possibility to stabilize a triplet superconducting phase even in the presence of contact attractive interaction, e.g. in a cold atom realization. C. Possible implementations To test for the the effects investigated here, cold atomic systems provide a natural potential realization.Several key ingredients needed could be realized in such systems.The coupled 1D structures of fermions with attrative interactions can be readily realized, either in systems made of several tubes [38] or in systems with quantum microscopes [39,40]. One of the required key ingredients is a disorder that would be mainly forward.This also can be realized either with the natural limitation provided by a speckle disorder [31] or in systems such as quantum microscopes by generating the disorder via DMD and tuning the Fourier transform of the disorder so that Fourier components close to 2k F are absent. 
Measuring T C itself might not be experimentally easy to realize, but a simpler measurement could be provided by the decay of the pair correlation functions along the tubes, which are a direct measure of the existence of superconductivity in the system.In that respect, for quantum microscopes, since the easily measured quantity is the density, it could be useful to make use of the relation that exists between the attractive and repulsive Hubbard models [41].Such a transformation maps the attractive model into the repulsive one and the random chemical potential into a random magnetic field and vice versa.The observables are directly related by a particle-hole transformation on spin down only [41].In particular, a singlet order parameter would map onto an antiferromagnetic spin order along the x or y direction. In the language of the repulsive Hubbard model, one would thus conclude from the results of the previous sections that a random magnetic field along z would essentially not affect the correlation function of the antiferromagnetic fluctuations along x or y of a half filled system (one particle per site), leading to a T C for antiferromagnetic order in the plane that is essentially unchanged.This corresponds to the Anderson theorem for the non-magnetic disorder.On the other hand, putting a random chemical potential along the tubes would drastically destroy such correlation, corresponding to the destructive effects of the magnetic disorder on single superconductivity that we have found in the present study.The competition discussed earlier between the magnetic disorder and the spin gap in the attractive side becomes the competition between the Mott gap and the random chemical potential on the repulsive side.It is necessary for the random chemical potential to be stronger than the Mott gap to locally dope the system.Once this is reached, however, the spin-spin correlation in the x − y plane gets very rapidly destroyed. In condensed matter systems, one would have to use highly anisotropic systems.Organic superconductors are a good candidate [4], and there may also be a possibility of investigation in some 2D systems where high anisotropy has been reported, such as CrSBr [42].The easy part here is identifying superconductivity since simple resistance measurement (and Meissner effect) are routine experiments.However, the hard part in this kind of experiments would be controlling the disorder to keep the backscattering much smaller than the forward scattering along the chains.One possibility would be, like for two dimensional semiconducting systems, to place the impurities far from the conducting chains.This, however, would have the drawback to also lead to quite correlated potentials from one chain to the next. VI. 
CONCLUSION
In this work, we extended the Anderson theorem for superconductivity, which states that non-magnetic impurities do not impact a BCS superconductor, in the sense that they do not change its critical temperature, while magnetic impurities have a drastic effect. We considered a quasi-1D system with forward scattering disorder, coupled by a Josephson coupling favoring singlet superconductivity. We showed that for such a system, non-magnetic forward disorder leaves T_c and the pair correlations essentially unchanged. On the other hand, magnetic disorder has a significant impact on the system. Once such disorder overcomes the spin gap, it starts destroying the pair correlations, and hence the superconductivity, very efficiently. Interestingly, the correlation function that seems to survive the magnetic disorder and still decays slowly is the xy part of the triplet superconducting correlation (with a random magnetic field along z).
We also discussed various possible tests of these predictions in condensed matter and especially in cold atomic gases. Quantum microscopes provide an ideal system to test the predictions of this paper, using an implementation of a repulsive Hubbard model with one particle per site. In that case, magnetic disorder would leave the xy antiferromagnetic spin-spin correlations essentially unchanged, while non-magnetic disorder would rapidly destroy such correlations once it is able to suppress the Mott gap.
Several extensions of our work would be interesting. In particular, since chains are coupled by single-particle tunnelling and not just by pair tunnelling, other pair couplings can be generated. This raises the question of which instability could be dominant once the singlet superconductivity has been destroyed. The most likely candidate is a triplet superconducting pairing. Whether such pairing, which dominates for a single chain, could effectively be stabilized in the 2D or 3D case is an interesting question and challenge. Indeed, if this is the case, it would provide a route to realizing triplet superconducting phases with purely contact interactions. These questions will be examined in future studies.
Appendix A: Computing the RG equations
In this appendix, we describe in more detail how to compute H_σ(r_a) and derive the RG equations (30, 31, 32, 35). We start from (A1), a functional average of (1/4) e^{iϵ_1 √2 φ_σ(r_a)} e^{iϵ_2 √2 φ_σ(0)} with weight e^{−S_{φ_σ}}/Z_{φ_σ}, where S_{φ_σ} is the full 1D action of the spin sector after integrating out the θ degrees of freedom, and Z_{φ_σ} is the partition function associated with it. We first absorb the disorder terms of the Hamiltonian into the definition of the field φ_σ and replace it by the shifted field (see also the main text). We then expand the action to second order in g. For simplicity of notation, we will drop the σ of φ_σ and the m of D_{f,m} in the following equations. Integrating over the configurations leads to the expansion in powers of g; the average over H^o_φ is an average over the quadratic part of the Hamiltonian expressed in terms of φ.
Disorder averages To perform the disorder averages in the above expression, we rewrite the integrals with the help of the Heaviside function.Completing the square and simplifying the result leads for the first disorder average to: where Z D = dγe The second disorder average (linear term in g) similarly leads to: • e −2([min(x a ,0−x)]θ (−x)θ (x a −x)+[x−max(0,x a )]θ (x)θ (x−x a )) (A2) For the third term (second order in g), we obtain: • e (min(x 2 ,x a )−max(0,x 1 ))θ (x a )θ (x 2 )θ (min(x 2 ,x a )−x 1 ) • e Further algebra and some tricks The remaining configuration averages can be easily computed since H o φ is quadratic.In this case, we use the fact that 〈e A 〉 = e 1 2 〈A 2 〉 and the relation [30] We can then combine the connected and disconnected terms of the second order expansion since they are similar in all of their terms expect for the cross terms between (0, x a ) and (x 1 , x 2 ) which appear only in the connected term.This leads to a term of the form: . . . From (A3) we obtain: which allows us to obtain the renormalization equations for the interaction terms and for the disorder separately. The interaction renormalization equations We apply the procedure described in [30] to the second term of (A6) multiplied by (A5).In this part of our equations, no disorder (D f ) appears.Rewriting r 1 and r 2 as center of mass (R = (X = (x 1 + x 2 )/2, Y = u σ (τ 1 + τ 2 )/2)) and relative coordinates r, we recognize gradients of F 1 : (∇ R F 1 (r a − R)).In the expansion in small r up to second order, the ∇ 2 X − ∇ 2 Y term renormalizes the velocity u σ .We neglect this contribution since the change of velocity does not affect the physics of the problem in an essential way.On the contrary, we retain ∇ 2 X + ∇ 2 Y , which can be simplified using Using polar coordinates the full term becomes: where F has been defined in (33) Looking at the contribution of α+dα α d x where we define α = α o e l , we obtain the RG equations for K σ (30) and g (31). Disorder renormalization equations We now look at the first term of (A6) multiplied by (A5).We discuss the case x a > 0 since the other case can be treated similarly.Since for for large disorders, the 2 exponentials which we don't expand would suppress completely this term, we can expand the exponentials containing the "crossed" disorder terms and the exponential contaning the F 1 terms at first order. We now also have to separate the integrals on x 1 and x 2 to handle all cases arising from the Heaviside functions.Those split in 4 categories (our chains have size 2L): Since the term is suppressed exponentially in |x 1 − x 2 |, the fourth category of integrals is negligible since the minimal interval between x 1 and x 2 is L, which is half the size of the system.The most important contribution comes from from the first term, which covers most of the |x 1 − x 2 | "small" region, and gives us the renormalization that we use.Finally, the second and third term are "marginally relevant" in the sense that while they have also possibilities of having small |x 1 − x 2 |, they have only one point where x 1 and x 2 coincide.This last terms would give rise to less relevant terms in the renormalization equations. Another way of looking at it is that we are looking at the effect of having disorder on two points, x 1 and x 2 , on the correlations related to two fixed point, x a and 0. 
Since the effect of x_1 and x_2 is suppressed if they are far from each other, given the splitting of our integral, the importance of each contribution is related to the number of possibilities for having x_1 and x_2 close together. We then go to center-of-mass and relative coordinates and perform the integral over the center-of-mass coordinates first, starting with the integral over the center-of-mass "time". The resulting integral then leads to (A9). After converting to polar coordinates and performing the integration, we end up with a two-term expression. In this last expression, the second term leads to a new RG equation which is less relevant. The first term instead, when re-exponentiated, leads to the RG equation for the disorder (32).
Correlation function renormalization
The last element that we need comes from the linear term in g. By splitting the integral over dx to handle the Heaviside functions, we find that the disorder term can be rewritten uniformly as a single exponential factor. Finally, combining all of these terms together, we derive the RG equation of the correlation function (35).
Appendix B: Details on the interpretation of the RG equations
We can explore our RG equations (30), (31), (32) by noticing that in all of them the disorder strength D_{f,m} is accompanied by α. This suggests making the change of variable D = D_{f,m} α (the quantity which we compare to g to decide which one wins the RG). This leads to a rewritten set of RG equations in which the competition between g and D is evident. Both would be diverging exponentials if the other were set to 0, and each is contained (or, in the case of the equation for D, even suppressed) by the other. The main question which remains at this stage is which one of the two will diverge first. Since F and G are complicated functions with simple power-law behaviour at large argument, we can expand them in that limit. Note that this expansion assumes that D is large, and therefore D ≠ 0; it thus describes well the case where the disorder wins, while one has to be more careful if g wins the RG. Expanding, we obtain large-argument expressions for the RG equations, where α = α_o e^l. Here we can again see clearly the competition between disorder and g if we make the same change of variable as above. However, another change of variable illustrates a different aspect of our RG. If we redefine our parameter g → g̃ = gα, we get a rewritten pair of RG equations. Since these expressions are valid at large D_{f,m}α, the disorder equation shows us that the parameter D_{f,m} is in practice frozen. But the main interest here is the second equation, the one for g̃. We see that g is relevant for K_σ > 0.75, which is reminiscent of the case of a backward disorder that would exist only in the spin sector [30]. In the following, we treat the charge velocity and the spin velocity as different. The susceptibility of one chain, χ_o(q = 0, ω = 0), is then given by an integral over x and τ in which r_ρ² = x² + (u_ρ τ)² and r_σ² = x² + (u_σ τ)²; we neglect the factors of α inside r_{ρ/σ}. We make the change of variables u_σ τ = y, which, after passing to polar coordinates, leads to the final expression.
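As a purely illustrative aside, the insensitivity of such an integral to the velocity difference can be checked numerically on a toy version of it. The decay exponents, cutoff, and velocity values below are hypothetical placeholders chosen only for illustration; they are not the paper's actual correlator or parameters.

from scipy import integrate

# Toy susceptibility-like integral with two "velocities".
# Exponents A, B and cutoff ALPHA are HYPOTHETICAL placeholders.
A, B = 1.5, 1.5
ALPHA = 1.0  # short-distance regularization

def chi_toy(u_rho, u_sigma, cut=50.0):
    """Integrate (r_rho^2)^(-A/2) * (r_sigma^2)^(-B/2) over a finite box."""
    def integrand(tau, x):
        r_rho2 = x**2 + (u_rho * tau)**2 + ALPHA**2
        r_sig2 = x**2 + (u_sigma * tau)**2 + ALPHA**2
        return r_rho2 ** (-A / 2.0) * r_sig2 ** (-B / 2.0)
    val, _ = integrate.dblquad(integrand, 0.0, cut, 0.0, cut)
    return 4.0 * val  # symmetry in x -> -x and tau -> -tau

print(chi_toy(1.0, 1.0))  # equal velocities
print(chi_toy(1.0, 0.8))  # unequal velocities: only a modest difference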
As we can see in Fig. 7, the difference between the case where we set u_σ = u_ρ and the case where we take their real values coming from their definition (12) is minimal. So the qualitative behavior can be quite well described analytically by the expression (26) without taking into account the difference of the speeds, while if one wants to be slightly more quantitative, one can refer to the expression tracking the difference between u_ρ and u_σ.
Model with g ≠ 0
In the same spirit as above, the susceptibility is now given as a function of the two different distances r_ρ and r_σ, and we can again see in Figure 8 that the qualitative behaviour is the same in both treatments of the speeds, for both regimes of the RG.

FIG. 2. T_c/T_c(0) as a function of the ratio of characteristic lengths for different interactions U. The inset shows the same quantity as a function of the disorder normalized by the critical value of the disorder D_c at which superconductivity is destroyed.

FIG. 3. (a) RG flow of our model; in red, the separatrix between the two different regions (disorder dominating vs. g dominating); (b) example of the flow for parameters for which the disorder dominates; (c) example of the flow in the g-dominated regime.

Fig. 3a shows the RG flow in the D_{f,m}α − g plane. We separate it into two zones, delimited by a separatrix (red line), defined by which one of those quantities (in absolute value) reaches 1 first.

FIG. 4. The black line is the separatrix between the two RG flow regions, where either the interactions or the disorder dominate. The red points correspond to the critical disorder for a given U/(g), separating the regions with and without superconductivity.

FIG. 5. T_c, normalized, as a function of the strength of magnetic forward disorder normalized by the critical disorder, for different values of U.

FIG. 7. Plots of the behavior of the critical temperature with respect to disorder, comparing both treatments of u_ρ/u_σ as described in the main text and in this appendix.

FIG. 8. Plots of the behaviour of the critical temperature with respect to disorder, comparing both treatments of u_ρ/u_σ as described in the main text and in this appendix, in both regimes of the RG.
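To make the competition depicted in Fig. 3 concrete, the following minimal sketch integrates a schematic flow with the qualitative features described in Appendix B: each coupling grows on its own and is contained by the other. The growth rates and cross-coupling coefficients are hypothetical placeholders, not Eqs. (30)-(32) of the paper.

# Schematic RG flow illustrating the D-vs-g competition of Fig. 3.
# All coefficients below are HYPOTHETICAL placeholders.
LAM_D, LAM_G = 1.0, 0.5   # linear growth rates of D and g
C_DG, C_GD = 2.0, 2.0     # mutual suppression strengths

def flow(D0, g0, dl=1e-3, lmax=20.0):
    """Integrate the schematic flow until D or g reaches 1."""
    D, g, l = D0, g0, 0.0
    while l < lmax:
        dD = (LAM_D - C_DG * g**2) * D * dl  # g suppresses D
        dg = (LAM_G - C_GD * D) * g * dl     # D contains g
        D, g, l = D + dD, g + dg, l + dl
        if abs(D) >= 1.0:
            return "disorder wins", l
        if abs(g) >= 1.0:
            return "g wins", l
    return "neither reaches 1", l

print(flow(0.05, 0.20))  # disorder-dominated starting point
print(flow(0.01, 0.50))  # g-dominated starting point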
12,735.8
2024-03-19T00:00:00.000
[ "Physics" ]
Interview with Dennis Pearl
DP: That would be 1969. I was a sophomore at Berkeley, protesting the Vietnam War and discovering that I was no longer the smartest guy in the class like I used to be in high school. I had a terrific AP calculus teacher in high school and figured I'd be a math major, go on to graduate school, and become a math professor. Unfortunately, I had a pretty boring instructor in my linear algebra class at Berkeley and the same guy was going to teach the next course in the series, so I decided to shop for something else. My brother was a senior there and convinced me to take an introductory statistics class with him taught by Erich Lehmann. By the second week I was hooked! My math classes were about understanding elegance while statistics was about understanding evidence, and it just seemed that the world needed more people that could do the latter. At that point, I didn't think much about education except noting which qualities I liked in an instructor and which I didn't. I liked organized professors like Lehmann, witty professors like Elizabeth Scott, professors that kept their class involved in discussions like David Freedman, professors that had the class construct their own knowledge like Jerzy Neyman, and friendly professors like Kjell Doksum. Then, there was David Blackwell, who was in a class by himself, as the quintessential explainer of complex things and who also had all of the traits mentioned above. The Berkeley Statistics Department had a lot of great teachers, and all of them gave me a feel for the relevance of the subject. I had the opposite feelings about the math instructors I encountered and so I became a Stat major.
AR: My, that's quite a list of professors that you had as an undergraduate. What did you do after you earned your bachelor's degree, and how did you make that decision? DP: Oops-I only had Neyman as a grad student-but his teaching style was unique. He just stayed seated and asked random students to go to the board and work things out. If you were lost, he called on someone else-but after a few sessions every student worked like crazy to be prepared! At the end of spring quarter in 1971, I had the units to graduate and I had no doubt that I was going to grad school in statistics-but felt like I needed to slow down and possibly take some time off (I had only just turned 20 years old). So, I postponed my graduation until after fall quarter and luckily was selected to take part in an NSF-supported research program organized by an engineering professor that turned out to be a life-changing experience. It involved eight senior undergrads: two from math, two from statistics, two from engineering, and two from ecology. Our task was to model fire and its effects. We had access to four farmers in California who were doing controlled burns on their property. The math students worked out differential equations on how the fire might spread; my teammate and I in statistics designed the experiments to collect data and then worked to analyze it; the engineering students devised the instrumentation to measure stuff; and the ecology students made predictions about the effects on the non-plant organisms in the burn area. At the end of the summer, I was picked to go to the national AAAS meetings in Philadelphia to present our findings, where I heard talks by Carl Sagan and met Margaret Mead. That summer I stopped thinking about statistics as a branch of mathematics and realized that what I wanted in life was a career as a Statistical Scientist, with the emphasis on science. In my final quarter as an undergrad (Fall 1971), I asked David Blackwell if he would supervise me in a reading course to attempt to model the spread of fire using ideas from game theory, so I was able to continue that experience with my teaching idol. From January until grad school started for me in September 1972, I spent a little time working to have enough money to travel around Europe, and I met my wife in Jerusalem (so my second life-changing experience in one year). AR: That was quite a year! I hope you haven't averaged two life-changing experiences per year since then, because that would probably be too much change. You stayed at Berkeley for graduate school, right? Did you consider going elsewhere? DP: Yes-I stayed in Berkeley for another 10 years and never considered anyplace else for grad school. I was a T.A. for Freedman, Pisani, and Purves (2007) as they wrote their classic introductory text. I became involved in studies of molecular evolution when the field was just new. I worked on a congressionally mandated study of the effects of ozone depletion on human health (my dissertation was about a stochastic model of skin cancer), on studies of law school admission policies toward women and minorities, and on studies of the workings of the subconscious brain. ResearchGate.net tells me I'm at about 7500 citations now, and probably a good quarter of them came from my work as a grad student. So things were pretty interesting in my teaching, research, and collaborative work, while during those years our three children were born and my wife and I were reasonably active politically.
I guess I slowed down to averaging perhaps a little under one life-changing event per year! AR: Was teaching a strong interest of yours at that point, or just something you had to do in addition to your research? Did you have opinions about the FPP text/course then; did you have input into its development? DP: Well, I loved teaching and wanted to be better at it, but I wasn't really thinking systematically about it. We would have weekly TA meetings and Freedman, Pisani, and Purves-but especially Freedman-would talk about the big ideas of the week and tell us how to remove the mathematics and just relay the core concepts of the discipline in English. I participated in the discussions, but I was just learning from them and not really contributing. After I had taught the class once, I started speaking up more, about the details of the logistics of running the recitation sessions, to let the new TAs know what seemed to be working for me. It wasn't really until I got to Ohio State, and was coordinating TAs of my own, that I started to take a more scholarly approach to teaching. AR: Did you go to Ohio State directly from Berkeley? Did you also consider nonacademic opportunities? DP: Yep-straight to Ohio State. OSU Statistics had been given five new positions and the chair of the department, Jagdish Rustagi, was visiting Berkeley and Stanford. He offered me a position based only on the recommendation of my advisor (Elizabeth Scott) and a ten-minute conversation with me. I never thought about nonacademic opportunities, and the OSU position sounded great, so we moved to Columbus without so much as an on-site interview. Things worked differently then. AR: I assume that research was the top priority for new faculty, but what were your teaching assignments in the first few years, and how did you approach them? DP: For my first couple of years at OSU my teaching centered on graduate courses in discrete data, probability courses for engineers, and coordinating the large FPP-based intro course. For the first two, I worked hardest at developing examples that would engage the students and at trying to make classroom discussions go beyond the same four or five top students. For the intro course, I worked hardest at coordinating the TAs, holding regular meetings with them, developing activities for them to do in recitation, and being sure they had what they needed to do a good job. By 1984, I started to work on changing from a lecture/recitation format to a lecture/lab format. I created an "honors" section of the course and wrote a lab activity manual for it. I wrote a grant to start the first computer lab for teaching in the Arts & Sciences at OSU and won that, so I was doing computer labs starting in 1985. Paul Velleman had just created DataDesk and I wanted to use it a bit for exploratory analyses, but mostly for doing simulations to illustrate concepts.
Statistical Buffet
AR: Two of your education projects at Ohio State for which you received national recognition and acclaim were the Electronic Encyclopedia of Statistics Examples and Exercises and the Statistics Buffet. Which came first, and please tell us about that project. DP: The Encyclopedia project (EESEE-pronounced "easy") came first by about a decade. I worked with Elizabeth Stasny and Bill Notz at OSU and with Paul Velleman at Cornell on a series of NSF projects to build technology resources for instructors.
At OSU, we created EESEE that included examples with background information, the protocol, the dataset, student exercises, and instructor's notes. The collection grew to several thousand pages of materials as we continued to work on it for about 20 years. David Moore had W.H. Freeman include it as an electronic supplement to his textbooks, so we were able to sustain the project a good deal after the NSF funding ended. (There's a Flash version from midway in the project that is still freely accessible at www.macmillanlearning.com/catalog/static/whf/eesee/eesee.html.) Meanwhile, Paul had a great vision for where things were going technologically and translated our content into a web product he called DASL (Data And Story Library). AR: What was the genesis of the Statistics Buffet project? DP: I guess the genesis of that project would be in the successes and failures of the 1990s. Jeff Witmer was using some of my labs at Oberlin in the late 80s and showed them to Dick Scheaffer, so I joined the advisory board for Dick's Activity-Based Statistics NSF project. There I met Joan Garfield, George Cobb, Bob Stephenson, Judith Singer, Jim Landwehr, Ann Watkins, and Don Bentley, and a couple other stellar people I apologize to for not remembering at the moment. I became plugged into the broader Stat Ed community and I learned a great deal about what works and what doesn't from that outstanding group. Back at OSU, Mark Berliner and I wrote the university's General Education Data Analysis requirement, and we included the idea that acceptable courses had to have a statistical modeling component and also feature the use of technology in analyzing data or illustrating principles. Those went into effect in 1991, and our annual enrollment swelled from 1000 to 3000 students per year in the course I coordinated. I also joined the committee that designed a new teaching building for the mathematical sciences and was able to create a large computer lab complex to meet our course needs. So, by the mid-90s I had an infrastructure in place with facilities, systematic T.A. training, and systematic data collection about the course; we had hands-on, simulation, and data analysis labs; a nice repository of examples; and a bunch of neat ideas that I saw coming out of the growing Stat Ed community. The good news was that students as a whole were enjoying the class more and seemed to be learning more, with the success rate having risen from about 70% in 1991 to 80% in 1995. The bad news was that, no matter what I tried, there always seemed to be that 20% of the class that did not succeed (grade C− or lower), give or take 5% from section to section. I continued to tweak things for the remainder of the decade and had some small successes and failures, but the overall picture stayed the same. Then around 2000, Joe Verducci was running our summer T.A. training course and pointed me to Richard Felder's work on learning styles. After reading a couple of his papers, I began to feel that optimizing for the group average could only take us so far and that we needed to be thinking more about optimizing for the individual, so I came up with the broad outline of an individualized education plan that I called the Statistics Buffet and pitched it to the Provost's office in a local grant competition. AR: Before I ask how the Statistics Buffet worked, I'm curious about the GE requirement you mentioned.
Most colleges and universities that I know of have a quantitative GE requirement for which either a mathematics or a statistics course will suffice. But you instituted a GE requirement specifically for data analysis at OSU? Did that take a political battle, and how did you win? Did many departments offer courses that would satisfy the requirement? DP: I think in many places such fights are between Mathematics and Statistics interests, who are both vying for credit hours. In our case, Bostwick Wyman in the Mathematics Department was either chair or vice-chair of the university-wide committee and was an ally of Statistics. He approached Mark and me, saying that he was arguing for a Gen Ed Data Analysis requirement and asked us to write up what that should look like. To our surprise, they just took our paragraph and plugged it in without any pushback. For about a decade very few other departments offered competing courses, so we had more than 90% of the students (Psychology taught a course for their majors, and Chemistry and Physics were exempted provided they included data analysis as a component of their lab courses). There's been lots of leakage since then, as data analysis became a more and more important component of many areas, and the Gen Ed was also substantially revised when OSU changed to the semester system five years ago. AR: Very interesting, thanks. Now back to the Statistics Buffet: how did that work? DP: In one lab section, students would be generating the data for their regression lab by doing a quick hands-on experiment (e.g., timing how long it takes, Y, to complete a connect-the-dots puzzle of different lengths, X), while in the room next door a dataset from the EPA might be used for the lab. Or in a sampling distribution lab, one room has students generating their data by hand and comparing results from student-to-student, while the room next door has students using simulated data from an applet. For many students, generating data by hand helps them form a better connection to statistical concepts, while other students find such activities to be disconnected "busy work." In the Buffet model, students had a choice as to whether they'd like their labs to include more hands-on data generation activities versus more simulation-based activities. In lecture, some students would be doing some kind of group activity each session like coming to a solution with a few others in adjacent seats, while in the lecture hall next door students would face individual contemplation of questions (e.g., a clicker question). Our Buffet model gave students a choice as to whether they preferred more group activity in lecture. Similarly, students could choose whether they preferred a weekly on-line problem-solving format or an instructor-facilitated problem-solving format. Logistically, students signed up for a lecture and lab time as usual. Then, in the first week of the term they took Felder's learning styles inventory and Tuckman's school strategies inventory and were given advice by software as to what choices we thought would be best for them-but they could ignore that advice and choose whichever learning path they wanted amongst the 2³ = 8 choices. Since we had two lectures scheduled simultaneously (and three labs scheduled simultaneously), we could just assign students to the appropriate rooms.
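As a concrete aside, the eight learning paths are simply the Cartesian product of the three binary choices just described; the labels below are ours, paraphrasing the interview, and are not official course terminology:

from itertools import product

# Three binary choices offered in the Buffet model, as described above.
labs = ["hands-on data generation", "simulation-based"]
lecture = ["group activity", "individual contemplation"]
problem_solving = ["weekly online", "instructor-facilitated"]

# Enumerate the 2 x 2 x 2 = 8 possible learning paths.
for i, path in enumerate(product(labs, lecture, problem_solving), start=1):
    print(i, path)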
This structure was wrapped in the Savion and Middendorf (1994) EEGP learning model designed to enhance comprehension and retention by providing, for each of our explicit course learning objectives, an Example in the students' own life, an Example outside that sphere to show that the concept extends to many circumstances, the Generalization or concept at hand, and Practice in applying the concept. Meanwhile, course assessments were the same for all students to keep things "fair." For example, even though students did different labs and had a different lecture experience, we assigned a weekly report that was the same for all students-it simply asked "How was learning objective xyz illustrated in your lab, how was it illustrated in your lecture, and how was it illustrated in the homework?" That also helped to show students how different components of the course tied together. AR: How did you assess whether the Buffet was effective, and what did you learn from that? DP: We primarily used a variety of within-course measurements. The Buffet was piloted in spring 2002 in half of the course sections and then became the standard for all sections in fall 2002. The same final exam was used for the Spring, Fall and Winter quarters before the pilot, during the pilot, and for the year following. Scores (total and by various subsets of learning objectives) were examined in conjunction with data on student preparedness drawn from OSU's data warehouse like High School GPA, OSU GPA, ACT/SAT percentiles in Math, along with other student characteristics such as gender, age, and race. Overall, students did about 5% better under the Buffet model than under the previous model (half a letter grade). We were happy to find that grades on tests and class assignments were independent of Buffet choices and that the measured benefit of the Buffet model was independent of preparedness, gender, age, and race. Further, the 5% overall improvement was only about 3% for learning objectives taught the same way in all sections compared with a little over 7% for the learning objectives that were taught differently in different sections. The improved test scores translated into a rate of C− or worse (D, F, or Withdrawal) that went from 20% down to 12% and then continued to go down linearly until it reached around 6% in 2010. That's important because that meant several hundreds of students didn't have to repeat the course and pay the extra associated tuition or suffer an extension in time to graduation. Also, section-to-section variability in the success rate was reduced from about 5% pre-Buffet to 1-2% afterward. Beginning in the 2005-2006 school year until I left OSU in 2014, the course was also assessed using a more structured set of nationally normed inventories like the Statistical Thinking And Reasoning Test (START) from Joan Garfield's group at the University of Minnesota, Candace Schau's Student Attitudes Towards Statistics (SATS) inventory, and Morgan Earp's Statistics Anxiety Measure (SAM). But those were used more to inform other projects. AR: Does the Buffet approach continue to this day at OSU? And do you know of other institutions that have adopted it? DP: When I left OSU in 2014, the department hired Michelle Everson, so the course was turned over to very capable hands. I think it is important to completely redesign every course about once a decade, and I'm sure she's doing a great job in building on my efforts.
The Buffet redesign was supported by a grant from the Pew Program in Redesign run by the National Center for Academic Transformation (NCAT). NCAT is a very effective organization in helping programs, institutions, and systems use technology to simultaneously increase learning and reduce costs (see www.thencat.org). Anyways, the Buffet model is one of the several redesign strategies they advocate and, as a result, it has been adopted at some other institutions (e.g., in Chemistry at Missouri University of Science and Technology; in Psychology at Chattanooga State; and in Nutrition at the University of Southern Mississippi).
Education Research
AR: I want to come back to ask about your leaving OSU later, but for now let me shift gears a bit. You were a member of the group that produced the report "Using Statistics Effectively in Mathematics Education Research." Can you tell us about your involvement in that project? DP: That was another Dick Scheaffer project (the final report is at www.amstat.org/asa/files/pdfs/EDU-UsingStatisticsEffectivelyinMathEdResearch.pdf). He brought together an interesting group of math educators, assessment and measurement people, educational statistics folks, and statistics educators to discuss the use of statistics in math education research. I contributed to several sections and to the follow-up meetings we had with NSF and others, but I think Ingram Olkin contributed the most interesting idea. He had worked on a very influential report back in the 1960s that succeeded in getting statistical issues more systematically addressed in medical studies. Because of him, that report (and ours) was written as guidelines on how to report studies rather than trying to dictate so-called best practices. Our report was also careful to lay out the different stages of an educational research program and how statistical issues should be aligned with the purpose(s) of each stage. AR: Am I right that you have extensive experience working with clinical trials? Do you think the statistical/scientific principles of clinical trials are applicable to mathematics and statistics education research? Do you see any particular challenges or difficulties with applying clinical trial ideas to education research? DP: Yes-I have done a lot of collaborative work in the medical area, especially in cancer studies, ranging from basic research to translational to clinical investigations. Good statistical practice applies everywhere, but obviously different applications have different traditions and different challenges arising from the subset of study designs and infrastructure available for use. With respect to medical versus education research traditions, I think both can learn from each other. Medical research always has its eye on the main target-improving human health-even when the research at hand is at the very basic level trying to understand how stuff works. I think sometimes education research "fails to see the forest for the trees" and ignores how things fit into an overall program that builds generalizable knowledge. On the other hand, good basic research in education will carefully define the concepts, the constructs used to approach them, and the measurement properties of the variables examined. Medical research sometimes "fails to see the trees for the forest" and often creates proxy variables from lab assays that they mistakenly examine for years before realizing they aren't really studying what they think they're studying.
Now our country spends about the same amount per person per year for education and medicine-but expenditures on medical research are about a hundred-fold higher than expenditures on education research-so that's one big challenge. Medical research has thus created an amazing infrastructure to tap into (e.g., well-stocked labs, continuing databases, ongoing clinical and basic research collaboratives, common baseline measurements, and an army of postdocs and lab technicians) that helps them move quickly to build on successes when they get them. Education needs that, and I've been advocating for focusing on research infrastructure for decades. Luckily, the advent of online learning and automated data collection have dramatically expanded the types of study designs that can be used in education research and have reduced the cost of doing that research, so I see a really bright future for figuring out what works to improve student learning. AR: It seems to me that one big difference between medical studies and education ones is that randomized experiments are much more common in medical studies. And when education studies use random assignment, that's often at the level of groups of students rather than individual students. Do you agree with this, and do you think randomized experiments will become (or perhaps already have become) more common in education studies? DP: That's a little bit of a misconception. Randomized studies in clinical medicine dominate for phase 3 studies, but remember that there are dozens of basic science experiments (lab assay or animal experiments replicated but not commonly randomized) for every phase 1 human trial (safety and feasibility studies not commonly randomized), and five of those for every phase 2 experiment (often using historical controls), and five of those for every phase 3 trial. We hear about the randomized trials because those represent a focal point of a successful research program. Anyway, education research now has many more design options because of the learning that takes place online. Nowadays, we can readily randomize at the student level or at the learning objective level and eliminate instructor effects in education studies that examine online "interventions." There are also better data analytic resources for handling large observational datasets that are having a big effect on medicine and hopefully will play a bigger role in education as we build the databases necessary for such efforts to be fruitful. AR: Boy, your first sentence there is a very nice way to say that I was wrong with the premise of my question! Turning from education research in general to statistics education in particular, what do you see as the most pressing priorities for statistics education research in the next decade? DP: As far as topic priorities for research, I think the report that I wrote with Joan, Bob, Randall, Jennifer, Herle, and Hollylynne about five years ago still holds up pretty well (Pearl et al. 2012), as long as you add in more ties to online teaching and using statistical approaches like simulation-based inference and Bayesian modeling. Anyway, there are many worthy topics, but I would still advocate that the top priority for statistics education research should be in building infrastructure.
We need to have a couple of hundred institutions keeping a common set of baseline data on their students, with as many as possible of those participating in ongoing administration of various nationally/globally normed instruments of student learning, attitudes, anxiety, motivation, etc. We should also be conducting probability samples of statistics instructors every year to keep track of the true state of things, and we should be giving an army of undergraduate and graduate students experience in doing research on the collected data. By investing in a culture of assessment, we can do quality studies of all types in half the time without constantly re-inventing every wheel. AR: This sounds terrific, and I'd be delighted to see things transpire as you describe. I see that this report came from a research retreat held at ASA headquarters. Do you know if ASA or any other group has any initiatives currently underway along these lines? DP: Yes-Joan and I received an ASA Member Initiative to have a couple of meetings that got things started on that report, and while she was ASA President, Jessica Utts asked a group of us to revise the report in order to modernize it and extend it beyond undergraduate education. The group has only met virtually so far, and I hope we do complete Jessica's task this upcoming year. Regarding the research infrastructure ideas-I received a couple of small NSF grants back in 2007 and 2008 with Kathy Harper, a colleague in Physics Education, to pursue demonstration projects in defining the database areas across all STEM disciplines (the WISER and INQUERI projects). People in different disciplines were excited about the ideas of the project, and we came to a general agreement about the administrative structure-but we had trouble finding the sustainable funding to pull it off. I also received a couple of grants in 2008 and 2010 with Joan Garfield to demonstrate the feasibility of the idea of doing a random survey of statistics instructors (the e-ATLAS projects). For the random survey, Joan's group developed an instrument to study how statistics is taught at the college level, while my group developed the probability-based sampling plan and drew a random sample of about a hundred instructors. We also drew a non-probability sample of about 300 participants so we could gauge the differences between a typical volunteer sample and a true probability-based sample of instructors. Much of this is becoming cheaper to do over time and so I do want to build on these efforts. I agree with you that working with the ASA is the appropriate route to make it sustainable.
Consortium for the Advancement of Undergraduate Statistics Education (CAUSE)
AR: That's great that progress is being made on this front. Now I'd like to ask directly about a topic that we've skirted thus far: your serving as Director of the Consortium for the Advancement of Undergraduate Statistics Education (CAUSE). How did you come to take on this role, and what appealed to you about it? (I suspect that readers are expecting to see "this was another Dick Scheaffer project" in your response.) DP: Actually, it was Joan Garfield and Deb Rumsey who received funding from ASA (but that was when Dick Scheaffer was ASA President, so the theme continues!) to hold a series of meetings about starting some kind of national undergraduate statistics education effort. I'm including a picture from the July 2001 meeting.
We decided on a basic structure with a Director overseeing the organization and three sub-components (Research, Professional Development, and Resources) each supervised by separate Associate Directors. Deb was going to be the Director, Joan would be the A.D. for Research, and Beth Chance and you would share the role as A.D. for Professional Development. Also, we hadn't settled on an A.D. for Resources, though Jackie Dietz (who had just finished her successful stint as Founding Editor of JSE) was considering taking on the job. As things turned out, Deb had a new baby and Jackie declined, so new team members were needed for those responsibilities. Joan called and asked if I would take on the role of Director since I had been active at the meetings. I agreed to try and work out a plan to do that and make it feasible to move the organization forward. I asked Rog Woodard, who was then at OSU and working closely with me on the Buffet project, to be the A.D. for Resources, and asked Doug Wolfe as Chair of OSU Statistics if the Department could provide a course relief for Rog and me to take on those jobs. Everyone agreed and we began to work. AR: What were some of the early challenges, and also some of the early successes, for CAUSE? DP: The challenge to me was to put together the financial and staff resources to get things done. I was confident I could do that-but my biggest fear was that I wouldn't be able to get people interested in working on projects. Happily, there was lots of good will toward CAUSE in the statistics education community and I had an ace in the hole-Joan Garfield. Joan knew everyone and was respected by everyone. She gave me the names of folks to ask for help and, for the research-oriented things, she asked them herself-to my amazement almost everyone said yes. Now, at that point Lee Zia at NSF had created a wonderful vision for a National Science Digital Library (NSDL) that would be a repository of resources for teaching and learning in the STEM disciplines. So, I worked with Rog Woodard in putting together a proposal for CAUSE to operate the statistics education NSDL, and CAUSEweb was born with Rog as the editor and Ginger Rowell as the Associate Editor and with Justin Slauson hired as our web programmer. The NSF reviewers said that the editorial board that we assembled from Joan's recommendations was a "Who's Who of Statistics Education." I agreed. We also had a couple of other big successes in our first five years that helped propel things to the next level. Working with Tom Short, we received a major workshop grant (the CAUSEway program) enabling us to provide a series of about 30 multi-day workshops that were free to the participants. Then, working with Deb Rumsey, we launched the NSF-funded CAUSEmos program that supported three USCOTS conferences, provided funds for eight faculty-learning communities, and allowed us to hire Jean Scott as our program coordinator to handle the logistics of it all. Jean really became the face of CAUSE over the next eight years. AR: I suspect that this will be the most softball question of the interview: For someone who has never been to the CAUSE website or participated in a CAUSE-sponsored activity, how would you recommend that they get involved, and why should they want to get involved?
But let me make this question a bit harder by asking you to address three different groups: those for whom statistics teaching is their primary professional activity, undergraduate instructors in other disciplines such as mathematics or social sciences who occasionally teach statistics, and education researchers interested in statistics education. DP: If you are a nonstatistician teaching undergraduate classes that involve statistical concepts, then go to www.CAUSEweb.org to find a unique collection of songs, cartoons, well-researched quotes, and other fun items for teaching elementary statistical concepts; or find ways to connect your teaching to real-world applications through Chance News hosted on CAUSEweb. If you are an education researcher interested in getting involved in statistics education research, go to CAUSEweb to find a searchable annotated index to the literature; national reports providing guidelines for programs, research topics, and methods; and links to appropriate assessment items and inventories in statistics education research. If your career is in teaching statistics, then come to CAUSEweb to hear recordings of nearly 200 webinars and 100 hours of archived virtual conference sessions on the teaching and learning of statistics; come to share your own teaching tips and browse links to collections of on-line and hands-on activities and technological resources for teaching; find out how your students can take part in national undergraduate research competitions (USPROC) or in monthly cartoon caption contests; come to add your name to the map of people in statistics education and find others in your area; come to find out about how to implement new ideas in pedagogy like teaching statistical inference using simulation. If you are in any of these groups, then come to CAUSEweb to sign up for our eNEWS that will keep you informed about professional development opportunities like webinars, workshops, and conferences on teaching statistics and to hear about the activities of the latest statistics education projects funded by NSF. And this is just a sampling. CAUSEweb is meant to be both a portal for anyone interested in the teaching and learning of statistics at the undergraduate level and a link to a great community of like-minded people. AR: You mentioned "fun" items such as cartoons in this response. You've done a good bit of work (although that seems like an odd word choice here) on incorporating "fun" into teaching statistics. Can you summarize what you've done and learned on this topic? DP: Yes-I've "worked" with Larry Lesser from University of Texas at El Paso (UTEP) since the start of CAUSEweb in 2003 on the development of our collection of fun resources for teachers. Then, as part of a CAUSE faculty learning community on teaching with fun, we met John Weber from Perimeter College of Georgia State (GPC) and started to do more systematic research in the area. We received an NSF grant in 2012 (project UPLIFT) to do randomized experiments of the effect of fun teaching on learning and student anxiety and also to remove some barriers that instructors had in using the resources in the CAUSEweb collection (e.g., providing high-quality recordings of songs). Results from our randomized trial were recently published in JSE (Lesser, Pearl, and Weber, 2016).
It showed that students who were asked to listen and sing along with songs illustrating specific learning objectives got the correct answer on test items about those objectives an average of 7.7% more often than students who learned without the songs. Also, just last year, we received another NSF grant (project SMILES) that we are quite excited about. We are creating a large group of interactive songs that we are testing in new experiments. The interactive songs work kind of like MadLibs, where students are asked questions and then the words they use in their answers become part of the song via a synthetic voice. We hope that the increased student engagement required will boost the learning gains provided by songs.
Moving to Penn State
AR: Let's get back to your biographical story. A few years ago you left Ohio State and moved to Penn State. What was your motivation behind that move? DP: I retired from OSU in 2013 just before a series of very negative changes in the Ohio retirement system were to take place. As might be expected, I wasn't very good at being retired and, because I was now an Emeritus Professor, the OSU Statistics Department could no longer provide any support for my efforts with CAUSE. On the other hand, Penn State was looking to build a stronger presence in statistics education and had the resources to do that. They were hiring Kari Lock Morgan as an Assistant Professor, and as part of hiring me, they were able to provide partial ongoing support for USCOTS and for Lorey Burghard, Kathy Smith, and Bob Carey, our new CAUSE staff. Leaving friends and collaborators at Ohio State was a tough decision-but this new opportunity was too good to pass up. AR: Are you teaching classes at Penn State, as well as directing CAUSE and continuing your research program? If so, what kinds of courses are you teaching? DP: I do teach a regular load. I teach an introductory statistical concepts course completely online as well as face-to-face. Those courses give me a laboratory for my fun teaching research. I'm also really enjoying teaching fully online and hope to share some of my work in that setting in a USCOTS poster next year. Next, I teach upper-division applied probability, which has been another interest of mine over the years-I've collaborated with Ivo Dinov at University of Michigan and Kyle Siegrist at University of Alabama on developing resources for that course as part of the NSF-funded Distributome project. Finally, every other year I facilitate a graduate course on Statistics Education (first offering was last spring). That course is part of the graduate program at University of Minnesota and I worked with Bob delMas and Liz Fry on providing a Penn State version where we discuss the articles of the week at PSU for an hour and then hook up virtually with their class when they have invited visitors. Bob and Liz did a great job in setting up an exciting class and our ability to piggy-back off of their hard work allowed us to create a quality experience for PSU students and faculty. AR: I'm tempted to ask about what's different in your teaching now as compared to the beginning of your career. I reserve the right to ask that later, but I'm more interested in hearing what you think has not changed in your teaching over the years. DP: Wow! That really is the tougher question since I started teaching in the pre-computing, pre-internet era. I guess one constant is that I've always taught through applied examples.
Even when I've taught Master's and PhD statistics courses, I have never introduced a statistical idea to a class without putting it in the context of an application. If there's no application-I don't see the point and I wouldn't expect my students to either. I particularly like examples that come straight from the news-especially those with interesting confounders. One regression example that I've used for decades involves an experiment that one of my introductory statistics students did in 1986 to satisfy a class project requirement I had. She put the numbers one to nine on tickets in a hat and had each of 16 volunteers from her dorm select a ticket at random (with replacement) and then drink that many beers (e.g., pick the number 5 and you drink 5 beers). Thirty minutes after consuming the alcohol, a police officer administered a breathalyzer test to determine the blood alcohol content of the volunteer. Finally, she also included the weight and sex of each student in the dataset and had the officer take everyone's BAC at baseline to confirm they were at zero at that point. This was back when the drinking age was 18 in Ohio and it was easier to get approval on the human subjects issues-so please don't try this at home! Anyways, the resulting data is wonderful to use for regression (How does the number of beers X affect BAC Y?), for multiple regression (How does the weight of the subject affect the prediction of Y based on X?), and for inference (Is the effect of drinking on BAC greater for women than for men, even after accounting for weight?). It also makes for good discussions of assumptions, since the plot of BAC vs. beers is very linear within the x = 1 to 9 range but would clearly not be linear as you move in either direction, and also the fact that students drew the numbers at random means we can assume that X is independent of Gender/Weight. We put the data set into EESEE so it's publicly available.
Pop Quiz
AR: Now let's begin the "pop quiz" portion of the interview, where I'll ask a series of questions and request that you keep your responses to just a few sentences or less. First, please tell us about your family. DP: I'm lucky to have both of my parents still alive at ages 94 and 95 and still living in their own home in the Seattle area. Barbara and I have been married for 44 years and have three great kids, two great sons-in-law, and five wonderful grandchildren (so far). Our youngest daughter's family in Philadelphia is just a three-and-a-half-hour drive or train ride away, but we have to go out to California to see everyone else. So, if anyone out there wants to invite me to give a talk in the Sacramento area, I'm yours! AR: Please name some (nonstatistics) books that you've read recently. DP: I think I'm going to fail the pop quiz since I rarely read much in the way of fiction, except when I read to my grandchildren, which is about every time I see them. A few days ago I read Shel Silverstein's The Giving Tree to my three-year-old grandson. He loves trees and gets mad when the boy in the story grows up and chops down the tree to build a house and then a boat. Every kid I know, including me, loves Goodnight Moon by Margaret Wise Brown so I've probably read that a hundred times over the years. AR: Relax, this pop quiz is ungraded. Think of it as a formative rather than a summative assessment. Next, what are some of your favorite travel destinations? Perhaps you could mention one place you've been for professional reasons and one strictly for pleasure. DP: I'll give you two each.
On the professional side it would be a tossup between Cape Town, South Africa, where ICOTS was held in 2002, and Kristineberg, Sweden, where I gave a short course in 2008 on statistical phylogenetics hosted by the University of Gothenburg. For pleasure-I love the Yellowstone area for its diversity of scenery and wildlife. I'd also like to get back to Greece where we had a great family reunion some years back: mile-for-mile one of the prettiest countries on the planet. AR: What are some of your hobbies outside of statistics and education? DP: I collect progressive political buttons from the 20th century like those from the civil rights, anti-war, feminist, LGBT rights, and environmental movements. I have about 10,000 buttons altogether. My wife and I collect kaleidoscopes-but in that case having 40 is considered a collection. AR: Wow, 10,000 buttons is a lot! Perhaps you could wear a few hundred of them at the next USCOTS. Next, please tell us something about yourself that is likely to surprise JSE readers. DP: I grew up riding a unicycle instead of a bike and I was pretty good at it as a teenager-for example, being able to ride backward while pedaling with one foot and going off of a curb. Now, I just go forward and backward and try to avoid breaking any bones. I do still do my unicycle challenge in every undergraduate class I teach. If at least 5% of the class gets a perfect score on a midterm, I will give a lecture while riding my unicycle. I've only had to do that once, right after my 60th birthday in 2011 at OSU-but the offer is still good here at Penn State. AR: Please excuse me while I search YouTube for a video of this… Okay, I'm back now, and disappointed that my search was not fruitful. Now I'll ask a fanciful question: You can have dinner anywhere in the world with three companions, but the dinner conversation must focus on statistics education. Who would you invite and where would you dine? DP: Not on YouTube, but you can find a picture in the OSU Stat Department newsletter at http://www.stat.osu.edu/sites/default/files/news/2011StatisticsNewsletter.pdf. For the dinner companions, I guess I would start with Hans Rosling, whose work is always filled with optimism and shows the power of statistics interwoven with interesting stories better than anyone I've seen. Next, I would invite Joan Garfield since she loves fine dining, always helps me focus on the big picture, and we can talk about our grandkids after dinner. Finally, I would add the head of the MacArthur Foundation, so I could pitch my idea for their $100 million grant program for solving an important world problem (Why not give the whole world the tools to evaluate evidence?). I would dine at Chibchas restaurant in Catheys Valley, California. It's a Colombian food place that opened in 1968 and where I ate only once-as part of that research experience on modeling fire back in 1971. I hope your question is fanciful enough that we can go to a restaurant that closed five years ago! AR: Hmm, I guess I'd better make that dinner reservation soon. First let's get even more fanciful. If you could travel to any point in time, past or future, what would you choose, and why? DP: Yikes! Time travel makes this an impossible question. Should I go back in time to have dinner with Abe Lincoln and Mahatma Gandhi and Yitzhak Rabin to tell them to stay out of Ford's Theater, the Birla House, and Kings of Israel Square?
It's probably not a good idea to mess with the timeline, so I'd like to take my wife out to dinner in 56 years for our 100th anniversary to that restaurant in Paris that scans your taste buds as you enter and then provides the perfect meal for your individual palate. I'll go with the Nobel Peace Prize winners for the previous two years as guests. I guess I'm just more curious about the future than the past. AR: Getting back to reality, what has been your favorite course to teach in your career? DP: Instead of a typical qualifying exam, the PhD students in the College of Medicine at OSU were charged with writing a small grant proposal, and that proposal was to include a statistical methods section. I developed a course in biostatistical collaboration where our students worked with the Medical College students in writing those methods sections and helped to make sure that statistical issues were well handled in every aspect of the proposals (about one stat student per five students in their program). I taught our students about various biomedical assays and their statistical properties, how to collaborate with medical scientists in writing successful NIH grants, and worked with them in providing solutions to some of the more difficult statistical issues that arose in the 30 grants we helped write per year. It was a lot of fun. AR: The last question in the pop quiz consists of four questions with which I collect data from students. The binary question is: Do you consider yourself an early bird or a night owl? The nonbinary categorical question: On what day of the week were you born? (You might consult www.timeanddate.com.) A discrete variable comes from asking: How many of the seven Harry Potter books have you read? And finally a question about a continuous variable: How many miles from where you were born do you live now? (You might consult www.distancefromto.net.) DP: I am an early bird who was born on a Saturday but never read any Harry Potter books and now lives 2262 miles from where I was born.
Concluding Thoughts
AR: Some of our colleagues are very optimistic about statistics at this point in time, because data abound more than ever and so the ability to draw insights from data is more important than ever. Others are somewhat pessimistic about the prospects for statistics, in light of the emergence of the field of data science in which many non-statisticians are generating data analyses. What's your view of the health of our discipline in the year 2017? DP: I'm squarely in the optimists' camp. Today many statisticians are developing new software and new computational algorithms. Does that imply the end of computer science? No-it enhances computer science, shows them new areas of application, and gives them new folks to collaborate with. Today many computer scientists are generating data analyses. Is that the end of Statistics? No-I see that as enhancing our field, giving us new areas of application, and new folks to collaborate with. AR: What do you see as the most pressing need in statistics education for the next 5-10 years? DP: I'll stick with the basics: Infrastructure and Community. Infrastructure for both instructors and researchers as I mentioned earlier. We also need to be better at connecting people to the community of statistics instructors on a more global scale and do a better job at professional development for teachers with little/no statistical training.
The advent of successful virtual meetings like eCOTS shows the possibilities, with its simultaneous face-to-face regional components to match its global electronic component. I'd like to see many more virtual meetings on a variety of topics and at a variety of organizational levels (i.e., not just prepared conference-style sessions-but also brainstorming and help-oriented sessions). I also love the virtual poster sessions we have at eCOTS-I can see doing those on an almost continuous basis. AR: Of all of your activities and accomplishments in statistics education, which one are you most proud of? DP: I guess that would have to go to building up CAUSE, since that has had the most effect. AR: Thanks very much for providing me with this interview, Dennis. I think JSE readers will learn a lot from reading about your career and your ideas for statistics education. What advice do you have for JSE readers who are fairly new to statistics education? DP: Go to www.CAUSEweb.org. Participate in a webinar. Go to USCOTS in May 2017 and attend a workshop as well as the whole conference. Arrange a DataFest at your school or take your students to one nearby. Have your students participate in the caption contest or in USPROC. Share your favorite resource with others. Give us feedback on what you think are the most important projects we should be doing. Sign up for the CAUSE eNEWS list. Attend some of the education sessions at JSM. Join the ASA Stat Ed Section and/or the Teaching Statistics in the Health Sciences Section. And keep reading JSE! And thanks to you, Allan-I hope you get a chance to go to some of those fanciful dinners you arrange with each JSE issue!
12,467
2017-01-02T00:00:00.000
[ "Mathematics" ]
Exploring brusatol as a new anti-pancreatic cancer adjuvant: biological evaluation and mechanistic studies Pancreatic cancer is highly resistant to chemotherapeutic agents and is known to have a poor prognosis. The development of new therapeutic entities is badly needed for this deadly malignancy. In this study, we demonstrated for the first time that brusatol, a natural quassinoid isolated from a Chinese herbal medicine named Bruceae Fructus, possessed a potent cytotoxic effect against different pancreatic adenocarcinoma cell lines. Its anti-pancreatic cancer effect was comparable to that of first-line chemotherapeutic agents such as gemcitabine and 5-fluorouracil, with a more favorable safety profile. In addition, brusatol showed a synergistic anti-proliferative effect toward PANC-1 and Capan-2 cell lines when combined with gemcitabine or 5-fluorouracil. The results of flow cytometry suggested that combination treatment of brusatol with gemcitabine or 5-fluorouracil caused cell cycle arrest at the G2/M phase and accentuated apoptosis in PANC-1 cells. Moreover, brusatol deactivated gemcitabine/5-fluorouracil-induced NF-κB activation. Western blot analysis and qRT-PCR results showed that brusatol significantly down-regulated the expression of vimentin and Twist, and markedly stimulated the expression of E-cadherin, the key regulatory factors of the epithelial-mesenchymal transition process. Furthermore, treatment with the combination of brusatol and gemcitabine or 5-fluorouracil significantly reduced in vivo tumor growth when compared with treatment with either brusatol or gemcitabine/5-fluorouracil alone. Taken together, these results amply demonstrated that brusatol is a potent anti-pancreatic cancer natural compound, and that the synergistic anti-pancreatic cancer effects of brusatol and gemcitabine/5-fluorouracil observed both in vitro and in vivo are associated with suppression of the epithelial-mesenchymal transition process, indicating that brusatol is a promising adjunct to the current chemotherapeutic regimen. INTRODUCTION Pancreatic cancer (PanCa), one of the most deadly human malignancies, has a 5-year survival rate of less than 1%. PanCa is the 4th most common cause of cancer-related death in the USA [1], where in 2014 about 46,420 people were diagnosed with PanCa and approximately 39,590 people died of this disease [2]. In China, the past two decades have witnessed a six-fold increase in the incidence rate of PanCa, possibly owing to the increasing prevalence of Western diet-induced obesity. Today, PanCa has become the 6th leading cause of cancer-related mortality in China [3]. Chemotherapy regimens based on gemcitabine (GEM) and 5-fluorouracil (5-FU) remain the most common strategies for the treatment of advanced and unresectable PanCa. However, the median survival time is still less than 6 months for patients on these treatments [4], due mainly to drug resistance [5]. To overcome chemo-resistance, combination chemotherapy strategies are usually adopted to achieve better cancer cell killing with less systemic toxicity [6]. To enhance the efficacy of current chemotherapeutic agents such as GEM and 5-FU for PanCa, new adjuvants which can circumvent GEM/5-FU-associated chemo-resistance are badly needed. In recent years, herbal medicines or natural compounds, either used alone or combined with conventional chemotherapeutic agents, have been shown to have beneficial effects on diverse cancers [7].
In our previous work, we found that the alcoholic extract of Bruceae Fructus (Ya-Dan-Zi in Chinese), a Chinese medicinal herb commonly used for the treatment of cancer, possessed significant cytotoxicity against several PanCa cell lines [8]. Brucein D, a quassinoid found in abundance in this herb, has been shown to inhibit the activity of NF-κB and suppress the growth of several PanCa cell lines via inducing cellular apoptosis without causing overt organ toxicity [9,10]. Brusatol (BR), a natural quassinoid diterpenoid isolated from Bruceae Fructus, exhibited the most potent in vitro anti-pancreatic tumor action among all the isolated quassinoids [11]. Furthermore, it was reported that brusatol acted as a unique inhibitor of the Nrf2 pathway that sensitized various cancer cells and A549 xenografts to chemotherapeutic drugs, suggesting that brusatol might be a promising candidate for combating chemo-resistance and has the potential to be developed into an adjuvant chemotherapeutic agent [12]. However, the chemosensitizing effect of brusatol on PanCa has not been explored. It has been known that the epithelial-mesenchymal transition (EMT), involving key regulators such as Twist and E-cadherin, is an important mechanism underlying chemotherapy resistance [13,14]. Targeted inhibition of the EMT process without eliciting systemic toxicity using combination chemotherapeutic agents may lead to better tumor cell killing in PanCa. Based on the anti-PanCa and chemosensitizing effects of brusatol, we hypothesized that brusatol could sensitize PanCa to the current first-line chemotherapeutic agents GEM and 5-FU via inhibition of the EMT process. This work was therefore initiated to explore the potential of brusatol as a novel anti-PanCa adjuvant for GEM and 5-FU using different PanCa cell lines in vitro, and in vivo via an orthotopic xenotransplantation PanCa mouse model. Moreover, the underlying molecular mechanisms were delineated. Our results indicated that brusatol deactivated NF-κB activation and arrested PanCa cell growth at least in part via inhibition of Twist and stimulation of E-cadherin expression. The synergistic inhibition by brusatol and GEM or 5-FU observed both in vitro and in vivo suggested that brusatol may be a promising adjunct to current chemotherapeutic regimens. Brusatol inhibits proliferation and potentiates the inhibitory effects of chemotherapeutic agents in PanCa cells The cytotoxic effects of brusatol, GEM and 5-FU on the cell viability of the human PanCa cell lines PANC-1, Capan-1, Capan-2 and SW1990, and the non-tumorigenic human gastric cell line GES-1, are shown in Figure 1A-1C. The results showed that brusatol markedly suppressed the cell proliferation of all four tested PanCa cell lines in a dose- and time-dependent manner, with IC50 values in the range of 0.33-8.47 μg/mL. The potency of brusatol was significantly higher than those of GEM (IC50: 2.83-78.78 μg/mL) and 5-FU (IC50: 1.48-85.11 μg/mL). However, brusatol exerted only mild cytotoxicity on GES-1 cells, with an IC50 value > 68.90 μg/mL. In contrast, GEM and 5-FU were more toxic to GES-1 cells, with IC50 values of 5.92 μg/mL and 12.34 μg/mL, respectively. These results unambiguously indicated that brusatol possessed a potent in vitro anti-PanCa effect. For single-agent treatment, GEM or 5-FU (0.1, 1, 5, 10, 100, 500 μg/mL) alone produced a dose-dependent inhibition of the growth of PANC-1 and Capan-2 cells when cultured for 72 h.
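Before turning to the combination data, the following is a minimal sketch of how IC50 values of the kind quoted above can be estimated from viability readings by fitting a four-parameter logistic (Hill) curve. The concentrations and viability percentages are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, top, bottom, ic50, hill):
    """Cell viability (% of control) as a function of drug concentration (ug/mL)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical 72 h viability readings (% of untreated control); placeholders only.
conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])          # ug/mL
viability = np.array([98.0, 90.0, 72.0, 55.0, 30.0, 18.0, 8.0])  # %

p0 = [100.0, 5.0, 1.0, 1.0]                                      # top, bottom, ic50, hill
bounds = ([50.0, 0.0, 1e-3, 0.2], [120.0, 40.0, 100.0, 5.0])
params, _ = curve_fit(four_param_logistic, conc, viability, p0=p0, bounds=bounds)
top, bottom, ic50, hill = params
print(f"Estimated IC50 = {ic50:.2f} ug/mL (Hill slope {hill:.2f})")
```

The study's own IC50 values come from MTT dose-response data; whether they were obtained with this particular curve model is not stated in the text above.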
When GEM or 5-FU was combined with brusatol at a constant concentration ratio of 10:1, by adding 0.01, 0.1, 0.5, 1, 10 and 50 μg/mL brusatol, respectively, the cell growth inhibition was greatly enhanced ( Figure 1D). The combinationindex (CI) is a mathematical method commonly used to measure the pharmacological interaction of two drugs [15][16][17][18]. Isobologram analysis showed that the CI for every combination treatment was < 1 (Figure 1E and Supplementary Table 1), indicating significant synergistic effects of these combination treatments. It is clear from the results that combination of GEM or 5-FU with lower doses of brusatol elicited significantly greater inhibition on the cancer cell growth than either agent alone. Since brusatol showed a synergistic antiproliferative effect when combined with GEM or 5-FU in PANC-1 and Capan-2 cell lines, and PANC-1 and Capan-2 cells were more sensitive to the combination treatment and relatively easier to culture, they were therefore used for the subsequent mechanistic studies. Brusatol induces apoptosis and causes G2/M cell cycle arrest in PanCa cells Quantification of DNA fragmentation in PANC-1 and Capan-2 cells was performed using Cell Death Detection ELISA PLUS Kit. Cells were treated with DMEM medium (control), GEM (10 μg/mL), 5-FU (10 μg/mL), brusatol (2 μg/mL) or combination treatment for 48 h. The results revealed that brusatol caused significant DNA fragmentation after exposure for 48 h. Noticeably, brusatol combined with GEM or 5-FU induced more cellular apoptosis than the GEM or 5-FU alone group ( Figure 2A). Quantitative flow cytometric analysis showed that brusatol exerted a dose-and time-dependent apoptogenic effect on PANC-1 cells ( Figure 2B). At 1 and 2 μg/mL, brusatol caused 10.9% and 15.5% of the cancer cells to undergo apoptosis within 24 h, respectively. Extending the treatment time to 48 h resulted in a marked augmentation of cellular apoptosis, with 36.8% and 48.1% of cellular apoptosis after 1 and 2 μg/mL brusatol treatments, respectively. In stark contrast, non-treated cells showed normal cell viability without significant cell death. To further elucidate the mechanism of growth inhibition and whether the cell cycle change upon brusatol monotherapy and combination treatment, PANC-1 cells were treated with 2 μg/mL brusatol for 24, 48 and 72 h, and exposed to brusatol alone and in combination with GEM or 5-FU for 48 h. The results indicated that brusatol alone induced a time-dependent G2/M arrest ( Figure 2C), and brusatol combined with GEM or 5-FU also resulted in a pronounced accumulation of cells in G2/M ( Figure 2D). Brusatol treatment decreases the expression of anti-apoptotic protein and deactivates chemotherapeutic agents-induced NF-κB activation As shown in Figure 3A and 3C, immunoblotting results clearly showed the down-regulation of NF-κB p65 in PanCa cells after exposure to brusatol, and the combined treatment with chemotherapeutic agents significantly augmented the brusatol-mediated inhibition of NF-κB p65. In addition, brusatol down-regulated the expression of two anti-apoptotic proteins Bcl-xL and PCNA in both PANC-1 and Capan-2 cells. Combination treatment of brusatol with GEM or 5-FU also markedly attenuated the expression of Bcl-xL and PCNA ( Figure 3B, 3D and 3E). The immunoblotting results are strongly indicative that brusatol could suppress the expression of anti-apoptotic proteins in favor of promoting cellular apoptosis. 
These findings were congruent with the cell growth inhibition observed in the MTT assay, amply vindicating that the greater cell growth inhibition observed with the combination treatment was closely associated with the induction of cellular apoptosis. (Note to Figure 1E and Supplementary Table 1: the CI is a quantitative measurement of the degree of drug interaction; CI < 1 indicates synergism, CI = 1 an additive effect, and CI > 1 antagonism. CI values were generated over a range of 40%-95% growth-inhibitory effects.) Effects of brusatol on the EMT process in PanCa cells EMT is an important mechanism associated with chemoresistance. In the present work, the expression of E-cadherin, vimentin and Twist, three characteristic factors of the EMT process, was detected by Western blotting after brusatol alone or combination treatment for 48 h. The results showed that brusatol markedly increased E-cadherin expression, while significantly decreasing vimentin expression. As shown in Figure 4A and 4B, brusatol combined with chemotherapeutic agents induced stronger E-cadherin protein expression in PANC-1 cells, 3.3-fold and 2.7-fold over that of GEM and 5-FU alone, respectively. In contrast, the expression of vimentin (Figure 4A and 4C) and Twist (Figure 4A and 4D) decreased significantly after combination treatment when compared with the control. The effects of brusatol on E-cadherin and Twist expression in PANC-1 and Capan-2 cells were further analyzed by real-time PCR. The results revealed significantly increased E-cadherin mRNA expression and decreased Twist expression in both the brusatol monotherapy and combination treatments, as compared with the untreated control (Figure 4E and 4F). The results implied that brusatol alone or in combination with chemotherapeutic agents could increase the expression of E-cadherin while suppressing the expression of Twist and vimentin, thus inhibiting the EMT process and ultimately giving rise to the chemosensitizing effect of brusatol. Brusatol and its combination with chemotherapeutic agents significantly inhibit tumor growth in a human pancreatic orthotopic xenograft tumor mouse model Based on the promising in vitro results concerning the anti-PanCa effect of brusatol and its combination with GEM or 5-FU, we undertook in vivo studies to investigate whether brusatol alone, or in combination with chemotherapeutic agents, could inhibit the growth of pancreatic tumors in an orthotopic animal model. A lentivirus expressing EGFP and luciferase was used to infect PANC-1 and Capan-2 cells. We observed the expression of EGFP under a fluorescence microscope at different time points after infecting the cells with the lentivirus and then calculated the infection efficiency. When the multiplicity of infection (MOI) was 100, the infection efficiency of the lentivirus was above 80% after incubation for 72 h in PANC-1 and Capan-2 cells. Following 2 weeks of selection with puromycin, the infection efficiency was maintained stably in the range of 80-90% (Figure 5A). The results of the luciferase activity assay showed that the lentivirus could stably infect PANC-1 and Capan-2 cells and that the infection efficiency was satisfactory. In addition, there was no significant difference in morphology, growth rate or tumor formation rate between parental and CMV-EGFP-linker-Luc-transfected cells (data not shown). Hence, the cell lines expressing both EGFP and luciferase could serve as a promising tool for real-time monitoring of tumor growth in vivo.
For the construction of the orthotopic PanCa mouse model, PANC-1 and Capan-2 cells (stably transfected with EGFP and luciferase) were injected into the pancreas as described in the Methods. The experimental protocol is depicted in Figure 5B. As shown in Figure 5C and 5G, the luciferase signal made possible real-time and sequential whole-body imaging of the tumors. Moreover, noninvasive quantitative measurements of the externally visible fluorescent area enabled the construction of in vivo tumor growth curves (Figure 5D and 5H). Small primary tumor lesions were observed in all mice on day 7 after transplantation by real-time whole-body imaging. The differences in imaging were not significant on day 14. However, imaging conducted on days 21, 28 and 35 confirmed significant growth of the primary tumor in the control group, whereas the 1 and 2 mg/kg brusatol groups and their combinations with GEM or 5-FU showed a marked, dose-dependent reduction in tumor growth when compared with the control. The xenograft tumors formed in mice of the 2 mg/kg group were markedly smaller than those of the 1 mg/kg brusatol-treated mice. On days 28 and 35, in both the PANC-1 and Capan-2 orthotopic xenograft tumor models, brusatol (1 or 2 mg/kg) and its combinations with GEM or 5-FU were shown to significantly reduce tumor size as compared with the corresponding control (Figure 5C, 5D, 5G and 5H). Figure 5E and 5I show the effects of brusatol and the chemotherapeutic agents on tumor weight and volume as measured at the end of the experiment by autopsy. The results indicated that brusatol, GEM, 5-FU and their combinations all significantly reduced tumor volume and weight as compared with the corresponding control in both orthotopic xenograft tumor models (Figure 5E, 5F, 5I and 5J). Brusatol at 1 mg/kg was also found to exhibit significant anti-tumor activity, albeit less potent than that of 2 mg/kg brusatol. The combined treatment of brusatol at 2 mg/kg with chemotherapeutic agents significantly reduced tumor volume and weight, and the tumor reduction was more pronounced than with either agent alone. In vivo toxicity test of brusatol To evaluate the potential toxicity of brusatol, an acute toxicity assessment was performed. Nude mice treated with brusatol intraperitoneally at the high dose (2 mg/kg) for 28 consecutive days did not show any treatment-related side effects or signs of toxicity. The body weights of the brusatol-treated and combination-treated nude mice showed no significant changes when compared with those of the GEM/5-FU alone treatments. The larger gain in the body weights of the control mice might be caused by the more rapid tumor growth in the control group (Figure 6A). In addition, no significant differences were observed in the plasma levels of ALT, AST, LDH, CK and Cr (Figure 6B) between the treatment groups and the control. Furthermore, brusatol induced no treatment-related abnormality in gross anatomy or histological morphology. The results indicated that brusatol at the high dose of 2 mg/kg exerted no overt toxicity to the liver, heart or kidney tissues of the tumor-bearing mice. Expression of E-cadherin and Twist in the orthotopic xenograft tumor tissues The Twist and E-cadherin expression levels in pancreatic tumor tissues were measured. We found that both the brusatol-alone and 5-FU-alone groups showed upregulated expression of E-cadherin in PANC-1 and Capan-2 orthotopic xenograft tumor tissues (Figure 7A-7D).
The result confirmed our in vitro data that brusatol treatment upregulated the E-cadherin levels in PanCa cells. Similarly, we found that brusatol alone decreased the Twist expression, as compared with the control group. The differences in the expression of E-cadherin and Twist between different brusatol groups and the control group were statistically significant. In addition, in both PANC-1 and Capan-2 orthotopic xenograft tumor tissues, brusatol combined with GEM or 5-FU group showed stronger staining for E-cadherin, and weaker Twist staining when compared with the chemotherapeutic agent alone and the control group, and the combination treatment was observed to produce the most obvious effect (P < 0.01). DISCUSSION Brusatol, a quassinoid found in abundance in Bruceae Fructus, was believed to be one of the major active principles responsible for the anticancer effect of Bruceae Fructus. In the present study, pioneering effort was devoted to investigate the in vitro and in vivo chemosensitizing effect of brusatol toward GEM and 5-FU, and to unravel the potential underlying molecular mechanisms. We have demonstrated in the present study that brusatol significantly inhibited the proliferation of human PanCa cells in a dose-and time-dependent manner, and brusatol at low concentration was able to significantly inhibit the proliferation of PanCa cells without affecting the viability of normal gastric epithelial cell. Besides, the anti-PanCa effect of brusatol was more potent than that of GEM or 5-FU alone, which also displayed great cytotoxic effect to normal gastric epithelial cell. It was also observed that brusatol showed a synergistic antiproliferative effect toward both PANC-1 and Capan-2 cell lines when combined with GEM or 5-FU, with the CI values being in the range of 0.2-0.8, indicative of a strong synergistic action. Furthermore, it was found that brusatol monotherapy and combination treatment caused cell cycle arrest at G2/M phase, induced apoptosis, and inhibited several transcription factors and various biomarkers linked to survival, proliferation, and metastasis such as NF-κB p65, Bcl-xL, and PCNA in PanCa cells. These results suggest that brusatol could accentuate the apoptotic effect of GEM or 5-FU by inhibiting anti-apoptotic proteins in PanCa cells. The observed down-regulation of antiapoptotic proteins was believed to contribute to the synergistic inhibition of PanCa cell growth exerted by the combined treatment of brusatol and GEM or 5-FU. Based on these promising in vitro results, we therefore sought to evaluate the in vivo efficacy of brusatol in PanCa growth, using orthotopic models of PanCa derived from the highly malignant PANC-1 and Capan-2 cell lines. Our results suggested that daily brusatol administration for 28 days significantly decreased pancreatic tumor growth in both PANC-1 and Capan-2 cell orthotopic mouse models, without causing significant weight loss, mortality or other noticeable abnormalities on the heart, liver and kidney of the experimental animals. Furthermore, it was also shown that brusatol significantly synergized with GEM/5-FU and inhibited the tumor growth (tumor size, tumor weight and Lucsignal intensity) of mice more significantly than treatment with either single agent, while caused no observable side effects, further supporting the in vitro anti-PanCa activity. These findings strongly indicated that the enhanced anticancer efficacy and reduced cytotoxicity could be achieved through optimized combination treatment. 
One of the major culprits involved in the development of drug resistance and prognosis of the disease is EMT, which refers to the absence of epithelial phenotype and presence of mesenchymal characteristics. The activation of EMT process is reported to facilitate embryonic development, tissue formation as well as tumor invasion and metastasis [19]. The hallmark of EMT is the loss of the epithelial homotypic adhesion molecule E-cadherin and the gain of mesenchymal markers such as vimentin [20]. In various human cancers, E-cadherin loss is related to poor prognosis, tumor progression and metastasis. Twist is a basic helix-loop-helix (basic Helix-loop-Helix bHLH) transcription factor located in autosomes. Several reports have indicated that Twist is the key regulator of the EMT process associated with tumor cell proliferation, differentiation, metastasis, invasion and anti-apoptotic process, with clinical implication of poor prognosis and metastasis in many kinds of cancers, such as breast, esophageal, gastric and prostate cancer. Twist is also an inhibitor of apoptosis [21][22][23][24]. Over expression of Twist often promotes cell colony formation, prevents apoptosis, and induces drug resistance and metastasis of tumor, with a consequence of an increased tumor invasiveness and poor prognosis [25]. In our experiments, brusatol monotherapy or combination treatment significantly increased the expression of E-cadherin and suppressed the expression of vimentin and Twist in both PANC-1 and Capan-2 cells, and the observation was believed to be associated with the inhibition of PanCa growth in vitro and in vivo. This is the first report to show that brusatol was able to suppress PanCa growth and enhance the anti-PanCa effect of GEM and 5-FU in an orthotopic mouse model. Based on our promising in vitro and in vivo results, brusatol could greatly sensitize PanCa to GEM and 5-FU while it exhibited a more favorable safety profile. The result indicated that brusatol might reduce the dose of chemotherapeutic agents while exerting superior beneficial effect for the treatment of PanCa. The synergistic inhibition by brusatol and chemotherapeutic agents observed both in vitro and in vivo, together with the absence of overt toxicity, strongly indicated that brusatol is a promising adjuvant to current chemotherapy regimen for this deadly human malignancy. However, to corroborate this potential, further experiments should be performed on this model to monitor the mouse survival benefit treated with the combinations. Taken together, our present work laid a solid foundation for further in-depth studies to evaluate the overall survival benefit, long-term safety, pharmacokinetics in animal models including the genetically engineered pancreatic cancer mouse model. The results from this work also provided justification for conducting clinical trials in future to evaluate the safety and effectiveness of this natural product on pancreatic cancer. The development of brusatol into an anti-PanCa adjuvant would add new therapeutic dimensions to the current limited approach in the management of this most deadly malignancy in human. Cell lines and reagents Human pancreatic cancer cell lines (PANC-1, Capan-1, Capan-2 and SW1990) and non-tumorigenic human gastric epithelial cells were obtained from the ATCC (Manassas, VA, USA). The cell lines were authenticated by short-tandem repeat analysis. 
Cell lines were initially expanded and cryopreserved within 1 month of receipt and were typically used for 3 months; and at which time, a fresh vial of cryopreserved cells was used (2013)(2014)(2015). These cells were maintained in culture as an adherent monolayer in DMEM or IMDM supplemented with 10% FBS. Brusatol (CAS: 14907-98-3) was isolated from Bruceae Fructus in our laboratory, and its structural identity was confirmed by comparing its NMR and HRMS data with those published previously [12]. Its purity was determined to exceed 98% by HPLC analysis. All cell culture reagents were purchased from Invitrogen (Grand Island, NY, USA). Gemcitabine (GEM) (Gemzar, Eli Lilly, USA) and 5-Fluorouracil injection (5-FU) (China Food and Drug Administration, China; approval number H12020959) were procured through the Pharmacy of the Clinical Center, Shenzhen People's Hospital, Guangdong Province, China. Cytotoxicity assay, synergistic effects, cell death detection, apoptosis detection, cell-cycle analysis and plasma-specific enzyme level measurement The above assays were performed using kits from various manufacturers. Details are provided in the Supplementary Methods. Western immunoblotting and real-time PCR Western immunoblotting technique was used to analyze the expression of Twist, NF-κB, PCNA, Bcl-xl, vimentin and E-Cadherin. Real-time PCR technique was used to analyze the expression of Twist and E-Cadherin. Details are provided in the Supplementary Methods. Construction of PanCa cell lines stably expressing CMV-EGFP-linker-Luc To monitor the in vivo PanCa growth, PANC-1 and Capan-2 cells were stably transfected with the luciferase gene. CMV-EGFP-linker-Luc-PGK-Puro-L.V. (constructed by Obio Technology, Shanghai, China) was a lenti-virus and could express green fluorescent protein and luciferase [26]. Lentiviral was transferred into PANC-1 and Capan-2 cells (5 × 10 4 cell/well in 24-well plates) for a 24 h infection in the presence of polybrene (final concentration 5 μg/mL; Sigma-Aldrich). After 24 h, the culture medium was then replaced by fresh medium. After culture for a further 72 h, cells were incubated with puromycin (final concentration 2 μg/mL) for a period of 14 days for selection. Expression of EGFP was observed by fluorescent microscopy and luciferase expression was confirmed using the Dual Luciferase Assay (Promega), and emitted light was directly proportional to the cell number. In vivo studies Male BALB/c nude mice (6 weeks of age) were supplied by the Laboratory Animal Services Centre, CUHK. Animals were bred and maintained in a pathogenfree condition with sterile food and water ad libitum in specifically designed air-controlled rooms with a 12-h light/dark cycle. The care and use of the animals were in compliance with the institutional guidelines, and the experimental procedures were approved by the Animal Experimentation Ethics Committee of CUHK (Ref. 14/086/MIS). Human PanCa cell orthotopic xenograft nude mouse model was established using in situ injection method as described previously [27]. Capan-2 and PANC-1 cells, which were stably transfected with EGFP and luciferase as described above, were resuspended in PBS and kept on ice until injection. Male nude mice were anesthetized with ketamine/xylazine (100/10 mg/kg), a small left abdominal flank incision was made, and the pancreas was carefully exposed. Capan-2 (2 × 10 6 cells) and PANC-1 cells (5 × 10 6 cells) were injected into the tail of the pancreas with a Hamilton syringe. 
The pancreas was then returned to the peritoneal cavity, and the abdominal wall and the skin were closed with 6-0 Dexon sutures. One week after tumor transplantation, mice were anesthetized and images of the tumors captured using a Carestream In Vivo Imaging System (MS FX PRO, USA). It was found that all tumors were of similar size. Mice were then randomized into 7 groups of 6 mice each: the control group, GEM group, 5-FU group, two brusatol treatment groups of different dosages (1 or 2 mg/kg), GEM and brusatol (2 mg/kg) combination group, and 5-FU and brusatol (2 mg/kg) combination group. The tumor-bearing mice were periodically injected i.p. with D-luciferin (150 mg/kg, Life Technologies, USA) and anesthetized once a week. After D-luciferin injection (10 min), luminescence was measured using a Carestream In Vivo Imaging System (MS FX PRO, USA), which could be used to obtain X-ray and concurrent bioluminescence images. By combining bioluminescence imaging (BLI) with digital x-ray, the system's highly improved sensitivity allowed us to precisely locate, identify and monitor tumor morphological changes [27,28]. Quantitative analysis of the optical signal capture was carried out with the Carestream MI software v5.0.5.29 (Carestream Health, Inc., Woodbridge, CT, USA). Fluorescence intensity from subcutaneous and intra-peritoneal tumors was measured by creating an automatic ROI threshold to 30% of each tumor's maximum intensity and the mean intensity of the area was examined. Fluorescence intensities were normalized to the peak angle of detection, and real-time determination of tumor burden was done by quantifying fluorescent surface area [29]. Mice were imaged on 7, 14, 21, 28 and 35 days after tumor implantation. Treatment was continued for 5 weeks and animals were sacrificed 1 week later. Whole blood was obtained by cardiac puncture. Primary tumors in the pancreas were excised and the final tumor volume was measured as V = (a×b×c)/2, where a indicates the length, b the width, and c the depth of tumor [30]. Half of the tumor tissue was formalin-fixed and paraffin-embedded for immuno-histochemistry and routine H&E staining. The other half was snap frozen in liquid nitrogen and stored at −80°C. Histology and tissue analysis Formalin-fixed tissues were embedded in paraffin and cut into 6-μm sections. Sections were evaluated by H&E staining and immunohistochemical analysis using antibodies specific for E-cadherin and Twist. The stained TMA slides were assessed independently by two experienced pathologists in a blinded manner. Each slide was scored semiquantitatively on the basis of percentage and intensity of the stained normal or neoplastic epithelial cells. The percentages of stained cells were scored using previously described methods [31,32]. Details of histological analysis are provided in the Supplementary Methods. Statistical analysis Results were expressed as mean ± SEM. Data were analyzed by the t test or ANOVA and results were considered significant at P < 0.05. A significant interaction was interpreted by a subsequent median effect principle of the Chou-Talalay method.
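Because the combination index underpins the synergy claims throughout the paper, here is a minimal sketch of how CI values could be computed once median-effect parameters have been fitted, assuming the standard mutually exclusive Chou-Talalay formula. All numerical parameters below are hypothetical placeholders, not values from this study.

```python
import numpy as np

def dose_for_effect(dm, m, fa):
    """Inverted median-effect equation: dose giving fraction affected fa.
    dm: median-effect dose (IC50-like); m: median-effect slope."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, dm1, m1, dm2, m2, dm_mix, m_mix, ratio):
    """Chou-Talalay CI at effect level fa for a fixed-ratio (drug1:drug2 = ratio:1) mixture.
    dm_mix, m_mix describe the dose-effect curve of the mixture's total dose."""
    d_total = dose_for_effect(dm_mix, m_mix, fa)   # total mixture dose giving fa
    d1 = d_total * ratio / (ratio + 1.0)           # drug 1 portion of the mixture
    d2 = d_total * 1.0 / (ratio + 1.0)             # drug 2 portion of the mixture
    dx1 = dose_for_effect(dm1, m1, fa)             # drug 1 alone giving fa
    dx2 = dose_for_effect(dm2, m2, fa)             # drug 2 alone giving fa
    return d1 / dx1 + d2 / dx2

# Hypothetical parameters (ug/mL): a GEM-like agent, a brusatol-like agent, and their 10:1 mixture.
for fa in (0.5, 0.75, 0.9):
    ci = combination_index(fa, dm1=8.0, m1=1.1, dm2=0.8, m2=1.3,
                           dm_mix=3.0, m_mix=1.2, ratio=10.0)
    print(f"fa = {fa:.2f}: CI = {ci:.2f}  ({'synergism' if ci < 1 else 'additive/antagonism'})")
```

In practice the median-effect parameters (Dm, m) for each single agent and for the fixed-ratio mixture are themselves fitted from dose-effect data, for example by linear regression of log(fa/(1-fa)) on log(dose).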
6,461
2017-05-10T00:00:00.000
[ "Biology", "Chemistry" ]
Convergence of Simulated Annealing Using Kinetic Langevin Dynamics We study the simulated annealing algorithm based on the kinetic Langevin dynamics, in order to find the global minimum of a non-convex potential function. For both the continuous time formulation and a discrete time analogue, we obtain the convergence rate results under technical conditions on the potential function, together with an appropriate choice of the cooling schedule and the time discretization parameters. Introduction Simulated annealing has always been an important method to find the global minimum of a given function U : R d −→ R, especially when U is non-convex. Classical studies on the simulated annealing have been mainly focused on an algorithm based on the overdamped Langevin dynamic: where (B t ) t≥0 is a standard d-dimensional Brownian motion and (ε t ) t≥0 is a temperature parameter that turns to 0 as t −→ ∞. Notice that, with constant temperature ε t ≡ ε and under mild conditions on U , the process X in (1.1) is the standard overdamped Langevin dynamic and has the invariant measure µ * o,ε (dx) ∝ exp (−U (x)/ε) dx. With small ε > 0, samples from µ * o,ε would approximately concentrate around the global minimum of function U , which is the intuition of the simulated annealing algorithm. Since the introduction of the simulated annealing algorithm by Kirkpatrick, Gellatt and Vecchi [16], many works have been devoted to the convergence analysis of (1.1); see e.g., Geman and Hwang [11], Chiang, Hwang and Sheu [5], Royer [28], Holley, Kusuoka and Stroock [14], Miclo [21], Zitt [32], Fournier and Tardif [9], Tang and Zhou [30], etc. It has been shown that, the cooling schedule ε t should be at least of the order E log t as t −→ ∞ for some constant E > 0, in order to ensure the convergence of X t to the global minimum of U as t −→ ∞. Intuitively, this cooling schedule allows the diffusion process to have enough time to escape from the local minima and at the same time to explore the whole space in order to find the global minimum of U ; finally, the annealing process will "freeze" at the global minimum of U as ε t −→ 0. We would like to mention in particular the recent paper by Tang and Zhou [30], where the authors derived a convergence rate result of (1.1), where a fine estimation of the log-Sobolev inequality in Menz and Schlichting [20] for invariant measure µ * ε,o with low-temperature (small ε > 0) has been crucially used. Moreover, they have also analyzed a corresponding discrete time scheme of (1.1) and obtained a convergence rate result. Notice that in practice it is the discrete time scheme which is implemented to find the optimizer of U . Motivated by the above works, we will study in this paper the simulated annealing based on the kinetic (underdamped) Langevin dynamic, that is, the process (X, Y ) = (X t , Y t ) t≥0 defined by where θ > 0 is a fixed constant and (ε t ) t≥0 is a cooling schedule satisfying ε t −→ 0 as t −→ ∞. Moreover, we will also study a discrete time scheme of (1.2). More precisely, consider a sequence (∆t k ) k≥0 of time steps and define the discrete time grid 0 = T 0 < T 1 < · · · by The discrete time scheme will be defined on the grid (T k ) k≥0 . For ease of presentation and the convergence analysis later, we will write this scheme as a continuous time process (X, Y ) = (X t , Y t ) t≥0 by using the time freezing function η(t) := ∞ k=0 T k 1 {t∈[T k ,T k+1 )} . 
Then, the discrete time scheme process (X, Y ) is defined by Notice that the above scheme is the second-order scheme, rather than the Euler scheme of (1.2). This scheme can be explicitly re-written on the time grid (T k ) k≥0 and hence is implementable (see (2.3) and (2.4) for details). This second-order scheme has also been introduced and studied for standard underdamped Langevin dynamic, i.e., for (1.2) with constant temperature ε t ≡ ε 0 ; see, e.g., Cheng, Chatterji, Bartlett and Jordan [4], Zou, Xu and Gu [33], Gao, Gürbüzbalaban and Zhu [10] and Ma, Chatterji, Cheng, Flammarion, Bartlett and Jordan [18]. For the kinetic simulated annealing process (X, Y ) in (1.2), a convergence result without convergence rate has already been established by Journel and Monmarché [15]. In the present paper, we aim at obtaining some convergence rate results for both the simulated annealing process (X, Y ) in (1.2) and the discrete time scheme (X, Y ) in (1.3). To the best of our knowledge, we are the first to study the convergence of the kinetic simulated annealing algorithm in the discrete time framework. Let us also mention that, by cooling the parameter θ instead of ε in (1.2), Monmarché [22] studied an alternative kinetic simulated annealing process and derived a convergence rate for it (see Remark 2.10 in the following for a detailed comparison of his work and ours). The remainder of the paper is organized as follows. We first introduce some notations. In Section 2, we state the assumptions and our main results, and provide the main idea of the proofs, together with some discussions on the related literature. The proof of the convergence rate of (1.2) is given in Section 3, and the convergence rate of (1.3) is given in Section 4. Notations. (i) Denote by C ∞ (R d ), or simply C ∞ , the collection of all smooth (i.e., infinitely differentiable) functions f : R d −→ R. For f ∈ C ∞ , let ∇f, ∇ 2 x f, and ∆f denote the gradient, Hessian, and Laplacian of f , respectively. For a smooth vector field v : R d −→ R d , ∇ · v denotes the divergence of v. For vectors a, b ∈ R d , a, b is their inner product and |a| = a, a is the Euclidean norm of a. For two matrices M, N ∈ M d×d (R) their Frobenius inner product is defined as M, . For functions f and g defined on R + , the symbol f = O(g) means that f /g is bounded when some problem parameter tends to 0 or ∞. Main Results and Literature We will state our main convergence rate results and then discuss the main idea of proof as well as some related literature. Main Results We first provide some conditions on the (potential) function U : R d −→ R. Without loss of generality, we assume that min x∈R d U (x) = 0 throughout the paper. and all its derivatives have at most polynomial growth. The gradient ∇U is L-Lipschitz for constant L > 0. Moreover, U is (r, m)-dissipative in the sense that for some positive constants r > 0 and m > 0, (ii) The function U has a finite number of local minimizers and ∇ 2 U is non-degenerate at the local minimizers. Moreover, U has at least one non-global minimizer. Remark 2.2. (i) In the literature of the standard kinetic (underdamped) Langevin dynamic (i.e., (1.2) with constant temperature ε t ≡ ε 0 ), the dissipative condition on U is a standard Lyapunov condition to ensure the ergodicity of the process; see e.g., Eberle, Guillin and Zimmer [8] and Mattingly, Stuart and Higham [19]. 
The Lipschitz condition on ∇U is also usually imposed to obtain quantitative exponential convergence rate of the law of standard kinetic Langevin dynamic to its invariant measure; see, e.g., [8,10,18]. In particular, this condition ensures that the process (X, Y ) in (1.2) is well defined. (ii) The Lipschitz condition on ∇U , together with the dissipative condition, implies that there exists a positive constant K such that For a proof, see, e.g., Raginsky, Rakhlin and Telgarsky [27,Lemma 2]. Let m U and M U denote respectively the set of local minima and the set of global minima of U . We then define a constant E * > 0, the so-called critical depth of U , by The initial distribution of (X 0 , Y 0 ), denoted as µ 0 = L (X 0 , Y 0 ), has a C ∞ density function p 0 . Moreover, the initial Fisher information (ii) The function t −→ ε t is positive, non-increasing and differentiable. Moreover, for some time t 0 and a constant E > E * , one has ε t = E log t for all t > t 0 . Remark 2.4. To obtain the convergence of the simulated annealing using overdamped Langevin dynamic in (1.1), it is standard to consider the cooling schedule ε t = O 1 log t as t −→ ∞; see, e.g., [11,5,28,14,21,30]. For the kinetic simulated annealing process (1.2), this cooling schedule is also assumed in Journel and Monmarché [15] to obtain a convergence result without convergence rate. Moreover, it is proved in [15] that the convergence may fail for a faster cooling schedule. In Monmarché [22], for an alternative kinetic simulated annealing process with cooling schedule on parameter θ, a similar cooling schedule is also assumed. Moreover, our technical conditions on U in Assumption 2.1 are also generally motivated by those in [22]. Let us now provide a first convergence rate result on (X, Y ) defined by (1.2). By a time change argument, we can easily reduce (1.2) to the case with θ = 1. For this reason, we will always assume θ = 1 in the rest of paper. Also, recall that the condition min x∈R d U (x) = 0 is assumed throughout the paper. Theorem 2.5. Let Assumptions 2.1 and 2.3 hold true. Then, for any constants δ > 0 and α > 0, there exists some constant C > 0 such that Remark 2.6. (i) In [15], the authors used localization arguments to obtain a convergence result for (1.2) with conditions on the potential function U weaker than ours. They, however, did not derive the convergence rate. (ii) The convergence rate in Theorem 2.5 is the same to that in Miclo [21] and Tang and Zhou [30] for the simulated annealing using overdamped Langevin dynamic, and also to that in Chak, Kantas and Pavliotis [3] for the simulated annealing using generalized Langevin process. While higher order Langevin dynamics are often used in MCMC as accelerated versions compare to the overdamped Langevin dynamic (see for instance [18,10,23]), this is not the case in the simulated annealing problem. Indeed, it has been observed that the convergence behavior of the annealed process will be mainly determined by the potential function U , but not the process used; see for instance the discussion in [22, Remarks after Theorem 1]. We now present the convergence rate result for the discrete time scheme (X, Y ) as defined in (1.3). for all x, y ∈ R d , for some constant L > 0. And that the time step size parameters (∆t k ) k≥0 satisfies lim k→∞ T k = ∞ and lim sup k→∞ ∆t k T k < ∞. ( 2.2) Then for all constants δ > 0 and α > 0, there exists a constant C > 0, such that Remark 2.8. 
(i) The additional Lipschitz condition on ∇ 2 x U will be essentially used to control the discretization error in the discrete time scheme (1.3). (ii) We need that ∆t k −→ 0 as T k −→ ∞ to control the (cumulative) discretization error in the scheme (1.3). At the same time, ∆t k should not decrease too fast, so that T k −→ ∞ as k −→ ∞ and thus (X T k ) k≥0 can reach the global minima of U . This explains the condition (2.2). (iv) In Tang and Zhou [30], for the discrete simulated annealing based on the overdamped Langevin dynamic (1.1), the authors required the step size ∆t k satisfying which is a little stronger than our condition (2.2). Of course, (2.2) is only a sufficient condition to ensure the convergence rate result. Notice that (1.3) is a linear SDE on each time interval [T k , T k+1 ], so we can solve it explicitly. Namely, given the value where B t is the Brownian motion in (1.3). Therefore, we can implement (X, Y ) on the discrete time grid (T k ) k≥0 in an exact way. More precisely, by abbreviating ( with mean zero and covariance matrix Main Idea of Proofs and Related Literature Recall that for two probability measures µ, ν ∈ P(R d ) (or in P(R d × R d )) such that µ ν, the relative entropy KL(µ|ν) and the Fisher information I(µ|ν) are defined by For the simulated annealing process (1.1) using overdamped Langevin dynamic, let us denote µ o,t := L (X t ) and µ * o,ε ∝ exp(−U (x)/ε)dx the invariant measure of standard overdamped Langevin dynamic with constant temperature ε. To deduce their convergence rate result, Tang and Zhou [30] analyze the evolution of KL(µ o,t |µ * εt,o ). A key step consists in obtaining for a good constant ρ(ε t ) (depending on ε t ). The equality in (2.6) follows the so-called de Bruijn's identity, which is the entropy dissipation equation for the standard overdamped Langevin dynamic; see for instance Chapter 5.2. in the monograph Bakry, Ledoux and Gentil [2] or Otto and Villani [24] for a more general setting. The inequality in (2.6) follows from the log-Sobolev inequality (LSI). This step is also the classical way to deduce the exponential ergodicity of the standard overdamped Langevin dynamic; see for instance Arnold, Markowich, Toscani and Unterreiter [1] and Pavliotis [25]. For the kinetic simulated annealing process (1.2) (with θ = 1), let us denote µ t := L (X t , Y t ) and by the invariant measure of standard kinetic Langevin dynamic with constant temperature ε (i.e. (1.2) with ε t ≡ ε). It is, however, no longer helpful to use the relative entropy KL(µ t |µ * εt ) for the convergence analysis. Indeed, because the Brownian motion only appears in the y-direction in (1.2), the time derivative of the relative entropy only gives a part of the Fisher information: Thus, we can not use the LSI to proceed as in [30] for the overdamped simulated annealing. This is also the main reason why we cannot use the relative entropy to deduce the exponential ergodicity of the standard kinetic Langevin dynamic, for which a successful alternative method is the so-called hypocoercivity method, initiated in Desvillettes and Villani [6], Hérau [12,13] (see also Villani [31] for a detailed presentation). As in Monmarché [22], we will adopt the hypocoercivity method to study our kinetic simulated annealing processes in (1.2) and (1.3). More precisely, for two probability measures µ, ν ∈ P(R d × R d ), we consider the distorted relative entropy H γ (µ|ν) defined by where γ > 0 is a constant. 
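For the reader's convenience, the standard definitions of the relative entropy and the relative Fisher information used above are recalled below for probability measures µ ≪ ν, with h := dµ/dν; the distorted relative entropy H_γ(µ|ν) of (2.7) is a γ-weighted modification of these functionals in the hypocoercivity spirit, and its precise form is as given in the paper.

```latex
\mathrm{KL}(\mu\,\vert\,\nu) \;=\; \int \log h \, d\mu ,
\qquad
I(\mu\,\vert\,\nu) \;=\; \int \big\vert \nabla \log h \big\vert^{2}\, d\mu
\;=\; 4 \int \big\vert \nabla \sqrt{h}\,\big\vert^{2}\, d\nu .
```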
To deduce the convergence rate result of (1.2), we will choose γ to be a function of ε t , and then analyze the evolution of H γ(εt) (µ t |µ * εt ): and (2.10) The term (2.9) is from the (instantaneous) evolution of H γ(εt) (µ t |µ * εt ) along the kinetic diffusion (1.2) for fixed temperature ε t and the term (2.10) arises from the influence of (instantaneous) invariant measure µ * εt and γ(ε t ) on H γ(εt) (µ t |µ * εt ). For the term (2.9), with a carefully chosen ε t −→ γ(ε t ) and by adapting a computation strategy in Ma, Chatterji, Cheng, Flammarion, Bartlett and Jordan [18] for standard kinetic Langevin dynamic, we obtain where c(ε t ) = γ −1 (ε t ) and ρ εt > 0 is the constant in the log-Sobolev inequality (LSI) satisfied by µ * εt . Remark 2.9. As in [22] and [30], it is crucial to use a fine estimation on the constant ρ εt in the LSI for µ * εt (under Assumption 2.1), in particular for the case with small ε t . We refer to Menz and Schlichting [20], especially, Section 2.3.3 therein, for this issue. The treatment of the term (2.10) is classical (see e.g. Holley, Kusuoka and Stroock [14], Miclo [21], Tang and Zhou [30] in the case of the overdamped simulated annealing, or Monmarché [22] in the case of an alternative kinetic simulated annealing). First, we will apply the Lyapunov function technique as in Talay [29] to obtain a uniform bound on the moment of (X t , Y t ) in (1.2) for all t ≥ 0. Using this uniform moment bound, together with the explicit expression of µ * εt for all t ≥ 0, we can directly compute that, for some sub-exponential ω(ε), where ε t is the derivative of t −→ ε t . The term ω(ε t )|ε t | at the right-hand-side(r.h.s.) of (2.12) is positive. In order to make H γ(εt) (µ t |µ * εt ) decrease along the time and to get an explicit decay rate, we need to make sure that the term c(ε t )ρ εt in (2.11) is greater than the term ω(ε t )|ε t | in (2.12). This requires a careful choice of ε −→ γ(ε) as well as some conditions on the cooling schedule t −→ ε t as in Assumption 2.3. Remark 2.10. (i) The above sketch of proof, as well as the use of the hypocoercivity method with the distorted entropy (2.7), is similar to that in Monmarché [22] for a different kinetic annealing process. More precisely, [22] studies the process with ε s −→ 0 as s −→ ∞. As our cooling schedule is on the diffusion parameter, we need to design another function ε t −→ γ(ε t ) to compute the distorted entropy H γ(εt) (µ t |µ * εt ). More importantly, [22] reformulated the problem into a Bakry-Emery framework to establish a contraction inequality similar to (2.11), while we apply a different computation strategy adapted from [18]. In particular, as shown in [18], this computation strategy can be adapted to the discrete time setting for convergence analysis of numerical schemes, a subject that is not addressed in [22]. (ii) From a numerical point of view, our kinetic annealing process (1.2) seems to be more convenient for discrete time simulation than that in (2.13). Indeed, as ε s −→ 0, the Lipschitz constant of the coefficient functions in SDE (2.13) will explode, so the corresponding time discretization error in the numerical analysis can be much harder to control. For the convergence analysis of the discrete time process (X, Y ) in (1.3), the main idea will be quite similar. Denote µ t := L (X t , Y t ). For ease of presentation, let us . 
Then, with the same distorted entropy function defined by (2.8), we need to compute the difference, for each k ≥ 0, For the term (2.14), we can adapt the computation in [18] for the standard kinetic Langevin process. Concretely, we can interpolate ] satisfies a Fokker-Planck equation with a (partially) frozen coefficient function. We can then apply the same computation strategy as in the continuous time setting to obtain a (asymptotic) contraction estimation. Similarly, the term (2.15) can be handled by interpolating together with a uniform moment estimation on (X, Y ). Finally, with a good choice of the time step size parameters (∆t k ) k≥0 , we can combine the estimations of (2.14) and (2.15) to obtain a convergence rate result for the process (X, Y ) on the discrete time grid (T k ) k≥0 . In particular, the above analysis on H γ(ε k ) (µ k |µ * k ) extends the hypocoercivity method for discrete time Langevin dynamic in [18] to the setting with time-dependent coefficient. We also notice that most existing papers on the numerical aspects of kinetic type equations by hypocoercivity method mainly focused on the discretization of corresponding Fokker-Planck equations, as in Dujardin, Hérau and Lafitte [7] and Porretta and Zuazua [26]. Convergence of the continuous-time simulated annealing In this section, we will present the convergence analysis of continuous-time kinetic annealing process (1.2) in order to prove Theorem 2.5. As discussed in Section 2.2, our main strategy consists in studying the evolution of where L > 0 is the Lipschtiz constant of ∇U in Assumption 2.1. For this purpose, we will study separately the two terms at the r.h.s. of (2.8). For simplicity of presentation, we will assume that θ = 1. Denote z = (x, y) as a single variable and Z = (X, Y ) as the process satisfying (1.2). For each t ≥ 0, denote by µ t = L (Z t ) the marginal distribution of Z. Similar to [22,Proposition 4], under the smoothness and growth conditions of U in Assumption 2.1, µ t has a strictly positive smooth density function on R 2d , denoted by p t . Recall also that, for each ε > 0, the invariant probability measure of (1.2) with constant temperature ε t ≡ ε is denoted by µ * ε , i.e., with some renormalized constant C ε > 0, Preliminary analysis of continuous kinetic annealing process Let us first recall a fine result on the log-Sobolev inequality from [ Next, we give a moment estimation on the solution (X t , Y t ) t≥0 to (1.2). Proof. Inspired by [22,Lemma 7], we consider the Lyapunov function with the constant r > 0 as given in Assumption 2.1. For any smooth function , which corresponds to the generator of the diffusion process (1.2) at temperature ε > 0. Then Further, by Assumption 2.1, for any constant c 1 > 0. Choosing c 1 = r 2 , and by (2.1) as well as the fact that β < 1 2 , it follows that for some positive constants c 2 , c 3 independent of ε. Using again β < 1 2 and that U is of quadratic growth (see (2.1)), we obtain that for some constant c 4 > 0. Therefore, for some positive constants c 5 and c 6 independent of ε > 0, we have Notice that the Lyapunov function R(x, y) does not depend on ε > 0, so We can then apply the Grönwall Lemma to conclude that (E[R(X t , Y t )]) t≥0 is uniformly bounded. To conclude, we notice that (see also [22,Lemma 7]) there is some constant C > 0 such that Therefore, Evolution of distorted entropy with fixed temperature We now compute ∂ µ,t H γ(εt) (µ t |µ * εt ) in (2.9). 
Although the computation is mainly adapted from Ma, Chatterji, Cheng, Flammarion, Bartlett and Jordan [18], we nevertheless provide most of details for completeness. For simplicity, for a function f : Recalling that p t (resp. p * εt ) represents the density function of µ t (resp. µ * εt ), we also write Notice that the marginal distribution p t satisfies the kinetic Fokker-Planck equation , we can rewrite (3.1) as the following equivalent form: To simplify the computation afterwards, we follow the idea in [18] to further add a divergence-free term ε t ∇ y log p t −ε t ∇ x log p t into the equation. By direct computation, it follows that where v t is the vector flow that transports µ t towards µ * εt : Let h t := pt p * ε t and S := Then the distorted relative entropy H γ(εt) (µ t |µ * εt ) can be written as is the first order variational derivative of H γ(εt) (µ t |µ * εt ) at µ t . By Proposition 2.22 in [17], we have Then using (3.3) and (3.4), a simple calculation leads to For (3.5), it equals to (3.8) In addition, (3.6) can be simplified as For (3.7), we write it as the sum of the following three terms: Further, by [18, Lemma 9], we have Combining the above with (3.8) and (3.9), we derive , we obtain can get rid of the term involving with the second-order derivatives and obtain that Note that the r.h.s. of the above inequality resembles the Fisher information . In order to use the log-Sobolev inequality satisfied by µ * εt (Proposition 3.1) and connect it to the distorted relative entropy (3.4), we will prove in Lemma 3.3 that, for sufficiently large t ≥ 0, where c(ε t ) := 1 γ(εt) = εt 4(1+L 2 ) and ρ εt is given in Proposition 3.1. It follows that Proof. The proof is very similar to that of [22,Lemma 8], so we only present the main idea here. By the choice of γ(ε t ) and c(ε t ), we have that (ε t , c(ε t ), ρ εt , γ(ε t )) −→ (0, 0, 0, ∞) as t −→ ∞. Thus, it is sufficient to prove that for sufficiently large t ≥ 0, We first compute the characteristic equation of M t − 1 2 I 2d : By diagonalizing ∇ 2 x U , with (∇ 2 x U ) i denoting the i-th eigenvalue of ∇ 2 x U , we obtain d quadratic equations on λ in the form 2ρε t I 2d for sufficiently large t. However, this is sufficient to obtain the convergence of our kinetic annealing process (1.2), as long as we can ensure the distorted relative entropy H γ(εt) (µ t |µ * εt ) is finite in any finite time interval (see Lemma 3.8 below). We also want to point out that the choice of γ(ε t ) and c(ε t ) is not unique to ensure that M t satisfies (3.10). To conclude this subsection, we summarize the above computation in the following proposition: and ρ εt is given in Proposition 3.1. Proof. We follow the proof of [22,Lemma 15]. In the following, the constant C > 0 may change from line to line. Proof of Theorem 2.5 For any t ≥ 0, letZ t = (X t ,Ỹ t ) be a random variable (on the same initial probability space) following distribution µ * εt . Then, for any δ > 0, where we have used the Csiszár-Kullback-Pinsker inequality, the definition of the distorted relative entropy (2.7), and the fact that γ(ε t ) = 4 εt (1 + L 2 ) ≥ 1 for sufficiently large t ≥ 0. We then conclude the proof by Lemma 3.7 and Proposition 3.9. For the estimation of P(U (X t ) > δ), we have the following classical result (see e.g. In particular, let ε t satisfy Assumption 2.3. Then for sufficiently large t > 0, we have To provide an estimation on H γ(εt) (µ t |µ * εt ), we first prove that it is uniformly bounded on any finite time horizon [0, t]. 
, and notice that Then, by the log-Sobolev inequality, it suffices to prove that I(µ t |µ * εt ) is uniformly bounded on any finite horizon. Proposition 3.9. Let Assumptions 2.1 and 2.3 hold and choose γ(ε t ) = 4 εt 1 + L 2 . Then, for any α > 0, there exists C > 0, such that for sufficiently large t, Proof. First, by Lemma 3.8, H γ(εt) (µ t |µ * εt ) is finite for any t ≥ 0. Next, by (2.8), Proposition 3.5 and Lemma 3.6, for sufficiently large t ≥ 0, we have Further, by Proposition 3.1, we have ρ ε = χ(ε) exp − E * ε with some sub-exponential function χ(ε). Then, by Assumption 2.3 and Proposition 3.5, for sufficiently large t ≥ 0, we have that for all α > 0, According to Proposition 3.2, E U (x) + |Y t | 2 is uniformly bounded, so we obtain Thus, there exists some positive constantsc 1 andc 2 such that for sufficiently large t, Because E > E * , for the terms of H γ(εt) (µ t |µ * εt ) in the r.h.s. of the above inequality, the first term dominates the second term when α > 0 is small. As a result, for some positive constants c 1 and c 2 , we have Set α < 1 2 1 − E * E , then for sufficiently large t 0 and for all t ≥ t 0 , a Grönwall type argument yields that The proof is then completed. Convergence of the discrete-time simulated annealing Following the sketch of proof in Section 2.2, we provide in this section the convergence analysis of the discrete time kinetic simulated annealing process (X, Y ) in (1.3) and prove Theorem 2.7. Without loss of generality, we assume that the Lipschitz constant of ∇U (x) satisfies L ≥ 1. We also stay in the context with θ = 1 for the proof. Denote z = (x, y) and Z := (X, Y ). By the explicit solution of (X t , Y t ) in (2.3), we can see that the marginal distribution µ t := L (Z t ) is denoted by p t has a strictly positive and smooth density function p t (z). Also recall that µ * ε (dz) = p * ε (z)dz with p * ε (x, y) ∝ exp − 1 ε (U (x) + |y| 2 2 ) dz. For the ease of notation, we abbreviate Preliminary analysis of discrete kinetic annealing process We first provide a uniform boundedness result on the moment of (X, Y ). To this end, we introduce two constants where r > 0 is the constant given in Assumption 2.1. One can check thatβ ≤ 1 2 ,β < r andβ 2 r/3 < 1. Proof. Similar to the proof of Proposition 3.2, we introduce the Lyapunov functioñ R(x, y) = U (x) + |y| 2 2 +βx · y for all z = (x, y) ∈ R d × R d . Then, to prove (4.1), it is equivalent to prove that E R (X k , Y k ) k≥0 is uniformly bounded. Denote θ k := 1 − e −∆t k ≤ ∆t k , so that the discrete time scheme (2.4) can be rewritten as where (D x (k), D y (k)) is a Gaussian vector in R d × R d with mean zero and covariance matrix Σ k given by (2.5). Then, by the Lipschitz property of ∇ x U , we derive Further, we compute directly that Therefore, by the definition ofR, it follows that Using the fact thatβ ≤ 1 2 , θ k ≤ ∆t k ≤ 1 L ≤ 1, and the expressions of Σ ij (k) in (2.5), we have Thus, by (r, m)-dissipativity of U and the L-Lipschitz of ∇ x U , there exists some constant b such that for arbitrary c. Now choose c = r 2 . Notice thatβ < r by its definition, then Using (2.1), we derive We thus obtain Rearrange the above, we derive, for some constant C > 0, that By Proposition 4.1, we can easily obtain that E |Y t | 2 t≥0 is also uniformly bounded. Moreover, we have the following fine estimation on the increment of X: Proof. For t ∈ [T k , T k+1 ], notice that X t = X k + t T k Y s ds. It then follows by Cauchy-Schwarz inequality that for some constant C independent of k ≥ 0. 
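To make the structure of the discrete scheme concrete, here is a sketch (in Python) of the exact one-step update for the gradient-frozen linear SDE on [T_k, T_k + Δt_k] with θ = 1: the gradient ∇U is evaluated at X_{T_k}, the remaining Ornstein-Uhlenbeck dynamics is integrated in closed form, and the Gaussian increment carries the covariance obtained from the corresponding stochastic integrals. The covariance expressions below are derived here from those integrals and are meant to illustrate the shape of (2.3)-(2.5) rather than to reproduce the paper's formulas verbatim.

```python
import numpy as np

def kinetic_annealing_step(x, y, grad_u, eps, dt, rng):
    """One exact step of the gradient-frozen kinetic Langevin scheme (theta = 1).

    On [T_k, T_k + dt] the SDE dX = Y dt, dY = (-grad_u - Y) dt + sqrt(2*eps) dB,
    with grad_u frozen at x, is linear and can be integrated in closed form.
    x, y are 1-D arrays of the same dimension d.
    """
    d = x.shape[0]
    g = grad_u(x)
    e1 = 1.0 - np.exp(-dt)             # theta_k = 1 - e^{-dt}
    e2 = 1.0 - np.exp(-2.0 * dt)

    # Mean of (X, Y) after time dt with frozen gradient g.
    x_mean = x + e1 * y - (dt - e1) * g
    y_mean = np.exp(-dt) * y - e1 * g

    # Covariance of the Gaussian increment (per coordinate), from the OU integrals:
    #   Var(Dy)     = eps * (1 - e^{-2 dt})
    #   Var(Dx)     = 2 * eps * (dt - 2*(1 - e^{-dt}) + (1 - e^{-2 dt}) / 2)
    #   Cov(Dx, Dy) = eps * (1 - e^{-dt})^2
    var_y = eps * e2
    var_x = 2.0 * eps * (dt - 2.0 * e1 + 0.5 * e2)
    cov_xy = eps * e1 ** 2
    chol = np.linalg.cholesky(np.array([[var_x, cov_xy], [cov_xy, var_y]]))

    noise = chol @ rng.standard_normal((2, d))   # rows: (Dx, Dy), correlated coordinatewise
    return x_mean + noise[0], y_mean + noise[1]

# Example usage (hypothetical quadratic potential):
# rng = np.random.default_rng(0)
# x, y = np.ones(3), np.zeros(3)
# x, y = kinetic_annealing_step(x, y, grad_u=lambda z: z, eps=0.5, dt=0.1, rng=rng)
```

Iterating this update along the grid (T_k)_{k≥0}, refreshing ∇U at each X_{T_k} and updating ε_k and Δt_k between steps, gives an implementable form of (1.3); a toy end-to-end run in this spirit is sketched after the concluding proof below.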
One-step entropy evolution with fixed temperature In this subsection, we aim at estimating the difference term (2.14), which is the one-step evolution of H γ(ε) (µ t |µ * ε ) with fixed ε. We first provide the Fokker-Planck equation satisfied by µ t on each interval [T k , T k+1 ]. Recall that p t denotes the density function of µ t . For t ∈ [T k , T k+1 ] and z k ∈ R d × R d , denote by Proof. Conditioned on the σ-field F T k := σ(Z t : 0 ≤ t ≤ T k ), Z follows a linear SDE with deterministic parameters on [T k , T k+1 ] (see (1.3)). Thus, the conditional density function z −→ p(z | z k ) satisfies the Fokker-Planck equation Notice that v t,k is independent of z k . We can then complete the proof by integrating both sides of the above equality with respect to z k under the measure µ k (z k ). For t ∈ [T k , T k+1 ], we write the distorted relative entropy Then, for t ∈ [T k , T k+1 ], by Lemma 4.4, we have In the above, the first order variational derivative of H γ(ε k ) (µ t |µ * k ) is given by where we use (∇ z ) * = (∇ * x ,∇ * y ) := −∇ x · + 1 ε k ∇ x U · , −∇ y · + y ε k · to denote the adjoint operator of ∇ z with respect to µ * k , in the sense that for any f, g ∈ L 2 (µ * k ), We then follow the same computation steps as in (3.5), (3.6) and (3.7) to obtain that the term (4.2) is equal to , term (4.3) can be directly computed as Consequently, (4.3) is equal to Using with (4.4) and Lemma 4.5, we derive that Lemma 4.6. It holds that Proof. Using a · b ≤ ε k 4 |a| 2 + 1 ε k |b| 2 and p t|T k (z|z k )p k (z k ) = p t (z)p T k |t (z k |z) where the backward conditional density function p T k |t (z k |z) := p(Z k = z k |Z t = z), and by Jensen's inequality, we can show that The ∇ y logh t term can be treated similarly. The proof then completes. Lemma 4.7. Assume that ∆t k ≤ 1/L for every k and ∇ 2 x U is L -Lipschitz. Then Proof. Without loss of generality, we assume L ≥ 1. Using the same proof as the one of [18,Lemma 10], we can show that where (t) = e t−T k − 1 − (t − T k ). As a result, On the one hand, where the last inequality is the case due to the L -Lipschitz of ∇ 2 x U . On the other hand, As a result, we derive, Comparing the above inequality with (4.8), to complete the proof of the lemma, we only need to prove E µ k J t (X k ), SJ t (X k ) F ≤ 144d 3/2 L 2 ∆t 2 k . (4.10) By Cauchy-Schwarz inequality and matrix norm inequality, we have It is straightforward to see that Because ∇ 2 x U 2 ≤ L, which is due to the L-Lipschitz of ∇ x U , and because ∆t k ≤ 1/L, we have (t) ≤ ∆t 2 Consequently, As a result, J t (X k ) 2 ≤ √ 2 max 2L 2 ∆t 2 k , 6L∆t k = 6 √ 2L∆t k . This, together with (4.11), immediately leads to (4.10). Recall that ρ k is the constant in the LSI satisfied by µ * k in Proposition 3.1, and γ(ε k ) = 4 ε k 1 + L 2 . Denote c(ε k ) := 1 γ(ε k ) = ε k 4(1+L 2 ) and As the discrete time analogue to Lemma 3.3, we have the following inequality forM k . The proof is omitted because it is similar to that of Lemma 3.3. We finally present the main result in this subsection. Proposition 4.9. Let Assumptions 2.1, 2.3 hold. Suppose in addition that ∇ 2 x U is L -Lipschitz and that ∆t k ≤ min η * 2 , 1 L for sufficiently large k ≥ 0. Then, there exist some constants C 1 > 0 and C 2 > 0 such that, for sufficiently large k ≥ 0 and for all α > 0, By computing d dt e C t T − ( E * E +α ) k H γ(ε k ) (µ t |µ * k ) and then integrating it from T k to T k+1 , we obtain where C 1 > 0 and C 2 > 0 are some constants independent of k. The proof then completes. 
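Returning to the adjoint operator (∇_z)* introduced in this subsection, the following short LaTeX sketch records the weighted integration by parts behind it (assuming f and g decay fast enough; notation as in the text):

\[
\int \nabla_x f \cdot g \, p^*_k \, dz
= -\int f \, \nabla_x \!\cdot\! \bigl(g \, p^*_k\bigr)\, dz
= \int f \Bigl(-\nabla_x \!\cdot g + \tfrac{1}{\varepsilon_k}\,\nabla_x U \cdot g\Bigr) p^*_k \, dz ,
\]

since $p^*_k(x,y) \propto \exp\!\bigl(-(U(x)+|y|^2/2)/\varepsilon_k\bigr)$; the $\nabla_y$ case is identical with $\tfrac{1}{\varepsilon_k}\, y$ in place of $\tfrac{1}{\varepsilon_k}\nabla_x U$, which yields exactly the operator $(\nabla^*_x, \nabla^*_y)$ used above.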
The term P U (X k ) > δ can be handled by Lemma 3.7 to obtain the desired convergence rate. For the convergence of 2H γ(ε k ) (µ k |µ * k ), we can apply Proposition 4.9 and Lemma 4.11 to show that there are some positive constants c 1 , c 1 , c 2 , and k 0 ≥ 0 such that Finally, by Lemma 4.12 which will be presented and shown momentarily, we know that H γ(ε k 0 ) (µ k 0 |µ * k 0 ) is finite. Then, there exists a positive constant C > 0 such that This completes the proof. We finally provide a discrete time analogue of the result in Lemma 3.8, which is needed in the proof of Theorem 2.7. Lemma 4.12. For every k ≥ 0, the distorted entropy H γ(ε k ) (µ k |µ * k ) is finite. Proof. As in the proof of Lemma 3.8, it suffices to prove that the Fisher information I(µ k |µ * k ) is finite for every finite k ≥ 0. Again, we use C to denote a generic positive constant whose value may change from line to line. For t ∈ [T k , T k+1 ], withh t = p t p * k , recall that Using a similar computation to the one in the calculation of H γ(ε k ) (µ t |µ * k ) in Section 4.2, we can obtain that d dt I(µ t |µ * k ) = ∇ z δI(µ t |µ * k ) δ µ t , v t,k p t (z) dz Following the same steps as in (3.5), (3.6) and (3.7), we further obtain that ∇ z δI(µ t |µ * k ) δ µ t , v t,k p t (z) dz (4.16) Further, by similar arguments to the ones in Lemma 4.5, and with A t (z) as given by (4.6), we have Following the same arguments as in Lemma 4.6 and 4.7, we can further obtain the inequalities which implies that I(µ k+1 |µ * k ) ≤ e (1+L)∆t k I(µ k |µ * k ) + C∆t 3 k . (4.18) Finally, following similar computation steps to the ones in Lemma 4.11, we obtain that, for all ε > 0, The above, together with (4.18), implies that where the r.h.s. of the last inequality in the above is clearly finite for every k ≥ 0.
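To spell out why the right-hand side above is finite for every finite k, the following generic discrete Gronwall-type iteration can be applied to a recursion of the form (4.18) (a sketch; in the proof the reference measure is also switched from µ*_k to µ*_{k+1} via the ε-inequality quoted above):

\[
a_{k+1} \le e^{c\,\Delta t_k}\, a_k + b_k
\quad (a_j, b_j \ge 0,\ c \ge 0)
\;\Longrightarrow\;
a_{k+1} \le e^{c\,T_{k+1}}\, a_0 + \sum_{j=0}^{k} e^{c\,(T_{k+1}-T_{j+1})}\, b_j
\le e^{c\,T_{k+1}} \Bigl(a_0 + \sum_{j=0}^{k} b_j\Bigr),
\qquad T_{k+1} := \sum_{j=0}^{k} \Delta t_j .
\]

With $a_k$ the Fisher information, $c = 1+L$ and $b_j = C\,\Delta t_j^3$, every term on the right is finite for finite $k$, which gives the finiteness claimed in Lemma 4.12.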
9,233
2022-06-13T00:00:00.000
[ "Physics", "Computer Science", "Mathematics" ]
C-Terminal Binding Protein: Regulator between Viral Infection and Tumorigenesis C-terminal binding protein (CtBP), a transcriptional co-repressor, significantly influences cellular signaling, impacting various biological processes including cell proliferation, differentiation, apoptosis, and immune responses. The CtBP family comprises two highly conserved proteins, CtBP1 and CtBP2, which have been shown to play critical roles in both tumorigenesis and the regulation of viral infections. Elevated CtBP expression is noted in various tumor tissues, promoting tumorigenesis, invasiveness, and metastasis through multiple pathways. Additionally, CtBP’s role in viral infections varies, exhibiting differing or even opposing effects depending on the virus. This review synthesizes the advances in CtBP’s function research in viral infections and virus-associated tumorigenesis, offering new insights into potential antiviral and anticancer strategies. Introduction C-terminal binding protein (CtBP), identified by Boyd et al. during research on the human adenovirus type 5 E1A gene, is a transcriptional co-repressor that binds to the C-terminal region of the E1A protein [1].Alternative RNA splicing and post-translational modification generate multiple CtBP isoforms with distinct temporal and spatial expression patterns, suggesting isoform-specific functions.The CtBP family controls cellular processes by acting as transcriptional corepressors and cytoskeletal regulators [2].Overexpression of CtBP is often associated with processes such as epithelial-mesenchymal transition (EMT) and self-renewal of tumor stem cells [3,4], which promotes the development and progression of a variety of tumors, such as hepatocellular carcinoma, prostate cancer, gastric cancer, breast cancer, etc. Vertebrates possess two homologous CtBP genes, which encode distinct proteins: CtBP1 and CtBP2.These proteins are frequently overexpressed in cancers and function as oncogenic transcriptional coregulators, which may promote cancer development and progression, leading to tumor invasion, which is associated with poor prognosis [5].Recent research suggests that CtBP regulates the replication of multiple viruses and contributes to the development of virus-associated tumors.This review aims to examine the role of CtBP in viral replication and virus-associated tumorigenesis, with the goal of identifying novel targets and strategies for preventing and treating viral diseases and virus-associated tumors. 
Overview of CtBP The CtBP family represents a group of evolutionarily conserved transcriptional corepressors that are indispensable for normal development in vertebrates and non-vertebrates.These proteins collaborate with DNA-binding transcription factors to regulate the expression of target genes, thereby influencing various cellular biological processes [1].CtBPs Viruses 2024, 16, 988 2 of 16 are abundant in, but not restricted to, the nucleus, where they function as transcriptional co-repressors, whereas in the cytoplasm they control membrane translocation [6].As transcriptional corepressors, it has been extensively demonstrated that CtBPs interact with diverse transcription factors to form transcriptional complexes, recruiting a variety of chromatin-modifying enzymes and ultimately leading to the repression of gene transcription associated with embryonic development, cell differentiation and division, and tumorigenesis [6][7][8][9][10].Early studies demonstrated that CtBP can modulate cellular senescence and regeneration in primary fibroblasts by suppressing the transcription of tumor suppressors [11].It was further established that CtBP acts as a negative regulator of tumor suppressors p16INK4A, E-cadherin, phosphatase, and tensin homolog (PTEN) [11,12], while also serving as a downstream target of tumor suppressors INK4A/Arf, APC, and HIPK2 [13,14], leading to cell apoptosis.The CtBP family encompasses multiple isoforms due to alternative RNA splicing and post-translational modifications [15].These isoforms exhibit distinct temporal and spatial expression patterns, suggesting functional specificity among them [16]. CtBPs exert their functions by recognizing the Pro-X-Asp-Leu-Ser (PXDLS) motif within DNA-binding proteins.They can dimerize and potentially influence gene expression by connecting his-tone deacetylases (HDACs) to DNA-binding factors [9].CtBP proteins interact not only with PXDLS-containing proteins but also with those lacking this motif, such as HDAC1, HDAC2, and HDAC5 [8].Additionally, they can not only interact with the polycomb protein hPc2 and form a complex with CtIP (CtBP-interacting protein) [17] but also play a role in Golgi apparatus maintenance [18].One CtBP family member, rCtBP1/BARS50 (brefeldin A ADP-ribosylation substrate 50), has been shown to possess acyltransferase activity, contributing to Golgi apparatus formation and maintenance [6,19].Furthermore, CtBP dysfunction has been implicated in altered cell adhesion and neurodegenerative diseases [20,21], among others (Figure 1).expression of target genes, thereby influencing various cellular biological processes [1].CtBPs are abundant in, but not restricted to, the nucleus, where they function as transcriptional co-repressors, whereas in the cytoplasm they control membrane translocation [6]. 
As transcriptional corepressors, it has been extensively demonstrated that CtBPs interact with diverse transcription factors to form transcriptional complexes, recruiting a variety of chromatin-modifying enzymes and ultimately leading to the repression of gene transcription associated with embryonic development, cell differentiation and division, and tumorigenesis [6][7][8][9][10].Early studies demonstrated that CtBP can modulate cellular senescence and regeneration in primary fibroblasts by suppressing the transcription of tumor suppressors [11].It was further established that CtBP acts as a negative regulator of tumor suppressors p16INK4A, E-cadherin, phosphatase, and tensin homolog (PTEN) [11,12], while also serving as a downstream target of tumor suppressors INK4A/Arf, APC, and HIPK2 [13,14], leading to cell apoptosis.The CtBP family encompasses multiple isoforms due to alternative RNA splicing and post-translational modifications [15].These isoforms exhibit distinct temporal and spatial expression patterns, suggesting functional specificity among them [16].CtBPs exert their functions by recognizing the Pro-X-Asp-Leu-Ser (PXDLS) motif within DNA-binding proteins.They can dimerize and potentially influence gene expression by connecting his-tone deacetylases (HDACs) to DNA-binding factors [9].CtBP proteins interact not only with PXDLS-containing proteins but also with those lacking this motif, such as HDAC1, HDAC2, and HDAC5 [8].Additionally, they can not only interact with the polycomb protein hPc2 and form a complex with CtIP (CtBP-interacting protein) [17] but also play a role in Golgi apparatus maintenance [18].One CtBP family member, rCtBP1/BARS50 (brefeldin A ADP-ribosylation substrate 50), has been shown to possess acyltransferase activity, contributing to Golgi apparatus formation and maintenance [6,19].Furthermore, CtBP dysfunction has been implicated in altered cell adhesion and neurodegenerative diseases [20,21], among others (Figure 1).CtBP1/BARS acts both as an oncogenic transcriptional co-repressor and as a fissioninducing protein required for membrane trafficking and Golgi complex partitioning during mitosis, hence for mitotic entry.CtBP1/BARS acts both as an oncogenic transcriptional co-repressor and as a fissioninducing protein required for membrane trafficking and Golgi complex partitioning during mitosis, hence for mitotic entry.Filograna et al. investigated N-(3,4-dichlorophenyl)-4-{[(4-nitrophenyl)carbamoyl]amino}benzenesulfonamide, a small molecule inhibitor that disrupts the CtBP1/BARS Rossmann fold, thereby inhibiting both its pro-tumorigenic transcriptional activity and its role in mitotic entry [6].Considering CtBP's association with the transcriptional corepressor and its self-association, Dcona et al. 
developed a family of CtBP dehydrogenase inhibitors based on the parent 2-hydroxyimino-3-phenylpropanoic acid (HIPP).These inhibitors specifically disrupt cancer cell viability, abrogate CtBP's transcriptional function, and block polyp formation in a mouse model of intestinal polyposis that depends on CtBP's oncogenic functions [22].Another study also suggests that substrate-competitive inhibition of CtBP dehydrogenase activity is a potential mechanism to reactivate tumor-suppressor gene expression as a therapeutic strategy for cancer [23].Moreover, CtBP 1 and 2 possess regulatory disomer-specific 2-hydroxyacid dehydrogenase (D2-HDH) domains that provide an attractive target for small molecule intervention.Crystal structures of CtBP 1 and 2 revealed that MTOB binds in an active site containing a dominant tryptophan and a hydrophilic cavity, neither of which are present in other D2-HDH family members [24].However, ITC experiments show that HIPP binds to CtBP with an affinity 1000-fold greater than that of MTOB, making it a more promising candidate for selective CtBP1/BARS inhibition [24].In conclusion, while CtBP has emerged as a promising target for cancer therapy, its potential for antiviral applications remains largely unexplored.Existing antiviral treatments do not appear to directly target CtBP, but our review suggests several promising cellular pathways associated with CtBP that could be exploited for novel therapeutic development.Focusing on these pathways holds potential for the creation of effective antiviral drugs. Given CtBP's extensive regulatory roles, it is crucial to explore the specific functions of different CtBP isoforms and their interactions with chromatin-modifying enzymes.Future studies should focus on the development of isoform-specific inhibitors to target CtBP-related pathways selectively. CtBP1 CtBP1, a crucial member of the CtBPs family, is encoded by a gene located on human chromosome 4q21.21[25].This protein interacts with a variety of DNA-binding repressors and is involved in the regulation of gene expression.Srinivasan et al. found that the mammalian polycistronic ribosomal (PcG) protein YY1 binds to polycistronic ribosomal response elements in Drosophila embryos and recruits other PcG proteins to DNA.However, in CtBP mutants, recruitment of PcG proteins and corresponding histone modifications does not occur.This study demonstrates that CtBP is required for YY1 DNA binding and PcG recruitment [26].Three years later, Chinadurai discovered that CtBP mediates coordinated histone modification by deacetylation and methylation of histone H3-Lysine 9 and demethylation of histone H3-Lysine 4 [27].CtBP also recruits the small ubiquitin-like modifier (SUMO) conjugating E2 enzyme UBC9 and the SUMO E3 ligase (HPC2), which regulates the SUMOylation of transcription factors.It has also been shown that CtBP1 antagonizes the activity of the global transcriptional coactivator p300/CBP [27].Some other research shows that CtBP1 also mediates the recruitment of chromatin-modifying enzymes, such as HDACs and histone methyltransferases (HMTases), to specific gene promoter regions, thereby repressing gene transcription [28].As mentioned above, the activity of CtBP1 is modulated by various factors such as phosphorylation [29], SUMOylation [30], and NADH binding [31]. 
Research has demonstrated that CtBP1 is overexpressed in a wide range of cancers and is associated with increased tumor invasiveness and metastasis, playing a role in cancer development and progression through multiple mechanisms [32,33].Interestingly, CtBP has also been shown to play an important role in mouse neurodevelopment.The experimental data of Deng et al. showed that CtBP1 directly binds to and transcriptionally represses the promoters of melanoma cell-associated genes (pyruvate carrier 1 and 2 genes, MPC1 and MPC2), leading to an increase in the level of free NADH in the cell membrane and nucleus, and promoting the proliferation and migration of melanoma cells [34,35].Thus, CtBP1 is not only at the center of tumor metabolism and transcriptional control but also participates in a series of biochemical reactions by regulating intracellular NADH content.Hamada et al. found that CtBP1 expression was detected in the nuclei and cytoplasm of Purkinje cells, in the nuclei of granule cells and molecular layer (ML) cells, and in granule cell axons [36]. Targeting CtBP1's interaction with chromatin-modifying enzymes presents a promising therapeutic strategy.Future research should focus on developing small molecules or peptides that disrupt CtBP1's binding to its partners, thereby inhibiting its oncogenic activity. CtBP2 CtBP2, another critical member of the CtBP family, is encoded by a gene located on chromosome 21q21.3[25].It harbors a highly conserved functional region, the unique N-terminal domain (NTR), which comprises 20 amino acids essential for its localization and activity, locates it primarily in the nucleus, and enables it to specifically recognize and interact with proteins containing the PXDLS motif [37].The C-terminus of CtBP2 forms a distinct domain with NAD + and NADH binding sites, facilitating the dimerization of CtBP monomers into homodimers or heterodimers [38].NAD(H) promotes tetrameric assembly of human CtBP2, and mutants with an unstable CtBP2 tetramer are defective in oncogenic activity [39]. In addition to being associated with normal cell carcinogenesis, CtBP2 has also been implicated in embryonic development, adipogenesis, and angiogenesis [38,40].For example, Li et al. showed that CtBP2 promotes high glucose-induced cell proliferation, angiogenesis, and cell adhesion through the Akt signaling pathway, and that CtBP2 may be a potential target for diabetic retinopathy (DR) prevention [41].Wang et al. used Co-IP to show that CtBP2 interacts with receptor-interacting protein 140 (RIP140).They subsequently found that inflammation, apoptosis, and permeability levels were significantly elevated in CtBP2-overexpressing pulmonary microvascular endothelial cells (PMVECs) in a lipopolysaccharide (LPS)-induced acute lung injury model, but when RIP140 was silenced, these levels were suppressed.Thus, it is suggested that CtBP2 overexpression reversed the inhibitory effect of RIP140 silencing on LPS-induced inflammation, apoptosis, and permeability levels in HPMECs [42].Sekiya et al. found that CtBP2 also has an important correlation between cellular metabolic levels and the pathogenesis of obesity in humans.Their results showed that CtBP2 can monitor cellular redox status and maintain coordinated metabolic homeostasis, and that the dysfunction of CtBP2 may be a key point in the development of obesity [43]. 
CtBP2's involvement in various physiological processes and its overexpression in cancers make it an attractive therapeutic target.Future research should investigate the development of CtBP2-specific inhibitors and their potential in cancer therapy. CtBP and Tumors In recent years, numerous studies have found that CtBP is closely related to tumors and is highly expressed in breast cancer [44], lung cancer [45], ovarian cancer [46], pancreatic cancer [47], esophageal cancer [48], prostate cancer [49], and other tumors (Table 1).CtBP is involved in various processes such as the Warburg effect of tumor cells, epithelialmesenchymal transition (EMT), and self-renewal of tumor stem cells, which promotes tumorigenesis and development [50]. In addition, CtBP is a target of several tumor suppressors, including the p14/p19ARF tumor suppressor [51][52][53].It has also been found that 4-methylthio-2-oxobutyric acid (MtoB), a substrate of CtBP, can act as an inhibitor of CtBP at high concentrations and produce toxicity to cancer cells.In a human colon cancer cell xenograft model, MtoB treatment reduced tumor burden and induced apoptosis [40].Moreover, as demonstrated by Li et al., high levels of CtBP expression promoted the therapeutic efficacy of cisplatin.The dimerization status of CtBP is a potential marker for predicting the sensitivity of ovarian cancer patients to platinum-based drugs and is also a target for improving the therapeutic efficacy of platinum-based drugs in ovarian cancer [54].In addition, CtBP and its negative regulator ARF were negatively correlated in resected human colon adenocarcinoma [53].CtBP2 interacts with transcription factor 4 (TCF-4) to activate its transcriptional activity causing activation of the downstream target gene β-catenin, which leads to cell migration [55].In addition, recently Wu et al. found that Snail (Sna) acts as a transcription factor involved in EMT and tumor invasion.The deletion of CtBP inhibits Ras/Sna-induced tumor invasion and Sna-mediated invasive cell migration.Their data pointed out that CtBP and Sna may form a transcriptional complex that regulates JNK-dependent tumor invasion and cell migration in vivo [56], suggesting that CtBP may be a useful therapeutic target for human malignancies [44,[57][58][59][60]. Table 1.The correlation between CtBP and tumors and possible mechanisms of action have been demonstrated. 
Hepatocellular carcinoma Involving in constituting a transcriptional repression complex to repress E-cadherin [57] Abdominal aortic aneurysm Modulating inflammatory responses and disrupting the matrix [58] Prostate cancer Regulating cell proliferation through the c-Myc signaling pathway [59] Gastric cancer The expression of CtBP2 is inversely correlated with the disease-free progression of gastric cancer [60] Breast cancer Inhibiting intracellular cholesterol abundance, thus increasing EMT and cell migration [44] Hepatocellular carcinoma (HCC) is one of the human malignant tumors with high morbidity and mortality [61].HCC has a significantly increased expression of CtBP2 compared with adjacent normal liver tissues [62].CtBP2 overexpression may be closely related to HCC development and progression, providing new clues for HCC diagnosis and treatment.In HCC cells, p19Arf binds to CtBP1 to inhibit cancer cell infiltration [51].Studies have found that the transcriptional co-repressor CtBP1 may be a key factor in the transcriptional repression complex involved in the repression of E-cadherin expression [57].These results showed that CtBP correlated with HCC grade and could be used as a prognostic factor for tumors. Abdominal aortic aneurysm (AAA) is a life-threatening disease that occurs in the aorta with a potential risk of rupture [63].A cohort study by Bai et al. found that the expression levels of CtBP1 and CtBP2 were significantly increased in the AAA mouse model.Microarray analysis of AAA-aorta showed elevated expression of five MMP genes (MMP1a, 3, 7, 9, and 12) and three proinflammatory cytokine genes (IL-1β, IL-6, and TNF-α) [58].Two CtBPspecific inhibitors, NSC95397 and Hipp, inhibited the labile activity of the CtBP in vivo and in vitro and reduced the expression of the MMPs and three proinflammatory cytokine gene expressions [22,64].This suggests the potential pharmacotherapeutic utility of CtBP inhibitors for AAA in modulating inflammatory responses and disrupting the matrix. Prostate cancer is the most common malignant tumor of the male genitourinary system, and also the second most common type of malignant tumor in men, after lung cancer [65].Recent studies have found that CtBP1 is highly expressed and mislocalized in metastatic prostate cancer, and reducing the expression level of CtBP1 can significantly inhibit prostate cancer proliferation and metastasis in vivo and in vitro [66].It was also found that the expression level of CtBP2 in prostate cancer tissues was higher than in normal tissues, which was closely associated with the malignant behavior of the tumor, indicated by elevated serum prostatic specific antigen (PSA) levels, advanced tumor stage (T3), high Gleason score, and positive extraprostatic extension [67].Inhibition of CtBP2 decreased the level of c-Myc protein, as well as that of HSPC111, a direct transcriptional target of c-Myc [59].Another study found that antimony promotes the proliferation of prostate cancer cells by activating the CtBP2-ROCK1 signaling pathway and enhancing the stability of the c-Myc protein.Specifically, CtBP2 transcriptionally regulates the expression of RhoC, a member of the RhoGTPase family, which results in enhanced kinase activity of ROCK1 and promotes the stability of the oncogene c-Myc [68].The above two results showed that CtBP2 can inhibit the development of prostate cancer through the c-Myc signaling pathway, implying that CtBP2 may have potential as a promising therapeutic approach for the treatment of prostate cancer. 
Gastric cancer, a prevalent malignant tumor of the digestive tract, is one of the most common cancers worldwide and is a common cause of cancer deaths [69].The RB Binding Protein 8 (RBBP8) has been reported to be involved in DNA double-strand break (DSB) repair in various cancers.However, its specific functions and related mechanisms in gastric carcinogenesis have not been systematically studied.Yu et al. discovered that RBBP8 plays a role in chromatin modification by suppressing the histone acetylation level of the P21 promoter through the recruitment of the CtBP co-repressor complex to the BRCA1 binding site [70].Another study pointed out that the expression of CtBP2 is inversely correlated with the disease-free progression of gastric cancer, indicating that CtBP2 plays a significant role in the progression of gastric cancer.In MAP3K8 (mitogen-activated protein kinase kinase kinase 8)-suppressed EBVaGC (Epstein-Barr virus-associated gastric carcinoma) cells, the expression of CtBP2 is significantly reduced.This suggests that MAP3K8 may influence gastric cancer progression by regulating the expression of CtBP2 [60].Moreover, drug resistance is a critical factor affecting the treatment of gastric cancer.The resistance of gastric cancer cells to anticancer drugs, such as cisplatin (DDP), remains a significant challenge to patient recovery.Wu et al. found that CtBP1 may enhance DDP resistance in gastric cancer cells by activating RAD51 expression, suggesting that CtBP1 knockdown could provide a novel therapeutic approach for the clinical treatment of gastric cancer patients [71]. As a common malignant tumor worldwide, breast cancer is an important cause of mortality [72].It was found that CtBP regulates cholesterol homeostasis, mainly by forming a complex with ZEB1 and inhibiting SREBF2 transcription.Moreover, CtBP inhibits intracellular cholesterol abundance, leading to increased EMT and cell migration, and cholesterol negatively regulates the stability of the TGF-β receptor on the cell membrane.They also found that the ability of TGF-β to reduce intracellular cholesterol was dependent on increased recruitment of the SREBF2 promoter by the ZEB1 and CtBP complex.It was ultimately concluded that high expression of CtBP and low expression of SREBF2 and HMGCR were significantly associated with high EMT capacity of the primary tumors and that elevated levels of CtBP in patients' tumors predicted shorter median survival [44]. In summary, CtBP plays an important role in the occurrence and development of cancer, and the study of CtBP function and regulatory mechanisms is of great significance for cancer treatment and prevention. CtBP and Viruses As mentioned earlier, CtBP is a highly conserved transcriptional co-repressor molecule.It was first identified by its interaction with the C-terminus of the adenovirus E1A protein, hence the name carboxy-terminal binding protein [1].Researchers found that all transcriptional regulators that bind to CtBP possess the motif PXDLS [73], suggesting that a variety of proteins containing this motif, such as EBNA3C, a protein expressed by the Epstein-Barr virus, may interact with CtBP [74].However, for viruses such as HIV, although direct interactions between CtBP and the virus have not been particularly emphasized, a role for CtBP in influencing the biochemical processes of these infected cells is evident.General roles in cellular transcription and metabolism suggest that CtBP may influence viral replication and cellular transformation processes. 
Adenovirus Adenovirus type 2/5 is a DNA virus associated with tumorigenesis that primarily infects terminally differentiated epithelial cells of the upper respiratory tract.Due to low cytokinesis activity, adenoviral infection induces cells to synthesize the raw materials necessary for viral replication by expressing specific proteins [75].E1A is the first viral protein expressed in adenovirus replication, which consists of two major isoforms encoded by two differently spliced mRNAs.Of these, the 12S mRNA encodes the small E1A protein of 243 residues (243R) and the 13S mRNA encodes the large E1A protein of 289 residues (289R) [76,77].Both proteins are highly conserved throughout evolution and contain two exons sharing the conserved region 1 (CR1), CR2, and CR4, with the 289R protein also having a CR3 structural domain consisting of 46 amino acids [77][78][79]. The N-terminus of the E1A protein is essential in the transformation of baby rat kidney (BRK) cells.However, the C-terminal encoded proteins appear to be dispensable of transforming activity [80].However, according to some studies, the C-terminus can inhibit N-terminal oncogenic activity [79].Schaeper U et al. found that a short C-terminal sequence of E1A governs the oncogenesis-restraining activity of exon 2 [81].Kuppuswamy G et al. used the E1A mutants, which lacked a C-terminus or had a mutation in the C-terminus, to determine its transformation activity.Results showed that these mutants exhibited a hypertransformation phenotype in a synergistic transformation assay with the T24 ras oncogene [82].Other studies show that CtBP interacts with the C-terminal region of the adenovirus E1A protein and that this interaction is required for effective activation of the E1A response gene, suggesting that E1A may block CtBP-mediated inhibition [83,84].CtBP is a phosphorylated protein, and its phosphorylation level is regulated during the cell cycle, suggesting that it may play an important role in cell proliferation and immortalization.By studying small deletion mutants within exon 2 of the E1A gene, Boyd et al. found that mutants with residues 225-238 deletion were highly defective in immortalization, implying that the 14-amino-acid region may contain functions important for the negative regulation of tumorigenesis and metastasis.To shed light on this issue, they constructed a chimeric gene encoding a fusion of the C-terminal amino acid of E1A with bacterial glutathione-S-transferase (GST), and analysis of the GST-E1a C-terminal mutant protein showed that the interaction of CtBP with the E1A protein plays a key role in adenoviral replication and oncogenic transformation [1]. The truncated C-terminal sequence of the E1A protein is able to control the tumorigenesis inhibitory activity of exon 2, which binds to the cytosolic phosphoprotein CtBP via the 5 amino acid motif PLDLS (Pro-Leu-Asp-Leu-Ser) [85].This sequence is more conserved in the E1A protein of human adenovirus.To further understand the mechanism underlying the interaction between E1A and CtBP leads to tumorigenesis suppressor activity, Schaeper et al. searched for additional cellular proteins in complex with CtBP and reported the cloning and characterization of the CtIP protein.The CtIP protein binds to CtBP via the PLDLS motif, whereas the E1A exon 2 peptide containing the PLDLS motif disrupts the CtBP-CtIP complex.The results suggest that the tumorigenesis inhibitory activity of E1A exon 2 may be related to the disruption of the CtBP-CtIP complex through PLDLS motifs [81]. 
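As a practical aside on the PLDLS (Pro-Leu-Asp-Leu-Ser) and, more generally, PXDLS recognition discussed above, such motifs can be located in a protein sequence with a simple pattern scan. The following minimal Python sketch assumes a plain one-letter amino-acid string as input; the example sequence is invented for illustration.

import re

# Pro-X-Asp-Leu-Ser: proline, any residue, aspartate, leucine, serine.
PXDLS = re.compile(r"P[A-Z]DLS")

def find_pxdls(seq):
    # Return (start_position, matched_motif) pairs for every PXDLS-type motif in seq.
    return [(m.start(), m.group()) for m in PXDLS.finditer(seq.upper())]

example = "MKTAYIAKQRPLDLSGGVEDD"   # invented toy sequence containing a PLDLS instance
print(find_pxdls(example))         # [(10, 'PLDLS')]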
CtBP represses transcription by binding to chromatin-modifying enzymes, such as histone deacetylases and KDM1A H3K4 demethylases [86].Meanwhile, CtBP is highly homologous to NAD-dependent D2-hydroxyacid dehydrogenases (D2-HDHs), and the experimental results of Balasubramanian et al. demonstrated that NADH induced a conformational change in CtBP, which enhanced the interaction of CtBP with E1A, suggesting that intracellular levels of NADH during viral infections can recruit CtBP to influence the E1A activity [87,88].When the NADH/NAD + ratio is high in the cell, the NADH-bound dimeric form of CtBP causes global inhibition, thereby maintaining the balance and homeostasis of many cellular processes, under the cell surveillance of p53 and NF-κB.In contrast, in the presence of a low NADH/NAD + ratio, CtBP without NADH monomer blocks p53 function and NF-κB-mediated transcription.In the absence of p53 and NF-kB cellular surveillance, low NADH/NAD + ratios also disrupt homeostatic enzymes and homeostasis, leading to global instability [89].In conclusion, for adenoviruses, CtBP acts as a co-inhibitor and modifies E1A protein expression, which in turn affects viral replication and immortalization of primary epithelial cells [90] (Figure 2).cellular surveillance, low NADH/NAD + ratios also disrupt homeostatic enzymes and homeostasis, leading to global instability [89].In conclusion, for adenoviruses, CtBP acts as a co-inhibitor and modifies E1A protein expression, which in turn affects viral replication and immortalization of primary epithelial cells [90] (Figure 2). Future research should aim to elucidate the precise molecular mechanisms by which CtBP influences adenovirus replication and transformation.This could lead to the development of targeted antiviral therapies that inhibit CtBP-E1A interactions, potentially controlling adenovirus-related diseases. Epstein-Barr Virus The Epstein-Barr virus (EBV) is a member of the gammaherpesvirus subfamily [91].EBV infects resting B-lymphocytes in vitro and drives them to proliferate in the form of a lymphoblastoid cell line (LCL), which has been used as a model to study the development of EBV lymphomas [92,93].EBNA3C (Epstein-Barr virus nuclear antigen 3C) is one of the virally encoded EBV nuclear antigens known to play multiple roles in viral replication, Future research should aim to elucidate the precise molecular mechanisms by which CtBP influences adenovirus replication and transformation.This could lead to the development of targeted antiviral therapies that inhibit CtBP-E1A interactions, potentially controlling adenovirus-related diseases. 
Epstein-Barr Virus The Epstein-Barr virus (EBV) is a member of the gammaherpesvirus subfamily [91].EBV infects resting B-lymphocytes in vitro and drives them to proliferate in the form of a lymphoblastoid cell line (LCL), which has been used as a model to study the development of EBV lymphomas [92,93].EBNA3C (Epstein-Barr virus nuclear antigen 3C) is one of the virally encoded EBV nuclear antigens known to play multiple roles in viral replication, transformation of infected cells, and immune evasion.EBNA3C can interact with a variety of transcriptional repressor molecules to regulate the transcription of host genes [94].Earlier studies have shown that EBNA3C interacts with many transcriptional repressors and activators that regulate the transcription of host genes when recruited to EBNA3C binding sites [74], among which CtBP acts as a metabolically induced transcriptional repressor, whose dimerization and repressive activity is dependent on NADH binding [95].Ohashi M et al. studied how EBV protein EBNA3C promotes B-lymphocyte transformation.However, they found that, contrary to previously proposed models, EBNA3C does not recruit CtBP to the promoters of these genes, and that the interaction of EBNA3C with CtBP is important for both EBNA3C-mediated activation and repression of host genes [93].Other studies have shown that EBNA3C also interacts with CtBP1 through its C-terminal PLDLS motif and that this interaction appears to be important for some EBNA3C repressions [96]. As previously described, CtBP is a metabolism-sensing transcriptional repressor and can interact with EBNA3C via motif PLDLS.These results suggest that CtBP binds to the promoters of host genes without the presence of EBNA3C and that EBNA3C can interfere with the repressive function of CtBP by interacting with CtBP, leading to EBNA3Cmediated up-regulation of host genes, whereas the interaction of EBNA3C with CtBP may be important for the role of the p300 coactivator in certain EBNA3C-induced recruitment on genes is important [93].Interestingly, the results of this experiment show that EBNA3C does not appear to promote CtBP recruitment, but rather interferes with CtBP inhibitory activity to up-regulate certain host genes, meaning that interactions with CtBP-repressor proteins are more important for the ability of EBNA3C to induce host gene expression than EBNA3Cmediated repression.These findings reveal an important role for CtBP in the regulation of host gene expression by EBNA3C and a novel mechanism for how EBNA3C activates host genes by interfering with the repressive function of CtBP [74, 93,94].Understanding how EBNA3C disrupts CtBP's repressive functions offers a potential avenue for therapeutic intervention.Future studies should focus on developing inhibitors that prevent CtBP-EBNA3C interactions, which could help in controlling EBV-associated cancers. 
Hepatitis B Virus Due to the hepatophilic nature of Hepatitis B Virus (HBV), the replication process involves the conversion of HBV incomplete closure of double-stranded DNA into covalently closed circular DNA (cccDNA) after entry into the hepatocyte using the cellular DNA repair reaction to form the viral microchromosome [97].Under the regulation of HBx, cccDNA transcribes HBV mRNA and pregenomic RNA (pgRNA) [98].Stochastic integration of HBx genes with the host genome may lead to proto-oncogene activation and tumor suppressor gene inactivation.CtBP binds to HBV and affects metabolism and replicative gene expression in hepatocytes.The CtBP-CtIP (carboxy-terminal binding protein-responsive protein) complex may link the transcriptional repression activity of CtBP to the DNA repair function of CtIP, which can interact with CtBP through a conserved sequence in its N-terminal region.Although the exact mechanism and outcome of this interaction are not fully understood, it has been suggested that this interaction may affect the function of CtIP in DNA repair [99].Liu et al. examined the mRNA and protein expression levels of CtIp in a HepG2 hepatocellular carcinoma cell line stably expressing HBx, and the results showed that the cell line was blocked in the G2/M phase of the cell cycle, and HBx down-regulated both CtIP protein expression level and mRNA expression level [100].Therefore, HBx can interfere with the expression of the tumor suppressor protein CtIp and alter the activity of the CtBP-CtIP complex, which in turn affects cellular DNA transcription activity and damage repair.Investigating the role of CtBP in HBV replication and its interactions with host proteins could reveal novel therapeutic targets.Developing small molecule inhibitors that disrupt CtBP-HBx interactions may provide a new approach to treating HBV infections and preventing liver cancer. Human Immunodeficiency Virus Human immunodeficiency virus (HIV) can invade the human brain during the early stages of infection [101].However, its interactions with the blood-brain barrier (BBB) cells remain poorly studied [102].Victor et al. 
found that CtBP is closely associated with HIV transcription.CtBP1 plays an important role in HIV-infected pericentromeric cells and is strongly associated with occludin levels and SIRT-1 activity.During the first 48 h of HIV infection, intranuclear translocation of CtBP1 increases with the improvement of HIV transcriptional efficiency.CtBP1 is a transcriptional co-repressor and NF-κB is a key regulator of HIV transcription.The NADH-dependent intranuclear translocation of CtBP1 is associated with reduced activity of SIRT-1 (deacetylase), which normally acts as a transcriptional co-repressor by deacetylating and inhibiting NF-κB activity.During early HIV invasion of the brain and interaction with the blood-brain barrier, there is a regulatory role for CtBP1 nuclear translocation by the brain pericyte tight junction protein occludin, with occludin levels decreasing by approximately 10% within 48h post-infection.Reduced levels of occludin correlate with an increase in the nuclear translocation of CtBP1.This leads to an increase in NADH levels, which in turn reduces NAD + -dependent intranuclear translocation of CtBP1, decreases SIRT-1 activity, and increases NF-κB acetylation and HIV transcription.SIRT-1 expression and phosphorylation levels were restored after the recovery of occludin levels [103].In pericytes with occludin overexpression, cytoplasmic levels of CtBP1 were increased and nuclear localization was reduced, suggesting that changes in the level of subcellular localization of CtBP1 allow HIV transcription to be affected.Targeting the regulatory role of CtBP1 in HIV transcription could lead to novel antiviral strategies.Future research should explore the development of therapies that modulate CtBP1's nuclear translocation and interaction with host proteins, potentially reducing HIV replication and its effects on the central nervous system. Marek's Disease Virus For Marek's disease virus (MDV), there is an interaction between the encoded protein MEQ (MDV EcoRI Q fragment) and CtBP1, which has been implicated in MDV-induced tumor formation [104].MEQ is the major viral oncogenic protein of MDV.It was found that MEQ can interact with CtBP through the PLDLS motif, and this interaction is critical for MDV oncogenicity.MDV mutants defective in CtBP interaction, constructed by reverse genetics techniques, showed a complete loss of oncogenicity in experiments but were still able to replicate.Thus, non-oncogenic MDV generated by mutating the CtBP interaction domain of MEQ has the potential to be an improved vaccine against pathogenic MDV infection [105].Zhao et al. analyzed the results by confocal microscopy and showed that MDV infection induced nuclear accumulation of Hsp70 and its co-localization with Meq.Subsequently, they confirmed the Meq-Hsp70 interaction using bidirectional immunoprecipitation using Meq-and Hsp70-specific antibodies [106].This suggests that the interaction between Meq and Hsp70 is important in the promotion of MDV tumorigenesis by CtBP.Developing CtBP inhibitors that prevent its interaction with MEQ could serve as an effective strategy for controlling MDV infections.Such inhibitors may also have potential as improved vaccines against pathogenic MDV strains. 
Zaire Ebola Virus CtBP can interact with BARS to influence cellular signal transduction and functional regulation [6].CtBP/BARS regulates macroendocytosis by phosphorylating LIMK and CtBP/BARS proteins to promote actin remodeling and macroendosome closure [107].It is shown that the Zaire Ebola virus (ZEBOV) utilizes a mechanism similar to macroendocytosis to achieve endocytosis by promoting local actin polymerization and membrane perturbation to invade host cells.This endocytosis is not dependent on clathrin, caveolae, or dynamin, but requires cholesterol-rich lipid rafts.CtBP/BARS plays an important role in ZEBOV (Zaire ebolavirus) viral infections.The CtBP/BARS protein is thought to be a factor that replaces dynamin in macropinocytosis, and it promotes the disconnection of nascent giant vesicles from the plasma membrane.It was found that inhibition of CtBP/BARS expression using siRNA significantly reduced ZEBOV infection, suggesting that CtBP/BARS plays an important role in ZEBOV entry into host cells and is a key regulator of macropinocytosis [6,108].By inhibiting the function of CtBP/BARS, ZEBOV infection can be significantly reduced, providing a potential target for future therapeutic options.Targeting CtBP/BARS-mediated pathways could provide a novel therapeutic approach for Ebola virus infections.Future research should focus on identifying and developing inhibitors that specifically disrupt CtBP/BARS function, potentially reducing the viral load and improving patient outcomes. Conclusions and Prospects In summary, CtBP plays a significant role in a variety of tumorigenesis and viralrelated diseases, but the specific pathways of action are unclear and the mechanism of interaction with a variety of cancer molecules or viral antigens is unknown, which makes CtBP play different roles in different tumors or viral replication processes.Meanwhile, the two members of the CtBP family, CtBP1 and CtBP2, have different molecular structures and different binding targets, which ultimately means that they show different physiological activities, even in the same tumor or viral disease.For example, on the one hand, in hepatocellular carcinoma, CtBP1 acts as a transcriptional co-blocker to inhibit cellular expression of E-cadherin, which results in the loss of intercellular adhesion and promotes pathological processes such as epithelial-mesenchymal transition, cancer cell migration and invasion into the stroma.On the other hand, CtBP2 may inhibit the expression level of oncogenes through the formation of transcriptional co-repressors, thereby promoting hepatocellular carcinoma.The reason for this situation may be that CtBP1/2 are regulated by different substances in different tumor microenvironments.However, for adenovirus infection, instead of promoting tumorigenesis, the high expression of CtBP served to inhibit the growth of cancer cells.Moreover, in virus-infected cells, CtBP activity is also virus-specific, including affecting host cell gene expression (EBV, HBV), influencing viral replication (HIV), regulating viral entry into host cells (ZEBOV), etc.The diversity of physiological roles of CtBP may be related to the differences in surface antigens of the viruses and the types of virus-infected host cells. 
In this review, we emphasize the mechanism of interaction between CtBP and adenovirus E1A protein.We found that in adenoviruses, CtBP1 and CtBP2 have opposite physiological roles, i.e., exon 1 promotes tumorigenesis, whereas exon 2 acts as a repressor of exon 1, inhibiting cellular immortalization and tumorigenesis, and exon 2 also has the biological activity of promoting adenoviral replication.By comparing the effects of CtBP in different tumors and viruses, we found that CtBP showed completely different biological activities, and sometimes the two conclusions seemed to contradict each other, suggesting that the specific mechanism of CtBP's effects in different tumors and viruses is still not completely clear, and needs to be further studied in the future. Given the important role of CtBP in a variety of viral infections and tumorigenesis and development, future research will focus on exploring CtBP in greater depth as a target for clinical disease therapy.This includes exploring the molecular mechanisms by which CtBP interacts with specific viral proteins, the role of CtBP in the formation of the tumor microenvironment, and how these interactions affect tumor progression, viral replication, and host immune responses.Meanwhile, more detailed studies on the mechanism of CtBP action in different types of viral infections and cancers will be carried out in the future to develop and test small molecule inhibitors targeting the CtBP site of action.Ultimately, future work will focus on clinical trials that will advance CtBP inhibitors, evaluate their safety and efficacy in cancer patients, and explore their potential to be used in combination with other therapeutic approaches, facilitating their translation to clinical applications and providing new strategies and approaches for cancer therapy and the treatment of virus-associated tumors. Figure 2 . Figure 2. Interaction of CtBP with adenovirus and its role in tumor transformation and viral replication.CtBP, C-terminal binding protein; CR, conserved region; Ras, rat sarcoma; pRB, retinoblastoma; p300/CBP, transcriptional regulators; NADH, nicotinamide adenine dinucleotide; PLDLS motif, Pro-Leu-Asp-Leu-Ser.The adenovirus-expressed E1A protein consists of 289 amino acid residues, of which amino acids 1 to 186 are encoded by exon 1 and amino acids 187 to 289 are encoded by exon 2. On the one hand, transcriptional regulators such as Ras, pRb, and p300/CBP can enhance the tumorigenic-promoting effect of exon 1.On the other hand, exon 2 can inhibit the upstream pathway of exon 1 action, inhibiting cellular immortalization and tumorigenesis, and at the same time, exon 2 also has the biological activity of promoting adenovirus replication.CtBP, as a transcriptional co-repressor protein, can interact with the C-terminal region of the adenovirus E1A protein through the PLDLS motif.During viral infection, intracellular levels of NADH recruit CtBP to affect the activity of the E1A protein and enhance the interaction of CtBP with the E1A protein. Figure 2 . Figure 2. Interaction of CtBP with adenovirus and its role in tumor transformation and viral replication.CtBP, C-terminal binding protein; CR, conserved region; Ras, rat sarcoma; pRB, retinoblastoma; p300/CBP, transcriptional regulators; NADH, nicotinamide adenine dinucleotide; PLDLS motif, Pro-Leu-Asp-Leu-Ser.The adenovirus-expressed E1A protein consists of 289 amino acid residues, of which amino acids 1 to 186 are encoded by exon 1 and amino acids 187 to 289 are encoded by exon 2. 
9,009
2024-06-01T00:00:00.000
[ "Medicine", "Biology" ]
DIGITAL IMPERATIVES FOR THE DEVELOPMENT OF THE FINANCIAL MARKET OF THE NATIONAL AND WORLD VECTOR bstract 365 ISSN 2306-4994 (print); ISSN 2310-8770 (online) UDC 004:336.76(075.8) Vovchak O. Doctor of Economics, Professor, Banking University, Lviv, Ukraine; e-mail<EMAIL_ADDRESS>ORCID ID: 0000-0002-8858-5386 Kravchenko A. Doctor of Economics, Associate Professor, Banking University, Lviv, Ukraine; e-mail<EMAIL_ADDRESS>ORCID ID: 0000-0001-5733-6582; Andreykiv T. Ph. D. in Economics, Associate Professor, Lviv University of Trade and Economics, Ukraine; e-mail<EMAIL_ADDRESS>ORCID ID: 0000-0001-5353-248 Introduction. In today's world, economic processes undergo permanent globalization and digital transformation changes. Financial market is the «lifeblood» of the economic environment on a national and global scale. At the same time, an acute problem in the formation of an effective global financial system is the underdevelopment of the modern financial market against the backdrop of digital imperatives, namely: a low level of integration of national financial markets in the global financial cyberspace, and the introduction of innovation and information technologies. The global financial space is characterized by globalization and transformation, digital changes. The proper operation of the domestic financial market leads to the development and integration of the national economy in the global environment. The globalization and digital processes are imperative to cover the global, national economic sphere, which influences the emergence of crisis phenomena and problems of inadequate financial market dynamics. Ensuring the appropriate transformation of countries' economies requires the development of the financial market in terms of digital imperatives. The basis of scientific approaches to the study of the modern financial market under the society digitalization is not fully formed. Therefore, there is an urgent need to develop new scientific theoretical and methodological, applied aspects of the study of financial market trends in terms of digital imperatives. Literature review. Trends of the financial market, in particular exchange and OTC markets under the impact of the digital imperative, are the object of the studies of both scientists and practitioners: the trends in the financial market and the impact of digital imperatives (E. Brynjolfsson, 2000; U. Huws, 2014; A. Dzhusov, 2017), in particular artificial intelligence, robotics, digital trading of derivatives on exchange and OTC markets (F. Naurois & S. Koppikar, 2020); an important trend in digitalization in the financial market, mobile banking, blockchain, Internet of things, robotically (V. Bastide des, S. Rao & J. Marous, 2019); trends in the modern financial market dominated by the digitalization of society, in particular the need to allocate financial engineering, crowdfunding, financial networking (V. Milovidov, 2017), banking networking (A. Capponi, F. Corel, J. Stiglitz, 2020), the creation and use of electronic networks (D. Rogers, 2018), in particular digital neural networks (Bestens D., 1997); the electronic financial market, in particular exchange-traded and OTC forms in the imperative influence, particularly because of the availability of mining digital currencies that requires the development of new methods of forecasting their prices (D. Heller & E. 
Truman, 2017); introduction of digital technology as the modern trend in the financial market, which are more likely to be revealed on the stock market, the OTC, in particular cryptocurrency (E. Rosenberg, P. Harrell, G. Shiffman and S. Dorshimer, 2019); C) the importance of trends in the digital development on impact of the financial market is undeniable that is revealed in financial innovations (J. Ackermann, 2013; E. Rosenberg, P. Harrell, G. Shiffman, 2019), in digital financial technologies (D. Arner, D. Zetzsche, R. Buckley, 2017), which necessitate the use of digital platforms (D. Metz, 2019), the digital financial services (J. Cooke, 2018), the new financial products (K. Amosson, 2017) are one of the major trends of financial market development in terms of digital imperatives. Despite the number of publications on the study of the financial market, the issue of trends of the financial market, which makes stock and over the counter of digital imperative remains insufficiently disclosed and requires further research. Methodology. In order to identify trends in the operation of the financial market of Ukraine we have determined the dynamics of changes in inflation and GDP of Ukraine, quantity, volume and rate of growth of financial assets of banks, stock exchanges, financial companies, trends in price movements of precious metals and foreign currency trading in Ukraine. To determine the impact of globalization and digital transformation of the operation of the financial market of Ukraine, the following indicators are studied: dynamics of volumes of concluded derivatives on the world stock exchange financial market (in particular the subjects of financial market of Ukraine) on interest rates, securities, stock indices, currencies, precious metals; indicators of changes in the movement of the OTC markets group (where traders of Ukraine are actively ingaged in trading), credit derivatives, trading capital, gold, currency; change of indicators of exchange rates EUR / USD (the National Bank of Ukraine is a participant of the Forex). The methodology of the study of the financial market is a totality of methods of nonlinear dynamics, time series of economic growth rates, market fluctuations and divergences (to study the trends of the financial market of Ukraine and the world market of financial derivatives); correlation and regression analysis, extrapolation, analysis, market shocks, technical (charts of the dynamics of prices and trading volume, MACD, stochastic oscillator) and fundamental analysis at the macroand mega-levels (to determine the trend of the OTC markets group, Forex). The methods of evaluation: comparison of GDP with the volume of loans (excluding the loans and types of borrowers) is disclosed in the economic literature, in particular in the works of K. Sheedy; the approaches of definition of a currency devaluation, the level of its depreciation (without specifying algorithmic performance period) are analyzed by A. Willman; the methods of studying the dynamics of time series through the definition of a general growth rate are identified (without selection indicators for the criterion «influence» and clarify its period and group) by B. Park, E. Mammen, K. Pauwels, Currim I.; in the methods of technical analysis only a few features are mentioned by D. Brown, R. Jennings, K. Kavajecz, E. 
Odders-White (the authors draw attention to the significance of past asset prices without revealing the technical scientific approach in detail; asset prices are analyzed technically only to determine liquidity); in the methods of fundamental analysis only its separate aspects are identified by E. Swanson, L. Rees, L. Juarez-Valdes, B. Lev, S. Thiagarajan (used for evaluation: financial reports of companies; the level of corporate services, development and risk of companies); approaches to determining the actual prices of assets in the economic literature are formed on the basis of determining income assets (such methods neither take into account all factors influencing the asset nor consider digital financial assets), in particular in the works of R. Bansal and I. Shaliastovich. In our study, we calculated the devaluation of the currency as an algorithmic ratio of the current exchange rate to the exchange rate of the previous period, and determined and clarified the ratio of the country's GDP (gross domestic product) to its loans and borrowers. In the study, we determined the rate of economic growth of time series of the financial market with a preliminary selection of indicators on the criterion of usefulness (trend impact), with the specification of the period and its group: EGR = IiCPji / IiPPji, where EGR is the index of growth rates of the influential indicator of the financial market; IiCPji is an influential indicator of the j-group of the current i-period of the financial market; IiPPji is an influential indicator of the j-group of the previous i-period of the financial market. In the methods of technical analysis, we distinguish the tools: digital charts, digital figures (patterns), and algorithmic digital indicators. In the methods of fundamental analysis, we distinguish the tools: analysis of the economic and political situation at the micro-, meso-, macro- and mega-levels; cluster digital analysis; trend analysis; structural analysis; analysis of cyber threats and biorisks of the cyberfinance sphere; social analysis; informational-psychological analysis; forecasting of force majeure. To determine the future price of a digital financial asset, modeling is used. We have identified the following algorithmic interdependence: with an increase in the IDFA index, the predicted price of a digital financial asset increases, and vice versa, where IDFA is the index of forecasting the digital financial asset price, built from the volume of open interest (positions); Vci, the amount of closed interest (positions - transactions); Pdfac, the current price of a digital financial asset; and APdfa, the average price of a digital financial asset. In turn, we have determined that the revenue from a cryptocurrency transaction is formed in such an algorithmic interdependence, where ITj is the income obtained from the transaction with the j cryptocurrency; RECj is the rate of the j cryptocurrency; VECj is the volume of the j cryptocurrency; MCECij is the cost of i mining of the j cryptocurrency; and ICR is the cyberrisk index. Our study is based on the scientific provisions of fundamental and applied research in the fields of finance, economics and cybernetics. Results and discussion. 
The study of the financial market of Ukraine, the market of global financial derivatives, Forex and the OTC Markets Group in terms of digital imperatives has identified the need for an analysis of trends in the development of the financial market of Ukraine under the digitalization of society and of its separate segments - bank (credit, deposit, currency), stock, financial derivatives, precious metals, parabank (credit, insurance) - as well as of global digital markets and of the trends of their most electronic types - the market of world financial derivatives, the OTC Markets Group, Forex, and the cryptocurrency market. It is expedient to conduct such a study of financial market development over certain periods of time that demonstrate the formation of modern market trends and patterns. To estimate trends in financial market development in 2010-2019, we have selected the following main imperatives (characteristics), the results of which are given in Table 2. The calculations have been made using the statistical data of the National Bank of Ukraine (NBU), the Ministry of Finance of Ukraine, the State Committee of Statistics of Ukraine, the National Securities and Stock Market Commission of Ukraine, and the National Financial Services Regulatory Authority. The defining feature of the current stage of financial market development is digitalization. The analysis of trends in the development of the digital financial asset market and of the main exchange and over-the-counter markets, by the prevailing type of electronic trading at the global level, as a component of the globalization and digital process, is presented in Fig. 1-7 [10] (dynamics of global stock instrument trading volumes up to 2019, with their peak values and regional structure). The study of the forex market using the methods of fundamental analysis has shown that during the period from 1999 to 2019, the determinants of changes in the price trends of the EUR/USD currency pair were: the political, social, financial and economic, defense-oriented conditions and the state of natural and technological security of the EU countries and the USA; the financial situation of the EU and the USA (reflected in the economic indicators of the European Central Bank and the US Federal Reserve System); the policy, reports and speeches of the heads of the European Central Bank and the US Federal Reserve System; the budget, GDP level, inflation index, volumes of gold and foreign exchange reserves, balance of payments, currency stability index, and economically active population of the EU countries and the USA; the level of government bond yield of the USA and EU member countries; decisions adopted (on carrying out reforms) at the general meetings of the members of the European Central Bank and the US Federal Reserve System; the stability of the EU and US financial systems; meetings of the European Commission and forecast economic indicators on the eurozone development; legislative changes made by the US Congress, the European Parliament, as well as the Council of the European Union; business activity of the forex market participants; volumes of trading operations, strategically formed by institutional and individual speculation on the forex market; the condition of the global commodity and financial markets; global financial crisis events; the political and military situation in the countries of the East; and Brexit. 
For the period from 2010 to 2019, the weighted average monthly exchange rates of BTC/USD and ETH/USD showed a trend of gradual growth at the beginning of the period, with the highest values at the beginning of 2018, a price decline at the beginning of 2019, and an adjustment during 2019 and at the beginning of 2020. The determinants of the digital currency market development were: market conditions, open interest in cryptocurrency, the overbought and oversold conditions of the market, trading volumes, the development of digital technologies, public digitalization, and demand for decentralized mining of cryptocurrency with further hashing with the use of cryptographic methods. During the studied period from 2010 to 2019, the following trends were observed in the OTC Markets Group (see Table 3): fluctuations in trading volumes and changes in the price range of the OTC financial derivatives; an overriding decline in the market value of credit financial instruments; and fluctuations in the value of stock capital, interest rates, gold, and currency. Thus, given the significant impact of digital imperatives on the development of the financial market of Ukraine and the global market of financial derivatives, Forex and the OTC Markets Group, it should be noted that an important aspect of financial cyberspace is the trend towards digital-technology financial relationships of market participants (see Fig. 4-7), in which, in turn, there is an economic interest. The study of financial market trends in terms of the digitalization of society, in the vector of Ukrainian policy, has revealed the features and trends of its development in certain periods, which are characterized as follows: at the beginning of the period under study (the second half of 2013-2015) there was a market recovery, the period 2015-2016 was characterized by crisis phenomena in the market followed by its revitalization, and the end of 2019 could be described as the period of increased digital imperatives, which were formed under the influence of political, economic, digital-process, social and force majeure determinants. The study of digital imperatives of financial market development on the global scale has revealed, among the most digitalized financial markets, the highest growth rate in digitalized trading in futures, Forex and interest rates on the global financial exchange market (in 2019, the largest percentage of digitalized financial trading was in Asia (42%) and North America (30%)), in currency and securities in the OTC Markets Group, and in bitcoins on the cryptocurrency market, as well as significant changes in the long-term trends in the forex market; it has also identified the patterns of «triple bottom», «head - shoulders» and «double top». Changes in currency exchange rate trends in the forex market were underpinned by technical and fundamental determinants. The study of digital imperatives of the financial market has enabled economic-mathematical modeling of forecasting the digital financial asset prices (formed by the indicators of open interest (positions), closed interest (positions - transactions) and the current price of a digital financial asset) and of future income depending on the cryptocurrency transaction (formed by the indicators of rate, volume, costs of mined cryptocurrency and cyberrisk), and has revealed a close dependence of the cryptocurrency rate on trading volume (open interest) based on a coefficient of elasticity amounting to 2.7. 
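The elasticity figure quoted above can, in principle, be reproduced by regressing the logarithm of the cryptocurrency rate on the logarithm of open interest; the slope of that log-log regression is the coefficient of elasticity. The sketch below only illustrates the calculation; the function name and the data series are hypothetical placeholders, not the series used in the study.

```python
import numpy as np

def price_volume_elasticity(rates, open_interest):
    """Elasticity of the cryptocurrency rate with respect to trading volume
    (open interest), estimated as the slope of a log-log regression."""
    x = np.log(np.asarray(open_interest, dtype=float))
    y = np.log(np.asarray(rates, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)   # d(ln rate) / d(ln volume)
    return slope

# Hypothetical monthly observations (rate in USD, open interest in contracts).
rates = [4200, 5100, 6400, 8000, 9600]
open_interest = [11000, 12100, 13300, 14600, 15800]
print(round(price_volume_elasticity(rates, open_interest), 2))
```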
The Ukrainian financial market, the market of world financial derivatives, Forex and the OTC Markets Group have been changing in both directions, with decreasing and increasing indicators that characterize their development, under the influence of political, financial and economic determinants, as well as social development, the military situation, the impact of globalization, the integration vector, the economic interest of market participants, and the development of the digitalization of society. The study of digital imperatives of the trends in the development of the financial market of Ukraine and the market of world financial derivatives, Forex and the OTC Markets Group has identified the need for qualitative changes regarding restrictive determinants, such as: the development of the digital economy on a national and global scale; global political stability; the development of a legal framework for digital financial assets and financial cybersecurity; the introduction of up-to-date digital financial instruments; and the strengthening of the protection of the blockchain of financial transactions. Digital imperatives of financial space trends have influenced the emergence of digital financial technologies, such as: technological things, electronic banking, digital neobanking, cloud technologies, robotic financial transactions, blockchain, legaltech, pertex, insurtech, high-frequency algotrading, and electronic trading platforms. It is advisable to carry out further research on forming an effective financial and economic strategy for the development of the financial market of Ukraine under digitalization that will ensure the sustainable development of the financial market and the economy at the meta-, mega-, macro- and micro-levels.
3,851.4
2021-11-08T00:00:00.000
[ "Economics", "Computer Science", "Business" ]
Generation of double-scale femto / picosecond optical lumps in mode-locked fiber lasers We observed generation of stable picoseconds pulse train and double-scale optical lumps with picosecond envelope and femtosecond noise-like oscillations in the same Yb-doped fiber laser with all-positivedispersion cavity mode-locked due to the effect of non-linear polarization evolution. In the noise-like pulse generation regime the auto-correlation function has a non-usual double (femtoand picosecond) scale shape. We discuss mechanisms of laser switching between two operation regimes and demonstrate a good qualitative agreement between experimental results and numerical modeling based on modified nonlinear Schrödinger equations. 2009 Optical Society of America OCIS codes: (140.4050) Mode-locked lasers; (140.3510) Lasers, fiber; (140.7090) Lasers and laser optics: Ultrafast lasers. References and links 1. K. Tamura, H. A. Haus, and E. P. Ippen, “Self-starting additive pulse mode-locked erbium fiber ring laser,” Electron. Lett. 28, 2226-2228 (1992). 2. C. J. Chen, P. K. A. Wai, and C. R. Menyuk, “Soliton fiber ring laser,” Opt. Lett. 17, 417-419 (1992), http://www.opticsinfobase.org/ol/abstract.cfm?uri=ol-17-6-417. 3. V. J. Matsas, T. P. Newson, D. J. Richardson, and D. N. Payne, “Selfstarting passively mode-locked fiber ring soliton laser exploiting non linear polarisation rotation,” Electron. Lett. 28, 2226–2228 (1992). 4. A. Chong, W. H. Renninger, and F. W. Wise, “Propeties of normal-dispersion femtosecond fiber lasers,” J. Opt. Soc. Am. B 25, 140-148 (2008), http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-25-2-140. 5. F. W. Wise, A. Chong, and W. H. Renninger, "High-energy femtosecond fiber lasers based on pulse propagation at normal dispersion," Laser Photonics Rev. 1-2, 58-73, (2008). 6. V. L. Kalashnikov, E. Podivilov, A. Chernykh, and A. Apolonski, “Chirped-pulse oscillators: theory and experiment,” Appl. Phys. B 83, 503-510 (2006). 7. S. Kobtsev, S. Kukarin, and Yu. Fedotov. “Ultra-low repetition rate mode-locked fiber laser with highenergy pulses,” Opt. Express 16, 21936-21941 (2008). 8. S. Kobtsev, S. Kukarin, S. Smirnov, A. Latkin, and S. Turitsyn. “High-energy all-fiber all-positivedispersion mode-locked ring Yb laser with 8 km optical cavity length,” CLEO-Europe/EQEC-2009, CJ8.4. Munich, Germany, 14-19 June 2009. 9. M. Horowitz, Y. Barad, and Y. Silberberg, “Noiselike pulses with a broadband spectrum generated from an erbium-doped fiber laser,” Opt. Lett. 22(11), 799–801 (1997). 10. M. Horowitz and Y. Silberberg. “Control of noiselike pulse generation in erbium-doped fiber lasers,” IEEE Phot. Technol. Lett. 10, 1389-1391 (1998). 11. L. M. Zhao, D. Y. Tang, J. Wu, X. Q. Fu, and S. C. Wen. “Noise-like pulse in a gain-guided soliton fiber laser.” Opt. Express 15, 2145-2150 (2006). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-5-2145 12. S. Chouli and Ph. Grelu, “Rains of solitons in a fiber laser,” Opt. Express 17, 11776-11781 (2009). http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-17-14-11776 13. A. I. Chernykh and S. K. Turitsyn, “Soliton and collapse regimes of pulse generation in passively modelocking laser systems,” Opt. Lett. 20, 398-400 (1995). 14. A. B. Grudinin, D. N. Payne, P. W. Turner, L. J .A. Nilsson, M. N. Zervas, M. Ibsen, and M. K. Durkin, “Multi-fiber arrangements for high power fiber lasers and amplifiers,” United States Patent 6826335, 30.11.2004. 
#116846 $15.00 USD Received 8 Sep 2009; revised 18 Oct 2009; accepted 20 Oct 2009; published 27 Oct 2009 (C) 2009 OSA 9 November 2009 / Vol. 17, No. 23 / OPTICS EXPRESS 20707 15. G.P. Agrawal, Nonlinear fiber optics, (Academic Press, 2001). 16. A. Komarov, H. Leblond, and F. Sanchez, “Quintic complex Ginzburg-Landau model for ring fiber lasers,” Phys. Rev. E 72, 025604 (2005). 17. N. Akhmediev, J. M. Soto-Crespo, and G. Town, “Pulsating solitons, chaotic solitons, period doubling, and pulse coexistence in mode-locked lasers: Complex Ginzburg-Landau equation approach,” Phys. Rev. E 63, 056602 (2001). Introduction Nonlinear polarisation evolution (NPE) [1][2][3][4] is a commonly employed mechanism used to achieve stable mode-locking and ultra-short pulse generation in an all-positive dispersion cavity.In comparison to systems using semiconductor saturable absorbers, the NPE-based lasers are relatively easier to assemble and they are more robust against power damage.In addition, NPE-based laser systems allow simple adjustment of the saturable absorber action, and as a result, they can be effectively used for studies of different pulse generation regimes.Compared to soliton generation in the anomalous group velocity dispersion (GVD) regime, use of the normal dispersion allows one to achieve higher pulse energies without wave breaking [4][5][6].The key to achieving stable generation of high energy pulses is the management of nonlinear propagation effects in the cavity.Another new promising approach to ultra-high energy pulse generation just in fiber master oscillators introduced in [7] is to elongate a mode-locked laser cavity in order to achieve extremely low pulse repetition rates with considerable increase in pulse energy.Applying this technique we have recently demonstrated the generation of highly chirped short pulses with few-nanosecond envelope width and energy as high as 3.9 µJ [7] and 4 µJ [8] both in a hybrid (consisting of both bulk elements and fiber sections) and all-fiber master oscillators with optical length of the longcavity amounting to 3.8 and 8 km, respectively.Mode-locked fiber laser generating high energy pulses typically presents a multi-parametric nonlinear system that by changing device parameters might operate in a range of regimes: conventional soliton, dispersion-managed soliton (stretched pulse), self-similar pulse regime, all-normal dispersion highly-chirped pulse, multiple-pulse regimes and so on.In this work we present in more detail an unusual mode of operation of all-positive dispersion fiber lasers and demonstrate how a change of system parameters leads to switching between standard stable pulse train generation and more exotic regimes.In particular, we study both experimentally and numerically a generation of a doublescale picosecond lump envelope and a femtosecond scale noise-like oscillations inside the larger wave-packet.Such a lasing mode is characterised by an unusual double-feature shape with an auto-correlation function (ACF) comprising of a femtosecond-scale peak on a picosecond-scale pedestal.Though noise-like pulse generation regime has been studied for Erdoped fiber lasers [9][10][11][12], the nature of fine structure oscillations is not yet completely clear.In [10], the noise-like pulse generation was attributed to large birefringence of the laser cavity in a combination with the nonlinear field evolution.In [11], similar noise-like pulse operation was demonstrated in erbium-doped fiber laser with weak birefringence.It was conjectured in [11] that 
the formation of the noise-like oscillations is caused by the pulse collapse effect [13] and that such dynamics is a generic feature of a range of the passively mode-locking systems.The pulse collapse effect studied in [13] is an intermediate dynamical process that is caused by the third-order nonlinear gain term in the Taylor expansion of the nonlinear response of the effective saturable absorber.For some range of parameters, initially smooth field distribution is engaged in an explosive-like growth of the amplitude with corresponding compression.The collapse effect dynamics is more sensitive to initial conditions and as a result the forming field structures tend to be more random and complex compared to non-collapse pulse generation regimes.Though, evidently such fast growth of the field amplitude is saturated at some level by the higher terms in the expansion and other high-order effects, the resulting noisy and random asymptotic state could be quite different from more smooth regimes of the radiation intensity growth.Here we present experimental and numerical studies of the double-scale femto-pico-second optical lumps generation in Yb fiber laser and discuss in detail physical mechanisms behind the generation of mode switching in mode-locked all-positive-dispersion fiber lasers. Experimental setup The experimental set-up of the considered all-fiber ring-cavity laser is shown in Fig. 1.A 7-m long active ytterbium fiber with a 7-µm core was used as the laser active element.This fiber was pumped from one of the ends of the twinned fiber (GTWave technology [14]) with 1.5 W of 980-nm radiation from a laser diode.In order to couple the generated radiation out of the resonant cavity, a fiber-based polarisation beam splitter (FPBS) was used.In order to ensure uni-directional generation, a polarisation-independent fiber optical isolator (PIFI) was also introduced into the cavity.Mode locking was achieved due to the effect of non-linear polarisation evolution [1][2][3][4].For control of the polarisation state in the layout, two fiber-based polarisation controllers PC1 and PC2 were employed. For elongation of the laser cavity and boosting of the output pulse energy a stretch of passive SMF-28 fiber was inserted, so that the full resonator length came to 11.2 m (round trip time τ = 54 ns).All optical fibers in this layout had normal dispersion within the working spectral range of the laser.The average output power of the laser was limited by the power rating of FPBS and did not exceed 150 mW. 
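As a quick consistency check on the quoted cavity parameters, the 54 ns round-trip time follows directly from the 11.2 m resonator length and the group index of silica fiber; the index value used below is an assumed typical number, not one stated in the paper.

```python
# Round-trip time of the 11.2 m all-fiber ring cavity.
n_group = 1.45        # assumed group index of silica fiber (not given in the text)
length_m = 11.2       # full resonator length quoted above
c = 2.998e8           # speed of light in vacuum, m/s

tau_s = n_group * length_m / c
print(f"round-trip time ~ {tau_s * 1e9:.0f} ns")  # ~54 ns, consistent with the text
```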
Numerical simulations In the theoretical analysis of operation of considered laser schemes a standard approach is to apply numerical modelling based on the system of coupled modified non-linear Schrödinger Eqs.(1) [15]: where A x , A y are the orthogonal components of the field envelope, z is a longitudinal coordinate, t -time, γ = 4.7×10 -5 (cm•W) -1 -non-linear coefficient, g 0 -unsaturated gain coefficient, β 2 = 23 ps 2 /km-dispersion coefficient, P sat -saturation power for the active fiber, τ -time of cavity round trip.In our calculations we neglected the higher-order dispersion and linear birefringence of the fiber.The amplifier parameters were estimated from experimental measurements and taken equal to g 0 = 540 dB/km, P sat = 52 mW.To improve convergence of the solution to the limiting cycle, the numerical modelling included a spectral filter with a 30nm band-width, which exceeds considerably the typical width of generated laser pulses.The effect of polarisation controllers was taken into account by introduction of corresponding unitary matrices (see e.g.recent publication [16] and references therein).In the numerical modelling, in order to reduce the required computational time we shortened the laser cavity to 6 m.Furthermore we only varied three of the 6 possible polarisation controller parameters (α, β, ψ) using following matrices (2,3) for PC1,2 at ϕ = 2ψ + π /4: ( ) Results and discussion A remarkable feature of the system under consideration -a laser mode-locked due to NPE -is a possibility to obtain generation in a variety of ways and using different tuning of polarisation controllers, which is borne out by both the numerical modelling and experiment.Depending on phase shifts introduced by the polarisation controllers the energy and duration of laser pulses at the output may change their values by up to an order of magnitude.An immense variety of results generated in experiments and in modelling may be divided into two main types of generation regimes.The first type presents a well-known generation of isolated pulses (Figs.2(b), 2(d)) with bell-shaped auto-correlation function (ACF) and steep spectral edges (see Figs. 2(a), 2(c)).As the numerical modelling has shown, the pulse parameters obtained in this generation regime are stable and do not change with round trips after approaching those asymptotic values after the initial evolution stage.Since the cavity has all-positive dispersion, the generated laser pulses exhibit large chirp [4][5][6] and may be efficiently compressed with an external diffraction-grating compressor, which is confirmed by both the numerical results and the experiment. The more interesting, however, is that in addition to this standard generation regime, the considered laser scheme exhibits a different type of operation.For this second type of operation, an unusual double-structured ACF (femto-and pico-second) can be observed, see Figs. 3(b), and 3(e).Experimentally measured laser generation spectra of this type usually has a smooth bell-shaped appearance (see Fig. 3(a)).However, as the numerical simulation demonstrates, such smooth spectra are a result of averaging over a very large number of pulses, whereas the spectrum of an individual pulse contains an irregular set of noisy-like peaks (see the un-averaged spectrum in Fig. 3(d) shown with a grey line).In the temporal representation this type of generation corresponds to a picosecond wave packet consisting of an irregular train of femtosecond sub-pulses (see Fig. 
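The coupled modified nonlinear Schrödinger model of Eq. (1) is typically integrated with a split-step Fourier scheme. The sketch below is a deliberately simplified scalar version (one polarization component; no polarization controllers, saturable-absorber action or spectral filter), meant only to illustrate how dispersion, Kerr nonlinearity and saturated gain are alternated along the fiber; parameter values in the example are converted from the numbers quoted above (beta2 in ps^2/km, gamma in 1/(W km), g0 in 1/km, P_sat in W), and the seed pulse is purely illustrative.

```python
import numpy as np

def split_step(A, dz, n_steps, beta2, gamma, g0, P_sat, dt):
    """Scalar split-step Fourier propagation with GVD, Kerr nonlinearity and
    saturated gain; a reduced sketch of the model in Eq. (1)."""
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)           # angular-frequency grid, rad/ps
    for _ in range(n_steps):
        g = g0 / (1 + np.mean(np.abs(A) ** 2) / P_sat)     # gain saturation
        lin = np.exp((1j * beta2 / 2 * w ** 2 + g / 2) * dz / 2)
        A = np.fft.ifft(np.fft.fft(A) * lin)               # half linear step
        A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)   # full nonlinear step
        A = np.fft.ifft(np.fft.fft(A) * lin)               # half linear step
    return A

# Illustrative run: a weak Gaussian seed over a 6 m (0.006 km) model cavity.
t = np.linspace(-50.0, 50.0, 1024)                         # time grid, ps
A0 = 0.1 * np.exp(-t ** 2 / 25).astype(complex)            # field in sqrt(W)
out = split_step(A0, dz=1e-4, n_steps=60,                  # 60 x 0.1 m = 6 m
                 beta2=23.0,                               # 23 ps^2/km
                 gamma=4.7,                                # 4.7e-5 (cm W)^-1 = 4.7 (W km)^-1
                 g0=124.0,                                 # ~540 dB/km in 1/km
                 P_sat=0.052,                              # 52 mW
                 dt=t[1] - t[0])
```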
3(f)).Peak power and width of such sub-pulses stochastically changes from one round trip along the cavity to another, also leading to fluctuations in the wave packet parameters easily noticeable when observing the output pulse train in real time on the oscilloscope screen during experiments (see Fig. 3(c)).We do not observe any systematic change in generation parameters such as power or wave-packet duration even after several hours of generation.The absence of systematic drift of pulse parameters is typical for numerical simulations as well (in the latter case however much shorter pulse train of about only 5×10 4 was examined.)The discussed regime presents an interesting symbiotic co-existence of stable solitary wave dynamics and low-dimension stochastic oscillations.Note that similar structures have been studied in different context numerically in the complex cubic-quintic Ginzburg-Landau model in [17].The co-existence of stable steady state pulses and pulsating periodic, quasi-periodic or stochastic localised structures is a general feature of multi-parametric dissipative nonlinear system.Each particular nonlinear dynamic regime exists in a certain region of parameter space.Therefore, in systems that possess a possibility to switch operation from one region of parameters to another, one can observe very different lasing regimes.The regions of parameters where pulsating (periodic, quasi-periodic or stochastic) localized structures do exist might be comparable or even larger than regions of existence of conventional steady state solitons.Note that in the multi-dimensional space of parameters of the considered laser scheme, it is almost impossible to explore all possible operational regimes via direct modelling. Both experiment and simulation show that wave-packets generated in the double-scale femto-picosecond regime can be compressed only slightly and pulses after compression remain far from spectrally limited.As our experiments have demonstrated, after extra-cavity compression of these complex wave packets with the help of two diffraction gratings ACF of the resulting pulses has qualitatively the same double-feature shape.Switching between different modes of laser generation was done by changing the parameters of polarisation controllers.In order to clarify the physical mechanisms leading to quasi-stochastic oscillations and mode switching we have carried out numerical simulations of laser operation in the vicinity of the boundary of the stable single-pulse generation.While performing calculations, we introduced into the cavity fixed-duration pulses with different energies and analysed the resulting gain coefficient for the pulse over one complete round trip over the resonator (see black line in Fig. 4).At the input power P = 1 (in arbitrary units) the one-trip gain coefficient equals unity, which at negative curve slope corresponds to stable generation.Starting with P ~ 1.75 a.u.single-pulse generation becomes unstable (corresponding to positive slope of the black curve in Fig. 4).In this region an exponential growth of small intensity fluctuations can be observed and, over only several round trips of the cavity, an isolated picosecond pulse is decomposed into a stochastic sequence of femtosecond pulses.Because in this process a substantial change in the pulse form takes place the gain curves in Fig. 
4 no longer correspond to the real situation.Exponential power growth is quickly quenched and over a number of trips along the cavity (as a rule, from dozens to hundreds) an isolated picosecond pulse with stable parameters is formed in the cavity again.So, in our numerical results there was no bi-stability, which could be expected on the basis of the curve shapes in Fig. 4: at any fixed set of cavity parameters at most only one of the two generation types could be stable irrespective of the initial field distribution within the resonator. For reliable switching of the generation type it is necessary to adjust polarisation controllers or to change other cavity parameters.For instance, curves 1-3 in Fig. 4 correspond to different resonator length (12.0, 12.3 and 12.6 m accordingly).It can be seen that as the resonator parameters are shifted towards the boundary of single-pulse generation, the width of the stable domain is reduced.Indeed, for curve 3 in Fig. 4 the unity-gain point almost coincides with the extremum, so that even small intensity fluctuations may bring the system into the unstable region and lead to decay of the pulse.If the resonator length is further increased the stable single-pulse generation becomes impossible and the laser starts generating quasi-stochastic wave packets.Conclusions that we drew on the basis of the above-conducted analysis are also valid for the effect of generation type switching observed in our experiments due to changing the parameters of polarisation controllers.It should also be noted that the set of curves presented in Fig. 4 for dependence of gain per round-trip on the pulse power is qualitatively very well reproduced in analytical treatment of laser generation, in which every optical element of the laser corresponds to a 2×2 matrix.The detailed analytical study of these phenomena will be published elsewhere. Conclusion To the best of our knowledge we present for the first time a noise-like generation regime in ytterbium fiber laser with all-positive dispersion and mode locking due to non-linear polarisation evolution.Both the experimental and numerical studies show an unusual feature typical of this new generation type that is a double-structured (pico-and femto-second) non- Fig. 4 . Fig. 4. Net gain per roundtrip vs. initial pulse power.Curves 1-3 correspond to slightly different cavity parameters close to boundary of single-pulse generation regime area.
3,865.4
2009-11-09T00:00:00.000
[ "Physics" ]
Chiral phonons in quartz probed by X-rays The concept of chirality is of great relevance in nature, from chiral molecules such as sugar to parity transformations in particle physics. In condensed matter physics, recent studies have demonstrated chiral fermions and their relevance in emergent phenomena closely related to topology1–3. The experimental verification of chiral phonons (bosons) remains challenging, however, despite their expected strong impact on fundamental physical properties4–6. Here we show experimental proof of chiral phonons using resonant inelastic X-ray scattering with circularly polarized X-rays. Using the prototypical chiral material quartz, we demonstrate that circularly polarized X-rays, which are intrinsically chiral, couple to chiral phonons at specific positions in reciprocal space, allowing us to determine the chiral dispersion of the lattice modes. Our experimental proof of chiral phonons demonstrates a new degree of freedom in condensed matter that is both of fundamental importance and opens the door to exploration of new emergent phenomena based on chiral bosons. Quasiparticles in solids fundamentally govern many physical properties, and their symmetry is of central importance.Chiral quasiparticles are of particular interest.For example, chiral fermions emerge at degenerate nodes in Weyl semimetals [1] and chiral crystals [2,3].Their chiral characters are directly manifested by a chiral anomaly [7] and lead to enriched topological properties, including selective photoexcitation by circularly polarized light [8], chiral photocurrent [9], and transport [7].The presence of chiral bosons such as phonons [4][5][6][10][11][12][13][14][15][16][17] and magnons [6,[18][19][20], has also extensively been debated.Chiral phonons are vibrational modes of solids in which the atoms have a rotational motion perpendicular to their propagation with an associated circular polarization and angular momentum.As a result of their angular momentum, chiral phonons can carry orbital magnetic moments, enabling a phono-magnetic effect analogous to the opto-magnetic effect from other helical atomic rotations [21,22].Correspondingly, the phonons can create an effective magnetic field, which has been invoked to explain the observation of excited magnons [23] and enables their excitation through ultrafast angular-momentum transfer from a spin system [24].Whereas a phononic magnetic field has so far been discussed primarily at the Γ point, chiral phonons naturally arise in noncentrosymmetric materials away from the zone center, and are based on a fundamentally different symmetry. 
Experimental observation of phonon chirality has proven to be challenging.If atomic rotations are confined in a plane containing the phonon propagation direction (circular phonons), the mode cannot possess a chiral character (see Supplementary Information for symmetry consideration), as occurs for non-propagating phonons at Γ and other highsymmetry points.Therefore, results based on optical-probe techniques, such as chiroptical spectroscopy [16] and circularly polarized Raman scattering [17], are insufficient to identify the presence of chiral phonons because of the large wavelength of optical photons, restricting the exploration very close to the Γ point.The first claim of observation of a chiral phonon was made at the high-symmetry points of a monolayer transition-metal dichalcogenide [5], though it has been argued to be inconsistent with symmetry arguments [6].Thus, establishing an experimental method that directly verifies the chiral character of phonons is strongly demanded. In this work, we demonstrate chiral phonons in a chiral material at general momentum points in the Brillouin zone.We probe the chirality of phonons using resonant inelastic X-ray scattering (RIXS) with circularly polarized X rays.Our strategy rests on the fact that circularly polarized X rays are chiral and is inspired by the use of resonant elastic X-ray scattering to probe the chirality of a static lattice by using circularly polarized X rays on screw-axis forbidden reflections [25].Using RIXS, circularly polarized chiral photons can couple to dynamic chiral phonon modes by transferring angular momentum, and the process can occur at general momentum points in reciprocal space.Our theoretical analysis shows that the observed circular dichroism in RIXS is caused by the orbitals of the resonant atoms that align in a chiral way determined by the chiral crystal structure; we calculate the angular momentum of the phonons at the corresponding Q point using density-functional theory (DFT). RIXS is a two-step process in which the energy of the incident photon with a given polarization coincides (resonates) with an atomic X-ray absorption edge of the system [26].For RIXS at the O K edge, an incident photon excites an electron from the O 1s inner shell to the 2p outer shell.The combined core hole and excited electron form a short-lived excitation in this intermediate state that interacts with the lattice and creates phonons as it deforms its local environment [27,28].The final RIXS step involves the de-excitation of the electron from 2p to 1s, causing the emission of a photon while leaving behind a certain number of phonons in the system.The detected energy and momentum of the emitted photon are directly related to the energy and momentum of the phonon created in the solid. To illustrate the mechanism by which RIXS excites chiral phonons in quartz, we consider a Si-O chain in which the O ions bond to the Si ions via the 2p orbital pointing towards the central axis of the chain (see Fig. 1 and Fig. 
S1). While this O 2p orbital is unchanged in the local frame of the ligand as it rotates around the central axis with an angle φ, in the global frame its direction changes upon rotation. We describe the spatial coordinate of the phonon by the angle φ and denote the creation operator of an electron in the 2p orbital along the global x axis as p_x† and along the global y axis as p_y†. We construct the RIXS intermediate-state Hamiltonian H′ such that during an (adiabatically slow) rotation of the atom around the z axis, the ground-state wavefunction always points towards the center of rotation (see Supplementary Information for a detailed derivation); the Hamiltonian is written in terms of the core-hole density operator, the vector operator p = (p_x, p_y), and the Pauli matrices σ_i with i = x, y, z. The RIXS operator that takes the system from the ground state |0⟩ to the final state with m phonon modes can be evaluated to lowest order in α using the ultrashort core-hole lifetime expansion [27]. Introducing the circular polarization basis, where a fully left circularly polarized photon corresponds to ε_L = (1, 0) and a right one to ε_R = (0, 1), the resulting RIXS amplitude (see Supplementary Information) shows that angular momentum is transferred to the phononic system when the incident and scattered photons have different circular polarization. Figure 1 shows conceptually how such interactions between circularly polarized photons and the lattice can launch rotational lattice vibrations through this angular momentum transfer. As our target material, we choose the prototypical chiral crystal, quartz (α-SiO2), in which SiO4 tetrahedra form a chiral helix along [001] (Fig. 2). The resulting chiral space group is either P3221 (left quartz, Fig. 2a) or P3121 (right quartz, Fig. 2b). A recent DFT study [15] pointed out the chirality and phonon angular momentum of some phonon branches and demonstrated the reversal of chirality between opposite enantiomers, as well as the absence of phonon angular momentum at the Γ point. RIXS spectra were taken at the reciprocal-space point Q1 indicated in the Brillouin zone (Fig. 2c) (see Methods for details). The spectrum for various incident photon energies (shown in Fig. 3) shows clear peaks on the energy-loss side at resonance, which become suppressed for energies further away from resonance. Note that the energy resolution is insufficient to assign the peaks to individual phonons [29]. All peaks above the energy of the highest phonon mode of ~0.2 eV [29] are the result of higher-harmonic phonon excitations. Figure 4 shows the C+ and C- RIXS spectra from left- (4a) and right- (4b) handed quartz and their dichroic contrasts (4c) at 20 K. We see a clear contrast between C+/C-, and the dichroism changes sign for the opposite chiral enantiomers, indicating that it is caused by the chirality of the modes. We find similar contrast between C+/C- at the other reciprocal points, with different RIXS spectra due to different phonon energies (dispersion) and different RIXS cross sections (see Fig. S2 in the Supplementary Information). These observations demonstrate unambiguously that circularly polarized photons couple to chiral phonons, with the chirality of the phonons defined by the lattice chirality, and that RIXS with circularly polarized X rays can be used to probe phonon chirality. We use DFT to calculate the phonon dispersion and phonon circular polarization for all phonon branches, and show their dispersion between Q1 and Γ in Fig. 
5a for left quartz (details found in Methods; see Figs. S4 and S5 for other directions in reciprocal space and other components of the circular polarization vector). Note that, since we are interested in low-symmetry points in the Brillouin zone, we show a different direction from that in Ref. [15], as well as additional bands. The color scale indicates the phonon circular polarization (S) [4], which indicates the chirality of a phonon mode; it is defined, for example for the z component S_z, as S_z = Σ_α ( |⟨r_α,z | ε_α⟩|² − |⟨ℓ_α,z | ε_α⟩|² ), where the ε_α are the phonon eigenvectors of each atom α (normalized such that Σ_α |⟨ε_α | ε_α⟩| = 1), and |r_α,z⟩ and |ℓ_α,z⟩ are eigenvectors corresponding to pure right- and left-handed rotations. The phonon angular momentum (L) is then given by L = ℏS [4]. We also report the mode effective charges (Fig. 5b) as a metric of the strength of the interaction between mode and light, calculated following the method of Ref. [30]. When we match the calculated and measured modes, we find that those with the strong dichroic contrast are those calculated to have a large chirality. The peak with the largest contrast is at ~50 meV for all the reciprocal points we measured [Q1 in Fig. 4c, and Q2 = (-0.29, 0.14, 0.32) and Q3 = (-0.25, 0.25, 0.32) shown in Fig. S2], suggesting that a mode that has large phonon circular polarization and energy around 50 meV dominates the contrast. The mode at the energy of ~47.6 meV at Q1, which we refer to as mode X, matches these conditions (see Supplementary Figure 4 and Supplementary Table 1, which tabulates the energy and phonon circular polarization of all phonon modes at the measured Q points). Figure 5c and Supplementary Movie 1 visualize mode X at Q1, and show that it involves a circular motion of the atoms. Importantly, the mode satisfies the symmetry requirement for a chiral phonon mode. For non-magnetic quartz, the RIXS spectra at the O K edge are mainly sensitive to the O 2p orbital states. This means that phonon modes that significantly affect, for example, the orientation of the 2p orbital states will create large scattering contrast in RIXS, and will also be strongly X-ray-polarization dependent. Figure 5d and Supplementary Movie 2 visualize the evolution of the local charge quadrupoles at the O site when the chiral phonon mode is excited (see Fig. 5c or Supplementary Movie 1). These charge quadrupoles reflect the time evolution of the O 2p orbitals, which shows that the dichroic RIXS signal is due to an evolution of the chiral stacking of the O 2p orbital moments in the chiral phonon excitation, as described in (1) and (2) above. Note that the mode with the largest contrast is not the phonon mode with the largest phonon circular polarization. Instead, the mode has a large mode effective charge at Q1, as shown in Fig. 5b. This indicates that the contrast depends not only on the chiral amplitude of a mode itself but also on the modulation of the electronic charges with respect to the plane of the electric fields of the circularly polarized X rays. Note that there is an additional consideration: the phonon circular polarization specifies a preferred rotation direction of the atoms in the excitation, which can only be excited with the matching circular photon polarization (see Fig. 1). As modes of opposite chirality have different energies (see Fig. S4 and Supplementary Table 1, which show that modes with opposite chirality, degenerate at the Γ point, split away from the zone center), the peaks, which are composed of several modes, show a peak shift when taken with opposite circular polarization (see Fig. 4). In Fig. 
5e we show the associated magnetic moments induced by the chiral motion of the charged ions in the chiral phonons, which we calculate by extending the method used in Refs. [21,22] so that it is applicable at an arbitrary point in Q-space. We begin by constructing the atomic circular polarization vector s_α as s_α = [S_α,x, S_α,y, S_α,z] [see (3)], yielding the angular momentum of each atom as l_α = ℏ s_α. The magnetic moment (m_α) of each atom participating in the phonon is m_α = γ_α l_α, where γ_α is the gyromagnetic ratio tensor, which is derived from Z*_α, the Born effective charge tensor, and M_α, the atomic masses. The phonon magnetic moment is then simply m = Σ_α m_α. We show our calculated mode- and Q-point-resolved magnetic moments in Fig. 5e and see that chiral phonons in quartz carry magnetic moments throughout the Brillouin zone, although the calculated magnetic moments are relatively small due to the low values of γ_α. These phonon magnetic moments do not normally create a net magnetization due to the presence of time-reversal-related pairs with opposite chirality and magnetic moment. If time-reversal symmetry is broken, however, population imbalances between the chiral pairs can be created [31]. Figure 5e also suggests that the phonon chirality can be investigated directly through interactions with the phonon magnetic moment, using, e.g., polarized inelastic neutron scattering. In conclusion, we have used inelastic X-ray scattering with circularly polarized X rays to demonstrate the chiral nature of the phonons in chiral quartz crystals, and in turn have established a fundamental methodology for characterizing chiral phonons. With the technique established by this proof-of-principle study, the chirality of phonons at general momentum points can be characterized, opening up new perspectives in chiral phononics. For example, our work indicates that RIXS can be used to quantify the role of chiral phonons in exotic phenomena proposed in topological materials [32][33][34][35], as well as to characterize interactions such as electron- and spin-couplings with chiral phonons [14,[36][37][38][39]. Methods Resonant inelastic X-ray scattering. RIXS measurements were performed at Beamline I21 of the Diamond Light Source in the UK [41]. The photon energy used is around the O K edge, and the polarization is circular (C+/C-). The energy resolution is estimated as 28 meV from the full width at half-maximum of the elastic peak from a carbon tape. The enantiopure single crystals, purchased commercially, have their widest face perpendicular to the [001] axis. The manipulator installed at the beamline allows us to rotate the crystal about the azimuthal angle, enabling access to different momentum points during the experiment: Q1 = (-0.25, 0, 0.32), Q2 = (-0.29, 0.14, 0.32), and Q3 = (-0.25, 0.25, 0.32). The XAS obtained prior to the RIXS measurements is based on the total-electron-yield method. 
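A minimal sketch of how the phonon circular polarization and the resulting magnetic moment can be evaluated from a set of phonon eigenvectors is given below. The array shapes, the scalar per-atom gyromagnetic ratio e·Z*/(2M) used in place of the full tensor, and the mass-weighted normalization are simplifying assumptions made for illustration; the full calculation in the text uses the tensor quantities obtained from DFT.

```python
import numpy as np

HBAR = 1.054571817e-34          # J*s
E_CHARGE = 1.602176634e-19      # C

RIGHT = np.array([1, 1j]) / np.sqrt(2)   # |r> in the (x, y) plane
LEFT = np.array([1, -1j]) / np.sqrt(2)   # |l>

def circular_polarization_z(eigvec):
    """z component of the phonon circular polarization S_z for one mode.
    eigvec: complex array of shape (n_atoms, 3), normalized so sum_a |e_a|^2 = 1."""
    e = np.asarray(eigvec, dtype=complex)
    return sum(abs(np.vdot(RIGHT, a[:2])) ** 2 - abs(np.vdot(LEFT, a[:2])) ** 2
               for a in e)

def magnetic_moment_z(eigvec, born_charge_zz, masses_kg):
    """Rough z-axis phonon magnetic moment (J/T), assuming a scalar
    gyromagnetic ratio e*Z*/(2M) for each atom instead of the full tensor."""
    mu = 0.0
    for a, z_star, m in zip(np.asarray(eigvec, dtype=complex), born_charge_zz, masses_kg):
        s_z = abs(np.vdot(RIGHT, a[:2])) ** 2 - abs(np.vdot(LEFT, a[:2])) ** 2
        mu += E_CHARGE * z_star / (2 * m) * HBAR * s_z
    return mu
```

Dividing the result of the second function by the nuclear magneton (about 5.05 x 10^-27 J/T) gives the units used in Fig. 5e.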
Density-functional theory.Density-functional calculations were performed using the Abinit software package (v.9) [42,43] and the Perdew-Burke-Ernzerhof exchange-correlation functional [44] with the dispersion correction of Grimme [45].The phonon band structure was determined using density functional perturbation theory [42], using norm-conserving pseudopotentials, a 38 Ha plane wave energy cutoff, an 8 × 8 × 8 Monkhorst-Pack grid in kspace [46], and a 4 × 4 × 4 grid in Q-space.Calculations of the electronic and phononic structure were additionally performed explicitly at the experimentally measured Q points.Frozen-phonon calculations were performed using the projector-augmented wave (PAW) method [47] to obtain local quadrupole moments with the multipyles post-processing script [48].These calculations used a 192 Ha plane wave energy cutoff within the atomic spheres and a 32 Ha cutoff without.The default pseudopotentials and PAW datasets from the Abinit library were used. Data availability Experimental and model data are accessible from the PSI Public Data Repository [40]. Fig. 1 | Fig. 1 | Angular-momentum transfer in a RIXS experiment.The angular momentum of the photons [opposite between C+ (up, red) and C-(down, blue)] is transferred to a crystal, causing a rotation in this case of anions (orange spheres with p orbitals) relative to their neighboring cations (green spheres). Fig. 2 | Fig. 2 | Crystal structure and Brillouin zone of quartz.Crystal structures of a, left quartz and b, right quartz, and c, the Brillouin zone with Q1, where the RIXS spectra has been taken. Fig. 3 | Fig. 3 | XAS and photon-energy dependence of RIXS.a, X-ray absorption spectrum around the O K edge, and b, RIXS spectra taken with C+ for left-handed quartz at Q1 = (-0.25,0, 0.32) for the incident photon energies indicated by the dashed lines in a.Each spectrum in b is vertically shifted to enhance visibility.Errorbars are smaller as the line (a) and in Standard Deviation (SD) (b). Fig. 4 | Fig. 4 | RIXS with circularly polarized X Comparison between a, left quartz and b, right quartz, taken at the incident photon energy of 534 eV and Q1 = (-0.25,0, 0.32).c, Extracted circular dichroic components of the data shown in a and b.Errorbars in SD Fig. 5 | Fig. 5 | Phonon dispersion and chiral phonon mode.a, Low-energy phonon dispersion for left quartz along the G -Q1 direction.Colors represent the z component of the phonon circular polarization.b, The same phonon band structure with colors representing mode effective charges (a measure of the degree in which the electronic charge distribution is perturbed by the phonons), in units of the elementary charge.c, The chiral phonon mode at Q1 = (-0.25,0, 0.32) (indicated with an arrow in a) showing the main chiral revolutions of the oxygen atoms that have a different phase along the chain.d, Associated change in the local quadrupole moment (associated with the O 2p orbital) for a revolving oxygen atombetween the phonon at phase 0 and phase π (black vectors representing an increase in the atomic quadrupole moment between its position at phonon phase 0 and its position at phase π and green vectors representing a decrease).e, The phonon band structure colored according to the magnitude of the magnetic moment of the phonons, in units of the nuclear magneton.
4,107.6
2023-02-08T00:00:00.000
[ "Physics", "Materials Science" ]
ANALYSIS OF STRAIGHTENING FORMULA. The straightening formula has been an essential part of a proof showing that the set of standard bitableaux (or the set of standard monomials in minors) gives a free basis for a polynomial ring in a matrix of indeterminates over a field. The straightening formula expresses a nonstandard bitableau as an integral linear combination of standard bitableaux. In this paper we analyse the exchanges in the process of straightening a nonstandard pure tableau of depth two. We give precisely the number of steps required to straighten a given violation of a nonstandard tableau. We also characterise the violation which is eliminated in a single step. INTRODUCTION. The straightening formula has been an integral part of the theorem showing that the set of all standard monomials in minors of a matrix of indeterminates forms a free basis of the polynomial ring in those indeterminates. It transforms a nonstandard bitableau into an integral linear combination of the standard bitableaux, which makes sense only after using a correspondence between bitableaux and monomials in minors. The straightening formula is given first in Rota-Doubillet-Stein [1] and given again and exploited greatly in Desarmenien-Kung-Rota [2], DeConcini-Procesi [3], and DeConcini-Eisenbud-Procesi [4]. Abhyankar [5] gives a proof of the above-mentioned theorem by explicitly counting the dimension of a vector space generated by all the standard bitableaux of area V and length less than or equal to p, and deduces the result that the ideal generated by the p by p minors of a matrix of indeterminates is Hilbertian. In this exposition one also finds a form of the straightening formula which is very amenable to analysis. In a sequel to [5], Abhyankar has proved the Hilbertianness of a much more general determinantal ideal following the same strategy of counting and straightening, eliminating the proof of linear independence of standard bitableaux [6]. He also states the straightening formula in a much more general form and proposes the problem: given a nonstandard bitableau T, if T = Σ_i c_i T_i is the expression for T given by the straightening formula, where the T_i's are standard bitableaux, can one determine the c_i's in terms of T? He defines the final integer function there, which helps to give the coefficients c_i. He also gives a recursion satisfied by this fin function, and states the problem of finding fin in terms of a given nonstandard bitableau. More knowledge about the number of steps required to straighten a given nonstandard bitableau will help in finding this fin function. In this paper we analyse the formula as given in [5]. For an analysis of the straightening formula for a nonstandard unitableau it is enough to look at a nonstandard pure unitableau of depth two. We state a form of the straightening formula using an arbitrary violation. The proof of this form is identical to the proof in [5]. We give the exact number of steps required to eliminate the violation from all unitableaux obtained in the straightening. As a part of our proof we give a detailed analysis of the exchanges in the straightening and characterise a violation which gets eliminated in a single step (a good violation). NOTATION AND TERMINOLOGY. Let X = [X_ij], 1 ≤ i ≤ m, 1 ≤ j ≤ n, be a matrix all of whose entries are indeterminates over a field K. Let Y be the m by m+n matrix formed by taking the first n columns of Y to be those of X and putting the (n+i)th column to be the (m-i+1)th column of the m by m identity matrix for 1 ≤ i ≤ m. 
Throughout the discussion we use the word "minor" with the meaning "determinant of a minor". In the proof of the theorem showing the set of standard monomials in maximal-size minors of Y to be a free basis of K[Y], the spanning part is done by repeated applications of the straightening formula to a nonstandard unitableau of pure length m and bounded by m+n. Using this, one proves that the standard monomials in minors of X form a free basis of K[X] by invoking the correspondence between all minors of X and the maximal-size minors of Y, and then the correspondence between tableaux and monomials in minors of X. A univector of length m and bounded by p is an increasing sequence of m positive integers which are bounded by p. To a univector of length m and bounded by p there corresponds an m by m (i.e., maximal-size) minor of an m by p matrix of indeterminates, which is formed by picking up the corresponding m columns. A unitableau of depth d is a sequence of d univectors, written as A(1)A(2)...A(d). By a pure unitableau of length m and bounded by p we mean a unitableau each of whose constituent univectors has length m and is bounded by p. Given two univectors A = (A_1 < A_2 < ... < A_p) and B = (B_1 < B_2 < ... < B_q), we say that A ≤ B if p ≥ q and A_i ≤ B_i for 1 ≤ i ≤ q. (2.1) (A small illustrative check of this comparison rule is given at the end of this text.) A monomial in maximal-size minors of an m by p matrix of indeterminates corresponding to a standard pure unitableau of length m and bounded by p is said to be standard. For the analysis one has to concentrate on unitableaux of pure length m and depth two. Let all unitableaux be bounded by p here onwards. Let mom(X, AB) be a monomial in maximal-size minors of an m by p matrix X of indeterminates over a field K. Given a unitableau AB of pure length m and depth two as PROOF. The proof of this theorem is the same as that in [5]. PROOF: Since every element of {A_1, A_2, ..., A_{v-q-1}} ∪ b is smaller than every element of {A_{v-q}, A_{v-q+1}, ..., A_m} \ a, and by (4.0) and the given, card({A_1, A_2, ..., A_{v-q-1}} ∪ b) = v - q - 1 + r ≥ v, ... B_m} ∪ a, and by (4.0). From the above theorem it is clear that starting with AB and applying the straightening formula N_AB(v) times we can express AB, or mom(X, AB), as an integral linear combination of unitableaux which do not have v in their violation sets. In this process we do not perform any cancellations as we go. With this in mind, talking of the number of steps required to straighten a nonstandard unitableau is not confusing. By the above theorem, and noting that the oddity function drops at each step, it follows that a nonstandard unitableau AB can be straightened in at most Σ_{1 ≤ i ≤ m} N_AB(i) steps. ACKNOWLEDGEMENTS. I would like to thank Professor Shreeram S. Abhyankar for his continuous encouragement and guidance while doing my Ph.D. thesis with him. I would like to thank the Bhaskaracharya Pratishthana for its hospitality.
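As a small illustration of the comparison rule (2.1) and of what a violation looks like for a depth-two pure unitableau AB, the sketch below checks the rule directly. It assumes, as the text's use of "violation set" suggests, that AB is standard exactly when A_i ≤ B_i at every position; the function names and the example univectors are invented for illustration only.

```python
def leq(A, B):
    """A <= B in the sense of (2.1): len(A) >= len(B) and A[i] <= B[i]
    for every position i of B."""
    return len(A) >= len(B) and all(a <= b for a, b in zip(A, B))

def violations(A, B):
    """1-indexed positions v where A[v] <= B[v] fails for a depth-two pure
    unitableau AB; the text calls these the violation set of AB."""
    return [i for i, (a, b) in enumerate(zip(A, B), start=1) if a > b]

# Example: univectors of pure length 4 (entries strictly increasing).
A = (1, 3, 6, 7)
B = (2, 4, 5, 8)
print(leq(A, B), violations(A, B))   # False [3] -> a single violation at position 3
```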
1,561
1988-01-01T00:00:00.000
[ "Mathematics" ]
Near-complete photoluminescence retention and improved stability of InP quantum dots after silica embedding for their application to on-chip-packaged light-emitting diodes Silica is the most commonly used oxide encapsulant for passivating fluorescent quantum dots (QDs) against degradable conditions. Such a silica encapsulation has been conventionally implemented via a Stöber or reverse microemulsion process, mostly targeting CdSe-based QDs to date. However, both routes encounter a critical issue of considerable loss in photoluminescence (PL) quantum yield (QY) compared to pristine QDs after silica growth. In this work, we explore the embedment of multishelled InP/ZnSeS/ZnS QDs, whose stability is quite inferior to CdSe counterparts, in a silica matrix by means of a tetramethyl orthosilicate-based, waterless, catalyst-free synthesis. It is revealed that the original QY (80%) of QDs is nearly completely retained in the course of the present silica embedding reaction. The resulting QD–silica composites are then placed in degradable conditions such as UV irradiation, high temperature/high humidity, and operation of an on-chip-packaged light-emitting diode (LED) to attest to the efficacy of silica passivation on QD stability. Particularly, the promising results with regard to device efficiency and stability of the on-chip-packaged QD-LED firmly suggest the effectiveness of the present silica embedding strategy in not only maximally retaining QY of QDs but effectively passivating QDs, paving the way for the realization of a highly efficient, robust QD-LED platform. Introduction Based on substantial progress in the photoluminescent (PL) qualities of semiconductor quantum dots (QDs), which was achieved by the incessant development of colloidal synthetic methodology and sophisticated engineering of core/shell heterostructures, they have been highlighted as key materials for various optoelectronic devices including next-generation light-emitting diodes (LEDs), lasers, and luminescent solar concentrators. [1][2][3] In particular, QDs have been already commercially applied to LCD backlight units as color-converting emitters in combination with a blue LED pumping source. In this commercialized version, QDs are dispersed in a large-area polymeric resin film and the resulting QD-resin composite is then sandwiched by two oxide-based gas barrier films in order to suppress the photodegradation of QDs from permeable oxygen and water vapor molecules. Moreover, for long-term reliability of this type of QD film assembly, often referred to as QD enhancement film (QDEF), the blue LED is placed distantly from the QDEF to avoid direct exposure of QDs to high temperature and the high photon flux from the LED at operation. 4 However, QDEF is a costly design in that it requires a large quantity of QDs and expensive gas barrier films. Therefore, a more cost-effective alternative such as direct QD integration onto an LED, also called on-chip packaging, should be pursued. Together with huge advancement in PL attributes of QDs towards higher quantum yield (QY) and narrower emission bandwidth, their stability against degradable processings and conditions including repeated purification cycles, ligand exchange, prolonged UV irradiation, or thermal treatment has been steadily improved by the elaborate control on core/shell heterostructure mainly based on multi-shelling and/or thick- (or giant-) shelling strategies. 
[5][6][7][8] However, in the case of on-chip-packaged QD-LEDs (herein, QD-LEDs stand for optically-pumped, color-converting devices, not electrically-driven ones), where harsher environments of considerable heat plus intense photon flux emitted from the LED chip are present, even structurally strengthened QDs are not sufficiently robust, invariably losing their initial fluorescent intensity over the course of device operation. The most viable strategy to protect QDs against degradable environments is to physically encase them with chemically stable oxide phases in the form of individual overcoating or collective embedding. The most widely used oxide for QD encapsulation is sol-gel-derived silica based on tetraethyl orthosilicate (TEOS) as a silane precursor, even though other non-silica oxide candidates such as In2O3 and ZnGa2O4 have been proposed to individually overcoat InP 9 and Cu-In-S QDs, 10 respectively. The silica phase can be produced as either an overlayer [11][12][13][14][15][16][17][18] or a matrix, 19-24 depending on the synthetic details of the Stöber or reverse microemulsion (RM) process. However, both routes encounter a critical issue of considerable QY loss relative to pristine QDs after silica growth, since ligand exchange of QDs for phase transfer and/or primer formation, hydrolyzed TEOS, and the ammonia catalyst likely lead to the chemical devastation of the QD surface. 11,14,16,18,19 Therefore, a few notable strategies to mitigate such a silica processing-accompanying side effect have been developed. Jun et al. adopted the combination of 6-mercaptohexanol as a ligand exchange reagent and propylamine as a mild base catalyst in an effort to minimize the oxidative damage of the QD surface in the course of silica processing, enabling the formation of a highly luminescent and photostable CdSe/CdS/ZnS QD-silica monolith. 25 The Murase group suggested that the silanization (or ligand exchange) of hydrophobic Cd-based QDs with partially hydrolyzed TEOS species was a crucial pretreatment step prior to the silica growth reaction in not only maximally retaining the original QY of QDs but also ensuring the homogeneous formation of the silica shell. 14 Meanwhile, as a simple, non-chemical approach to the formation of QD-silica composites, pre-prepared micron-sized mesoporous silica particles were employed as physical templates, wherein Cu-In-S or perovskite CsPbBr3 QDs could be infiltrated into the pores by means of a swelling and solvent evaporation method. 26,27 Although the QY and emission bandwidth of non-toxic (i.e., Cd-free) InP QDs are fast approaching those of CdSe-based counterparts, the former QDs are quite inferior in stability against degradable conditions to the latter ones. On that account, InP QDs are more susceptible to the silica reaction conditions and thus may suffer from more PL quenching after silica growth compared to CdSe-based ones. For instance, the original QY (30-50%) of red InP/ZnS QDs became severely deteriorated to 15% after RM-processed silica overcoating. 12 Therefore, a special silica processing for such delicate InP QDs to enable the maximal retention of the original QY is in demand, but work on the formation of InP QD-silica composites has been rarely reported to date.
Very recently, a simple, novel silica encapsulation for perovskite CH3NH3PbBr3 QDs, which are highly vulnerable to contact with various polar solvents due to their strong ionic nature, has been developed by a waterless, catalyst-free synthesis, where the trace amount of water present in the toluene solvent was used to hydrolyze tetramethyl orthosilicate (TMOS) at room temperature (25 °C) in a sealed reactor. 28 On the basis of the above strategy for the formation of QDs-silica, in this work we explore the embedment of multishelled InP/ZnSeS/ZnS QDs in a silica matrix with a synthetic modification. Instead of the sealed system mentioned above, we adopt an open system, where the silica reaction is directly exposed to a relative humidity (RH) of 70% at 30 °C. It is revealed that under this condition the original QY of QDs is well preserved throughout the silica reaction. To attest to the efficacy of silica encapsulation on QD stability, the resulting InP QD-silica composites are then applied as color-converters with a blue LED in on-chip packaging, showing 88% retention of the initial QD emission after a long-term operation of 100 h at a driving current of 60 mA. Synthesis of multishelled InP/ZnSeS/ZnS QDs Our multishelled InP/ZnSeS/ZnS QDs were prepared by following our previous synthetic protocol 29 with a slight modification. For a typical preparation of red-emitting InP core QDs, 0.45 mmol of indium chloride (InCl3), 2.2 mmol of zinc chloride (ZnCl2) and 6 ml of oleylamine (OLA) were placed in a three-necked flask, and then this mixture was degassed at 120 °C for 60 min and further heated to 180 °C under nitrogen flow. At that temperature, 0.35 ml of tris(dimethylamino)phosphine (P(N(CH3)2)3, P(DMA)3) was swiftly injected and the reaction was maintained for 25 min. Consecutive growth of the composition-gradient ZnSeS intermediate shell proceeded by repeated alternate injections of anionic and cationic shell precursors in the following manner. The Se stock solution, prepared by dissolving 0.12 mmol of selenium (Se) in 1 ml of trioctylphosphine (TOP), was introduced and reacted at 200 °C for 30 min. Then the Zn stock solution, prepared by dissolving 1.58 mmol of zinc stearate in 4 ml of 1-octadecene (ODE), was injected, followed by the reaction at 220 °C for 30 min. Then, the Se-S stock solution (0.06 mmol of Se and 2 mmol of sulfur (S) dissolved in 1.6 ml of TOP) was injected and reacted at 240 °C for 30 min, and then the above Zn stock solution was introduced, followed by the reaction at 260 °C for 30 min. For the last ZnSeS intermediate shelling, another S-richer Se-S stock solution (0.02 mmol of Se and 4 mmol of S dissolved in 2 ml of TOP) was added and reacted at 280 °C for 30 min, followed by the injection of the same Zn stock solution and the reaction at 300 °C for 60 min. For the deposition of the ZnS outer shell, 5 ml of 1-dodecanethiol was slowly introduced and reacted at 200 °C for 60 min, and then another Zn stock solution, prepared by dissolving 3 mmol of Zn acetate in 3 ml of oleic acid, was injected, followed by the reaction at 190 °C for 120 min. After cooling the reaction, the as-synthesized multishelled InP/ZnSeS/ZnS QDs were subjected to repeated purification cycles by precipitation/redispersion with an ethanol/hexane combination using centrifugation (8000 rpm, 15 min for each) and finally redispersed in hexane or toluene.
Preparation of QD-silica composites To 2 ml of a toluene dispersion of InP/ZnSeS/ZnS QDs having an optical density (OD) of 2.0 adjusted at 574 nm was added 18 ml of toluene, resulting in 20 ml of dilute QD dispersion with an OD of 0.2 at the same wavelength. 1 ml of TMOS was added to the above dispersion. Then, this mixture was placed and stirred in a thermohygrostat (TH-PE-100, JEIO Tech., Korea) set at a temperature of 30 °C and an RH of 70%. This silica embedding reaction proceeded typically for 20-24 h, whereupon the flocculation of QD-embedded silica was observed in the solution. Such flocculated particles were precipitated by centrifugation (8000 rpm, 10 min) and washed twice with an excess of acetone. Stability tests and QD-LED fabrication Both pristine QDs and QDs-silica were powdered by completely drying them in a vacuum oven at 60 °C for the following QD stability tests and QD-LED fabrication. The two comparative powdered QDs were closely packed into a stainless sample holder (diameter: 7 mm, depth: 0.5 mm) and then identically exposed to degradable conditions of UV irradiation and 85 °C/85% RH for certain periods of time using a 365 nm multi-band UV lamp (225 mW cm⁻²) and a thermohygrostat, respectively. For the fabrication of QD-LEDs, ca. 6 mg of pristine QDs and ca. 18 mg of QDs-silica were individually mixed first with 0.3 g of a thermally curable epoxy resin (YD-128, Kukdo Chem., Korea). To these QD-resin mixtures was added 0.3 g of a hardener (KFH-271, Kukdo Chem., Korea). The resulting pastes were dispensed into a 5 mm × 5 mm-sized, surface-mount type, blue InGaN LED (λ = 455 nm, Dongbu LED Inc.) mold and then hardened by a sequential thermal curing process of 90 °C for 1 h and 120 °C for 30 min (Fig. S1†). Characterization The absorption spectrum of QDs was recorded by using UV-visible absorption spectroscopy (Shimadzu, UV-2450). PL spectra of QDs and QDs-silica in the forms of dispersion and powder were collected with a 500 W Xe lamp-equipped spectrophotometer (PSI Co. Ltd., Darsa Pro-5200). Absolute PL QYs of QDs in dispersion were assessed at an excitation wavelength of 450 nm in an integrating sphere by using an absolute PL QY measurement system (C9920-02, Hamamatsu). Relative QYs were also measured by calculating the integrated emission of the QD sample in dispersion relative to that of rhodamine 6G (PL QY of 95%) in ethanol at an identical OD, resulting in the same values as the absolute ones. PL lifetimes were measured by employing the time-correlated single-photon counting (TCSPC) method on an FS5 spectrophotometer (FS5, Edinburgh Instruments) equipped with an EPL-375 nm picosecond pulsed diode laser. Transmission electron microscopic (TEM) images of QDs and QD-silica composites were taken by a JEM-2100F (JEOL Ltd.) operating at 200 kV. Electroluminescent (EL) data of the fabricated QD-LEDs such as EL spectrum and luminous efficacy (LE) were acquired with a diode array rapid analyzer system (PSI Co. Ltd) in an integrating sphere. Results and discussion Although silica in the form of an overlayer or matrix is effective in passivating QDs from oxidative environments, most QDs experience an inevitable PL quenching during the silica reaction, which proceeds mostly by the conventional Stöber or RM protocol. The degree of PL loss appears rather sensitive to the heterostructure of QDs 18 and the silica processing details. 14,16 In this regard, we first investigated the effect of the present silica processing on the QY of pristine QDs by monitoring the temporal variation in QY over reaction time.
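The QY values discussed below were obtained by the absolute and relative methods described above; the relative determination follows the standard comparative approach. A minimal form, assuming identical OD for sample and reference and neglecting the refractive-index correction that is sometimes applied when the solvents differ (toluene vs. ethanol), is:

```latex
\mathrm{QY}_{\mathrm{QD}} \;\approx\; \mathrm{QY}_{\mathrm{R6G}} \times
\frac{\int I_{\mathrm{QD}}(\lambda)\,\mathrm{d}\lambda}{\int I_{\mathrm{R6G}}(\lambda)\,\mathrm{d}\lambda},
\qquad \mathrm{QY}_{\mathrm{R6G}} = 0.95 .
```

With both emissions measured at the same excitation and identical OD, the comparison reduces to a simple ratio of the two integrated emission spectra, which is why the relative and absolute values agreed here.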
Our pristine red InP/ZnSeS/ZnS QDs, consisting of an intermediate shell of composition-gradient (i.e., ZnS-richer phase outward) ZnSeS and an outer shell of ZnS, with a peak wavelength of 611 nm possess an original QY of 80%. Their absorption spectrum is also presented in Fig. S2† for reference. Only a slight change in QY from 80 to 77% was observed upon introducing 1 ml of TMOS into 20 ml of QD toluene dispersion with an OD of 0.2 at 574 nm, and the overall QY values were well retained in the range of 75-78% even after extended periods of reaction (Fig. 1a). As compared in Fig. 1b, the PL spectrum of the QD solution after a 12 h TMOS reaction stayed nearly unchanged compared to that of the pristine QD solution. To the best of our knowledge, this is an unprecedented result demonstrating the formation of InP QD-silica composites with the original QY nearly intact. Note that the present silica reaction was performed at a high RH of 70% to facilitate the hydrolysis of TMOS, unlike in the silica embedding of perovskite QDs, where a very small amount of water contained in toluene was used for its hydrolysis (i.e., the sealed system). 28 As a preliminary experiment, the silica reaction was attempted in the sealed system, but there was no sign of the generation of a silica phase regardless of the amount of TMOS and reaction time. Additionally, TEOS was used instead of TMOS with other reaction conditions unchanged. Intriguingly, injection of TEOS into the QD toluene dispersion led to an instantaneous QY drop down to 52%, after which this value was largely maintained within the range of 50-55% (Fig. S3†). Such a considerable QY drop during the TEOS reaction should be correlated with its substantially slower hydrolysis rate compared to TMOS. 28 TEOS molecules, which stay unhydrolyzed for a long period of time, likely devastate the surface of QDs, presumably via removal of organic surface ligands throughout the reaction prior to the formation of the silica phase. On the other hand, TMOS molecules that can be rapidly hydrolyzed give rise to a faster silica formation, preventing the deterioration of the QD surface and thus well retaining QY. As shown in the transmission electron microscopic images (Fig. 2a and b), our InP/ZnSeS/ZnS QDs exhibit a distinctively large size of 7 nm compared to other single- 30,31 and multishelled InP QDs 5,32 with typical diameters <5.5 nm. This implies that the present ZnSeS intermediate-shelling strategy in a composition-gradient fashion should lead to the effective relief of the substantial interfacial strain between the InP core and the ZnS outer shell with a large lattice mismatch of 7.7%, thus allowing for a thick shell growth. Fig. 2c and d present low- and high-magnification TEM images, respectively, of QD-silica composites, where individual QDs were well buried in the silica matrix. QDs in the matrix were quite evenly distributed without notable QD agglomerates that could cause a nontrivial PL quenching via Förster resonant energy transfer (FRET). These InP/ZnSeS/ZnS QD-silica composites were completely dried into a powder form (Fig. S4†) for the following stability tests in degradable conditions. To prove the superiority of the present silica embedding strategy in preserving the original QY, an archetypal RM method, where the presence of an ammonia catalyst and water is necessary to produce the silica phase, was applied to encapsulate InP/ZnSeS/ZnS QDs by adopting the protocols in the literature (see ESI for details†).
12,17 Analogous to the detrimental effect of TEOS on the QY of the QD toluene dispersion, the mere addition of TEOS into the QD cyclohexane dispersion gave rise to a marked PL loss. The consecutive introduction of NH4OH for hydrolysis and condensation further abated the PL intensity of the QD dispersion (Fig. 3a). Such a huge drop in PL intensity of InP QDs, which is also consistent with the result (30-50% → 15% in QY) in the literature, 12 is likely unavoidable, even if some synthetic parameters such as the NH4OH amount are controlled. In an attempt to mitigate PL loss in the course of the RM-based silica reaction, two different amounts of NH4OH (28 wt%) were tested. However, the variation of the NH4OH amount was not helpful in minimizing PL loss, although it affected the morphology of the QD-silica composites (Fig. S5†). Specifically, with increasing NH4OH amount the phase fraction of silica in the QD-silica composites increased accordingly without changing the QD size. Fig. 3b shows a PL spectral comparison of three samples in powder form: pristine InP/ZnSeS/ZnS QDs together with two QD-silica composites obtained through the TMOS-based, catalyst-free versus the TEOS-based RM methods. While the former QD-silica composites were comparable in PL intensity to pristine QDs, the latter ones exhibited a considerably low PL intensity. This clearly indicates that our TMOS-based, catalyst-free approach is highly advantageous in maximally retaining the original QY of QDs by effectively preventing the devastation of the QD surface that accompanies the conventional silica reaction. To demonstrate the efficacy of silica embedding on QD stability, two comparative powder samples of pristine QDs versus QDs-silica were first exposed to UV irradiation in an air atmosphere. After an elapsed time of 15 h, QDs-silica maintained 80% of the initial PL intensity, while a more marked reduction (42%) of PL was observed for pristine QDs (Fig. S6†). Subsequently, the same samples were placed in a thermohygrostat set at 85 °C and 85% RH for an extended period of time up to 144 h, consistently showing a much higher PL retention of QDs-silica (30% reduction) over pristine QDs (53% reduction) (Fig. S7†). (Fig. 2 caption: (a and c) low- and (b and d) high-magnification TEM images of pristine QDs (a and b) and QDs-silica (c and d).) These comparative PL retention results of InP/ZnSeS/ZnS QDs without versus with silica encapsulation are by and large in line with those obtained from the earlier CdSe-based QDs. 15,16,18 That is, although silica serves well as a physical barrier to protect QDs against degradable environments and thus impede the devastation of the QD surface, a full retention in PL after silica encapsulation in the form of embedding or overcoating is unlikely. This is presumably because sol-gel-derived amorphous silica consists of a porous network structure (with a porosity of 10-15%), 23 where oxygen and water vapor molecules become accessible to the QD surface, followed by its oxidation and/or corrosion. Two powder samples of pristine QDs and QDs-silica were individually blended with an epoxy resin and on-chip-packaged onto a blue LED chip, and their EL spectra collected at a relatively high input current of 60 mA are compared in Fig. 4. Here, the light conversion efficiency (LCE) of the QD-LED is assessed as the spectral integration ratio of the converted QD emission to the blue LED emission spent for that conversion. The QD-LEDs with pristine QDs and QDs-silica exhibited LCE values of 46 and 48%, respectively.
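As a concrete illustration of the LCE definition just given, the sketch below integrates hypothetical EL spectra numerically. The wavelength window separating the blue and QD bands and the Gaussian spectra themselves are illustrative assumptions, not values from this work.

```python
import numpy as np

def light_conversion_efficiency(wl, el_qdled, el_bare_blue, split_nm=500.0):
    """LCE = (integrated QD emission) / (blue emission consumed by conversion).

    wl           : wavelength grid in nm
    el_qdled     : EL spectrum of the QD-LED (blue leakage + QD band)
    el_bare_blue : EL spectrum of the bare blue LED at the same drive current
    split_nm     : illustrative boundary between the blue and QD bands
    """
    blue = wl < split_nm
    red = ~blue
    qd_emission = np.trapz(el_qdled[red], wl[red])            # converted QD light
    blue_spent = np.trapz(el_bare_blue[blue], wl[blue]) - np.trapz(el_qdled[blue], wl[blue])
    return qd_emission / blue_spent

# Toy usage with made-up Gaussian bands (455 nm blue, 611 nm QD emission)
wl = np.linspace(400, 750, 701)
blue_band = np.exp(-0.5 * ((wl - 455) / 10) ** 2)
qd_band = np.exp(-0.5 * ((wl - 611) / 20) ** 2)
el_bare = 1.0 * blue_band
el_qdled = 0.4 * blue_band + 0.14 * qd_band
print(f"LCE ~ {light_conversion_efficiency(wl, el_qdled, el_bare):.2f}")
```

With these toy amplitudes the ratio comes out near the mid-40% range, i.e., the same order as the device values quoted above.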
Such similar LCEs imply that QDs-silica were comparable in PL QY to pristine QDs, again proving that our silica reaction strategy was highly beneficial in nearly retaining the original QY of pristine QDs. As inferred from Fig. 2, the average inter-QD spacing in the on-chip-packaged devices would be more distant for the QD-silica powder relative to the pristine QD one, so it is expected that quenching of QD emission by means of FRET can be mitigated for the former compared to the latter. 23,33 This speculation may be also supported by PL decay measurements showing average lifetime (τ_avg) values of 48 and 51 ns for the pristine QDs and QDs-silica samples, respectively (Fig. S8†). In general, on-chip-packaged QD-LEDs suffer from light scattering resulting from either aggregation of QDs in the polymeric resin or a mismatch in refractive index (RI) of QDs with the resin, reducing the photon flux outwards. 17,34 In particular, such a gap in RI may be alleviated by forming silica-based QD composites and producing an intermediate RI value between those of QD and resin, depending on their volume fraction. 15,17 Hence, these two factors, i.e., reductions of FRET and light scattering, are likely jointly responsible for the slightly better LCE of the QDs-silica-based device compared to the pristine QDs-based one. QD-LEDs with different loading amounts (i.e., ca. 13 and 25 mg) of QDs-silica powders were additionally fabricated, showing decreases in blue-to-QD spectral ratio and LCE with increasing QDs-silica loading (Fig. S9†). Such an LCE reduction is attributable to a more active light reabsorption from the more concentrated QDs-silica powders packaged in an LED mold. The two comparative QD-LEDs presented in Fig. 4 were subsequently subjected to continuous operation at 60 mA for a prolonged period of time up to 100 h. As compared in the temporal EL spectral evolutions (Fig. 5), only slight reductions of QD emission intensity were observed from the device with QDs-silica, whereas the one with pristine QDs exhibited more progressive degrees of QD emission quenching. As a result, compared to the former device, a more marked EL color change after 100 h of operation was recognizable from the latter one (insets of Fig. 5a and b). Changes of relative QD emission area were also evaluated with operational time, showing 65 and 88% retention of their initial values after 100 h of operation for QD-LEDs without versus with silica embedding, respectively (Fig. 6a). (Fig. 4 caption: comparison of EL spectra of on-chip-packaged QD-LEDs with pristine QDs and QDs-silica collected at an input current of 60 mA. An EL spectrum of the 60 mA-driven bare blue LED was also collected for the calculation of the blue-to-red light conversion efficiency.) Note that these operational stability tests were repeated by taking the same four devices of each QD-LED, and the resulting marginal variations are presented in the error bars. These comparative tests on long-term operational device stability, which are consistent with the earlier ones on QD stability under UV irradiation and 85 °C/85% RH, manifest the efficacy of the silica barrier in protecting QDs against the present degradable LED operational environments, i.e., high blue photon flux and a chip temperature of ca. 50 °C (accompanying the 60 mA LED driving current) in air ambient.
Atmospheric gaseous species are readily able to access QDs-silica due to the loose polymeric structure of the epoxy encapsulant (used for QD packaging) and can further permeate to the QDs via the porous channels of silica (mentioned above), incurring the photochemical degradation of QDs under the high photon flux from the blue LED chip. On that account, photodegradation of QDs even encased in the silica matrix will inevitably occur, but its rate should be substantially slowed down, considering the only 12% loss of the initial QD emission intensity after 100 h of operation. According to the results of Fig. 6a, temporal variations in LCE were calculated, showing 46 → 34% and 48 → 42% after 100 h of operation for QD-LEDs without and with silica embedding, respectively (Fig. 6b). Initial (or 0 h-driving) LEs were higher for the device with QDs-silica (26.3 lm W⁻¹) compared to that with pristine QDs (24.7 lm W⁻¹) (Fig. 6b). Even though there is a marginal mismatch in the blue-to-red EL spectral ratio of those two devices (Fig. 5), this increase in LE is a direct consequence of the enhanced LCE concomitant with the formation of QDs-silica mentioned above. Changes of LE with operational time also followed the same trends as those of relative QD emission intensity or LCE, showing temporal LE reductions of 24.7 → 19.4 lm W⁻¹ and 26.3 → 23.8 lm W⁻¹ from 0 h to 100 h of driving for the devices with pristine QDs and QDs-silica, respectively (Fig. 6b). The above promising results on device efficiency and stability are entirely attributable to the effectiveness of the present silica embedding strategy in not only maximally retaining the QY of QDs but also effectively passivating QDs, which thus convincingly offers a practical means to realize the fabrication of a highly efficient, robust QD-LED system. Conclusions First, we synthesized highly efficient red InP QDs with a composition-gradient ZnSeS intermediate shell and a ZnS outer shell. The silica embedding reaction proceeded by introducing TMOS only to the QD toluene dispersion without additional catalyst and water and subsequently exposing it directly to 70% RH at 30 °C. It turned out that under the present silica reaction conditions the original QY (80%) of QDs was well preserved, showing overall QY values of 75-78% even after extended periods of reaction. Such a nearly complete PL retention, which has not been achievable for InP QDs and even CdSe ones through the archetypal Stöber and RM processes, can be attributed to the chemically mild environment of the present TMOS-based, catalyst-free silica reaction, which effectively prevents the devastation of the QD surface. Two comparative powder samples of pristine QDs versus QD-silica composites were then exposed to UV irradiation and 85 °C/85% RH for prolonged periods of time, commonly exhibiting much higher PL stability for the latter relative to the former as a consequence of the effective physical protection of QDs by the silica barrier against degradable environments. Two types of on-chip-packaged QD-LEDs fabricated with pristine QDs and QDs-silica displayed comparable blue-to-red LCE values of 46 and 48%, respectively, again supporting a nearly full retention of the original QY after silica embedding. The efficacy of silica embedding on the improvement of device stability was testified by continuous operation at 60 mA for up to 100 h.
The QD-LEDs with pristine QDs versus QDs-silica exhibited markedly different degrees in the retention of QD emission and the reduction of LE, i.e., 65 and 88% (in relative QD emission area) and 24.7 → 19.4 lm W⁻¹ and 26.3 → 23.8 lm W⁻¹, respectively, after 100 h of operation.
6,073.2
2018-03-05T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Advancing forensic-based investigation incorporating slime mould search for gene selection of high-dimensional genetic data Modern medicine has produced large genetic datasets of high dimensions through advanced gene sequencing technology, and processing these data is of great significance for clinical decision-making. Gene selection (GS) is an important data preprocessing technique that aims to select a subset of feature information to improve performance and reduce data dimensionality. This study proposes an improved wrapper GS method based on forensic-based investigation (FBI). The method introduces the search mechanism of the slime mould algorithm in the FBI to improve the original FBI; the newly proposed algorithm is named SMA_FBI; then GS is performed by converting the continuous optimizer to a binary version of the optimizer through a transfer function. In order to verify the superiority of SMA_FBI, experiments are first executed on the 30-function test set of CEC2017 and compared with 10 original algorithms and 10 state-of-the-art algorithms. The experimental results show that SMA_FBI is better than other algorithms in terms of finding the optimal solution, convergence speed, and robustness. In addition, BSMA_FBI (binary version of SMA_FBI) is compared with 8 binary algorithms on 18 high-dimensional genetic data from the UCI repository. The results indicate that BSMA_FBI is able to obtain high classification accuracy with fewer features selected in GS applications. Therefore, SMA_FBI is considered an optimization tool with great potential for dealing with global optimization problems, and its binary version, BSMA_FBI, can be used for GS tasks. the examination of three distinct project scheduling challenges.Tolba et al. 51 presented a robust, improved forensic-based investigation (mFBI) optimization method for calculating the most efficient location of distributed generators (DGs) in electricity distribution networks (EDNs) to minimize the loss of power, as well as voltage deviations.Furthermore, hierarchical analysis is employed to derive the most relevant weighting factors for the multi-objective function (MOF).The efficacy of the proposed mFBI technique is validated and demonstrated through an investigation into the impact of DG integration on 118 IEEE EDN nodes and real Delta-Egypt EDN nodes.Chou et al. 52 suggested a forensic-based multi-objective investigation method for the multi-objective engineering optimization problem.Within this algorithm, the population undergoes initialization via chaotic mapping.Subsequently, Lévy flights, two elite groups, and a fixed-size file are employed to regulate the activities of investigators and police officers during offender search and frisking procedures.Simultaneously, a control time mechanism is integrated into MOFBI to harmonize exploration and exploitation, thereby attaining a Paretooptimal solution within the multi-objective search space.Experiments show that MOFBI can approximate the Pareto-optimal frontier more accurately than other algorithms. 
Although many advanced and improved FBIs have been proposed, most of the existing improved algorithms still suffer from the question of slow convergence and the greater probability of falling into local optimal when solving some specific cases.In the original FBI algorithm, two phases are included in the process of criminal investigation: one is the investigation phase, and the other is the tracing phase.The two phases perform independent searches in their respective populations, and trapping into the same local optimal is possible.The search agent generated by the SMA can adaptively go beyond the local optimal and better find the optimal solution through the positive and negative feedback mechanism.This article is based on the original FBI version, and the search phase of the slime mould algorithm has been added to assist with the solution.During the proposed algorithm, the slime mould search mechanism is integrated as an independent inspector group that compensates for the shortcomings of the investigation and pursuit groups.The slime mould search mechanism dynamically adjusts the search patterns according to the probability of the suspect being at the location.When the probability is high, the slime mould search mechanism uses an area-restricted search methodology, which focuses on the identified area.If the probability of the suspect being at the location is initially found to be low, the slime mould search mechanism controls the search to jump out of the current area and look for other locations with a high probability of the suspect being there.Thus, this strategy can significantly increase the convergence speed of the algorithm and the capability of skipping local optimal.A new variant of the FBI, called SMA_FBI, is developed by incorporating the SMA strategy into the original FBI.After that, the binary version of the algorithm, i.e., BSMA_FBI, is obtained utilizing a conversion function, which is applied to the GS problem with high-dimensional data. The remainder of this paper is divided into two sections: "Overview (FBI and SMA)" section describes the original FBI and SMA.In "Proposed SMA_FBI" section, SMA_FBI and the algorithm's time complexity are given a detailed description."Experiments" section gives and analyzes the experimental results."Discussion" section discusses the experiments as well as the results."Conclusion and future work" section summarizes the conclusions and gives some future directions. Overview (FBI and SMA) This section provides a detailed description of the FBI and SMA. Forensic-based investigation (FBI) The forensic-based investigation algorithm was inspired by Chou et al. 
from the investigation-localization-pursuit process of pursuing suspects by police officers involved in criminal investigations. It consists of two phases: the investigation phase (Step A) and the pursuit phase (Step B). The investigation phase is responsible for determining the location interval of the suspect in a general direction, while the pursuit phase requires a detailed search at the suspect location. The search space of the algorithm is defined as all possible suspect locations, with the probability of locating the suspect as a metaphor for the objective function. In the search space, the investigator analyzes and evaluates the collected information to determine the identity and location of suspects, based on which the police can carry out an arrest. During this process, investigators and pursuers shift the direction of the search based on the latest evidence, thus requiring them to coordinate closely with each other throughout the process. The pseudo-code of the FBI appears in Algorithm 1. The flowchart for the FBI is pictured in Fig. 1. The original FBI can be categorized into several imperative steps: Step A1: Interpretation of discovered information. During this step, the investigative team analyzes the collected findings and initially identifies possible suspect locations, which can be inferred based on information related to X_i^A and other suspect points. In this work, each individual is influenced by the others. The new suspected location X_i^A1 is given by Eq. (1). Step A2: Determine the direction of the investigation. To establish the most likely suspect site, the investigator compares the probabilities of the suspect locations with one another. p_i^A indicates the likelihood (objective value) that the suspect is located at position X_i^A, i.e., p_i^A denotes the objective value of the location X_i^A (p_i^A = f_objective(X_i^A)). The investigator evaluates the likelihood of the new suspect position and compares it with that of the current location. The site with the higher probability (objective value) of the suspect's presence is retained, while the other location is discarded. The probability of each location is calculated by Eq. (2), where p_worst is the minimum probability of the existence of a suspect, p_best is the maximum probability, and X_best is the optimal location. The update of the search position is also affected by the other suspicious positions; a randomly selected direction is introduced on the basis of the optimal individual X_best to increase the diversity of the search area and expand the search space. The corresponding position update formula is given by Eq. (3), where X_best indicates the best position obtained in Step A1, r_2 is a random value between 0 and 1, and d, e, f are three random indices indicating three suspicious positions. Step B1: Begin the operation. In this stage, the arresting officer approaches the target location and arrests the suspect based on the best location provided by the investigation team. Each B_i (pursuing officer) approaches the location with the best likelihood and updates its location if the newly approached location yields a better likelihood than that of the old location, as in Eq. (4), where r_3 and r_4 are two random values in the range of 0 and 1 and j = 1, 2, ..., D.
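A compact Python sketch of this two-phase structure is given below. The update rules are illustrative stand-ins written from the verbal description (the exact forms of Eqs. (1)-(6) are in the original FBI paper and are not reproduced here), and the B2 interaction described next is folded into the same pursuit loop for brevity, so the specific arithmetic should be read as an assumption rather than the authors' formulas.

```python
import numpy as np

def fbi_sketch(objective, dim, pop=30, iters=200, lb=-100.0, ub=100.0):
    """Illustrative skeleton of the FBI's investigation (A) and pursuit (B) phases."""
    rng = np.random.default_rng()
    A = rng.uniform(lb, ub, (pop, dim))            # investigation team (suspect locations)
    B = rng.uniform(lb, ub, (pop, dim))            # pursuit team
    fA = np.apply_along_axis(objective, 1, A)
    fB = np.apply_along_axis(objective, 1, B)
    best = min(A[fA.argmin()], B[fB.argmin()], key=objective).copy()

    for _ in range(iters):
        # Step A1/A2: move suspects using information from other suspect points,
        # keeping the move only if the objective (suspect probability) improves.
        for i in range(pop):
            d, e, f = rng.choice(pop, 3, replace=False)
            cand = np.clip(best + A[d] + rng.random() * (A[e] - A[f]), lb, ub)
            if (fc := objective(cand)) < fA[i]:
                A[i], fA[i] = cand, fc
        # Step B1/B2: pursuers close in on the best known location.
        for i in range(pop):
            r3, r4 = rng.random(), rng.random()
            cand = np.clip(r3 * B[i] + r4 * (best - B[i]), lb, ub)
            if (fc := objective(cand)) < fB[i]:
                B[i], fB[i] = cand, fc
        best = min(A[fA.argmin()], B[fB.argmin()], best, key=objective).copy()
    return best, objective(best)

# Toy usage: minimize a sphere function in 5 dimensions
x, fx = fbi_sketch(lambda v: float(np.sum(v ** 2)), dim=5)
print(fx)
```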
Step B2: Real-time position updates based on actions. While the tracking team is taking pursuit actions, it reports new suspect points to the headquarters in real time. The headquarters updates the position and directs the tracking team to approach the suspect point. Each tracking team member works closely and interacts with the others. Agent B_i approaches the target point and at the same time is influenced by a team member B_r. When the likelihood of B_r is greater than the likelihood of B_i, a new suspect location is generated according to Eq. (5); and vice versa, according to Eq. (6). Slime mould algorithm (SMA) Many newly developed algorithms, based on their respective properties and search mechanisms, can balance exploitation and exploration well. SMA is an effective population-based algorithm proposed by Li et al. 28. SMA principally emulates the behavioral and morphological changes of slime moulds when they feed. The algorithm uses weights to model the positive and negative feedback slime moulds generate during the foraging process, resulting in three different morphological types. Previous studies in many application scenarios [53][54][55][56] have demonstrated the superior performance of SMA in exploration and exploitation. Approaching food Slime moulds can approach food through odors in the environment. This contraction pattern of approaching food is modelled by an update rule in which vb is a parameter between −a and a, and vc exhibits a linear decline from one to zero. t represents the current iteration, and X_b indicates the position with the utmost concentration of odor discovered thus far. X refers to the location of the slime mould. X_A and X_B signify two randomly chosen individuals from the slime mould population. W symbolizes the weights assigned to the slime moulds. The parameter p is defined in terms of the fitness values, where i ∈ 1, 2, ..., n, S(i) signifies the fitness of X, and DF indicates the best fitness achieved across all iterations. The range parameter a of vb shrinks as the iterations proceed. W is computed from the ranked fitness values, where "condition" refers to S(i) being ranked within the superior half of the population, r signifies a randomly selected value between 0 and 1, max_t represents the maximum number of iterations, and bF and wF refer to the best and worst fitness, respectively, achieved during the current iteration. smellIndex indicates the sequence of fitness values sorted by rank (ascending for a minimization problem). Wrapping food This section models the shrinking pattern of the venous tissue structure as the slime mould searches for food.
As the concentration of food that the vein is subjected to increases, the intensity of the wave produced by the biological oscillator amplifies, the speed of cytoplasmic flow accelerates, and the thickness of the vein augments. Equation (13) provides a mathematical representation of the positive and negative feedback relationship between the width of the veins of the slime mould and the food concentration, where the parameter r models the uncertainty in the contraction pattern of the veins. Including a logarithmic function slows the change in frequency so that no drastic changes occur in the contraction frequency values. The term "condition" emulates the fact that slime moulds dynamically tune their search pattern according to the concentration of food. In conditions of elevated food concentration, the weight of the neighborhood increases; vice versa, the weight of the vicinity decreases, prompting slime moulds to venture into alternative regions for exploration. In the overall position update, lb and ub denote the lower and upper boundaries of the search range, and rand and r represent random variables taking values inclusively between 0 and 1. rand is used as a key parameter to control whether or not to enter the stochastic (re-initialization) update, and r determines whether to enter the exploration or the exploitation phase. Additionally, to adhere to the original text, the parameter z is specifically assigned the value of 0.03. Oscillation Slime moulds rely heavily on propagating waves produced by a biological oscillator to steer the cytoplasmic flow within the veins toward locations with favorable food concentration. To emulate the alterations in the pulse width of slime moulds, W, vb, and vc are used. W mathematically models the rate of oscillation of slime moulds relative to varying food concentrations. Consequently, this facilitates the slime moulds' ability to swiftly approach regions of higher food quality. Where the food concentration is lower at certain locations, the slime moulds approach the food more slowly, thus improving their capability to select the best food source efficiently. The variable vb exhibits random oscillations between [−a, a], but steadily converges to 0 as the number of iterations increases. Similarly, the variable vc undergoes oscillations within the interval [−1, 1] and eventually converges to 0. A visual representation of this behavior can be observed in Fig. 2 (the trends of vb and vc). The synergy between vb and vc simulates the selection patterns demonstrated by slime moulds. The slime moulds will explore some areas independently to find better food sources. The slime mould will branch out to search for better food sources instead of concentrating on one food source. This strategy ensures that the slime mould algorithm will not easily fall into a local optimum. Proposed SMA_FBI In this section, we provide a detailed description of the improved FBI and the time complexity of the algorithm, in conjunction with the previous section.
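To make the verbal description of the SMA search mechanism more concrete, the sketch below implements one position update per individual in the commonly cited form of the algorithm (p = tanh|S(i) − DF|, a logarithmic weight W, and a z = 0.03 re-initialization probability). It is a hedged reconstruction: the exact equations and their numbering in this paper are not reproduced above, so treat the formulas here as the standard SMA forms rather than a verbatim copy.

```python
import numpy as np

def sma_step(X, fitness, X_best, best_fit, t, max_t, lb, ub, z=0.03, rng=None):
    """One illustrative SMA update for a population X (n x dim), minimization."""
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    order = np.argsort(fitness)                        # smellIndex: ascending fitness
    bF, wF = fitness[order[0]], fitness[order[-1]]
    denom = (bF - wF) if bF != wF else 1e-12
    W = np.empty(n)                                    # logarithmic weights: >1 for the better half
    for rank, idx in enumerate(order):
        term = np.log10((bF - fitness[idx]) / denom + 1.0)
        W[idx] = 1 + rng.random() * term if rank < n // 2 else 1 - rng.random() * term
    frac = float(np.clip(1.0 - t / max_t, 0.0, 1.0 - 1e-12))
    a = np.arctanh(frac)                               # shrinking range of vb
    X_new = np.empty_like(X)
    for i in range(n):
        if rng.random() < z:                           # occasional random restart
            X_new[i] = rng.uniform(lb, ub, dim)
            continue
        p = np.tanh(abs(fitness[i] - best_fit))
        if rng.random() < p:                           # approach food around X_best
            vb = rng.uniform(-a, a, dim)
            A, B = rng.choice(n, 2, replace=False)
            X_new[i] = X_best + vb * (W[i] * X[A] - X[B])
        else:                                          # wrap/contract around itself
            vc = rng.uniform(-frac, frac, dim)
            X_new[i] = vc * X[i]
    return np.clip(X_new, lb, ub)
```

In SMA_FBI this update would play the role of the inspector group described in the next section, running alongside the investigation and pursuit teams.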
Enhanced FBI with the SMA (SMA_FBI) Except for the population size and the evaluation of stopping conditions, the base FBI does not rely on predefined parameters, so parameter settings do not affect the algorithm's behavior. At the beginning of the FBI algorithm, the population is replicated into two parts, and each part independently searches for the optimal solution, exchanging information only through the current optimal position and the best fitness value. However, this leads to barriers to communication within the algorithm, and the two parts do not exchange other information. Kaveh et al. 47 improved the FBI by enhancing the exchange of information between the two parts. Instead of attempting to enhance communication, we add an inspector group alongside the investigating and pursuing groups, into which we introduce the search mechanism from the SMA. When searching for food, slime moulds can adaptively tune their search pattern according to varying levels of food concentration. In conditions of elevated food concentration, the slime moulds concentrate their search on the currently recognized food sources; if they find a low food concentration, the slime moulds depart from the food source and search for other food concentrations. Throughout this process, the front end of the slime mould extends out and is able to build a network of veins in the search space, and the quality of the food source affects the propensity of the slime mould to search. Using this property of slime moulds in the FBI, an inspector group can be constructed in addition to the original investigation group and pursuit group, which can make up for the deficiencies of the first two stages and enhance the capacity of the FBI. The flowchart of SMA_FBI is displayed in Fig. 3. Computational complexity analysis The time complexity of SMA_FBI is primarily associated with the dimension (D), the number of suspect locations (NP), and the number of evaluations (GEN). Overall, the time complexity is calculated from four aspects: initialization, fitness assessment, location update, and the slime mould update strategy. For the initialization of the suspect locations, the time complexity is O(D × NP); the time cost of the fitness assessment is O(NP); the location updating part includes the investigation phase as well as the pursuit phase, and the time complexity of each phase is 2 × O(NP × D); and the time consumption of the slime mould updating strategy is O(NP × (1 + log NP + D)). Considering the total number of evaluations GEN, the total time complexity of SMA_FBI is therefore O(D × NP) + GEN × (O(NP) + 4 × O(NP × D) + O(NP × (1 + log NP + D))), i.e., O(GEN × NP × (D + log NP)). Experiments In order to assess the efficacy of the proposed algorithm SMA_FBI, a substantial quantity of experiments is undertaken in this section. Firstly, SMA_FBI is compared with 10 other original and 10 improved algorithms on the 30 benchmark functions of CEC2017 57. Benchmark datasets serve as widely acknowledged instruments for assessing the performance of various technologies against uniform criteria 58,59. These datasets facilitate the evaluation of different technological dimensions, determining which technology excels over others across multiple domains 60,61. Secondly, the algorithms are tested with different numbers of evaluations as well as different population sizes while controlling for other variables, and the complexity of SMA_FBI is also investigated and explained. Finally, the effectiveness of SMA_FBI in practical applications is tested on the GS datasets.
To emphasize the impartiality of the experiments, all comparison algorithms undergo testing within the same hardware environment.Within the continuous optimization experiments, metaheuristic algorithm parameters are configured with a population size set at 30, and a maximum of 300,000 assessments.At the same time, to mitigate the influence of randomness on the experiments, all the algorithms are repeated on the test function for 30 times.Based on the experimental data, the capability of the comparison algorithms was evaluated using the mean ( avg. ) and standard deviation ( std.) of the optimal function values.The best results in the data are shown in bold.The nonparametric statistical test Wilcoxon signed-rank test 62 was utilized to ascertain whether SMA_FBI exhibits statistical superiority over other algorithms, with a significance level set at 0.05.The symbols " + / = /−" denote the proposed algorithm's superiority, equality, or inferiority to the other algorithms.Consistency was assessed using the Friedman test 63 to rank the mean experimental results and list the average ranked value (ARV).Accurate validation of any proposed model or algorithm must be done based on known parameters and settings.In the next experiments, all the parameter settings of the compared algorithms will be listed separately. The experimental results of SMA_FBI in terms of function optimization are shown and analyzed in this subsection.Thirty functions of CEC2017 are selected as test functions, and the specifics of these functions can be found in Appendix A.1.Within this selection, F1-F3 represent unimodal functions, F4-F10 are associated with multimodal functions, F11-F20 pertain to hybrid functions, and F21-F30 are linked to composite functions.The unimodal function contains a single global optimal solution, which serves as a means to assess the algorithm's exploitation capabilities.Meanwhile, the multimodal function has multiple locally optimal solutions and is employed to evaluate the algorithm's capacity for global exploration.The hybrid and composite functions gauge the algorithm's equilibrium between exploitation and exploration. All assessments were conducted on a Windows Server 2012 R2 datacenter operating system equipped with 128 GB of memory, utilizing an Intel (R) Xeon (R) E5-2650 v4 (2.20 GHz) CPU, within a MATLAB R2014b programming environment. In "Parameter sensitivity analysis" section, experiments are analyzed for different evaluation numbers as well as population size."Comparison with conventional algorithms" and "Comparison with state of the art algorithms" sections entail comparisons of SMA_FBI with 10 original algorithms and 10 enhanced algorithms, respectively, aimed at substantiating SMA_FBI's performance in addressing exploration and exploitation in the context of CEC 2017.Furthermore, in "Experiments on real world optimization of GS" section, SMA_FBI is used to handle the GS problem for a dataset from the UCI database. Parameter sensitivity analysis In order to enhance the analysis of the algorithm's parameter sensitivity, the impact of population size and the number of evaluations on the algorithm is examined by manipulating individual parameters while holding other variables constant. 
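The statistical protocol described here (pairwise Wilcoxon signed-rank tests at the 0.05 level plus Friedman average ranks, ARV) can be reproduced with SciPy; a minimal sketch with made-up per-function results is shown below. The score arrays are illustrative stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(1)
# Made-up mean errors of three optimizers on 30 benchmark functions
sma_fbi = rng.lognormal(mean=0.0, sigma=0.5, size=30)
fbi = sma_fbi * rng.uniform(1.0, 1.6, size=30)       # pretend the baseline is worse
other = sma_fbi * rng.uniform(0.9, 1.4, size=30)

# Pairwise Wilcoxon signed-rank test: is SMA_FBI significantly different from FBI?
stat, p = wilcoxon(sma_fbi, fbi)
print(f"Wilcoxon vs FBI: p = {p:.4f} -> {'+' if p < 0.05 else '='}")

# Friedman test across all algorithms, followed by average ranks (ARV, lower is better)
print("Friedman p =", friedmanchisquare(sma_fbi, fbi, other).pvalue)
ranks = np.argsort(np.argsort(np.vstack([sma_fbi, fbi, other]), axis=0), axis=0) + 1
print("ARV:", ranks.mean(axis=1))
```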
During this phase of the experiment, in order to reflect the comprehensiveness of the experiment, four different functions, namely, unimodal function F3, multimodal function F7, hybrid function F14, and composite function F29, were selected for verification.The population sizes were set to 10, 30, 60, 100 and 200 to study the influence of population size on the algorithm's performance.Based on the findings presented in Appendix A.2, the optimization effectiveness of SMA_FBI generally surpasses that of FBI.Moreover, when the population size is 30, the optimization effect of SMA_FBI reaches the optimal value, while the optimization effect of SMA_FBI is relatively poor when the population size deviates from 30. Another pivotal factor influencing the experimental results is the number of algorithm evaluations.We selected five evaluation times, 50,000, 100,000, 150,000, 200,000, and 300,000, to investigate the effect of evaluation times on the property of SMA_FBI.Similarly, the validation is carried out in four functions: unimodal function F3, multimodal function F7, hybrid function F14, and composite function F29.From Appendix A.3, it can be seen that SMA_FBI has achieved the optimal value on the composite function before 50,000 evaluations.For the multimodal and hybrid functions, the optimal value is already close to the optimal value at 200,000 evaluations, but only at 300,000 evaluations can all the functions take the optimal value.In conclusion, we chose 300,000 evaluations. Comparison with state-of-the-art algorithms Within this experiment phase, the same CEC2017 benchmark function test set has been chosen to evaluate the capability of SMA_FBI in correlation to 10 state-of-the-art algorithms, namely EPSDE 68 , ALCPSO 69 , BMWOA 70 , CLPSO 71 , IGWO 72 , CESCA 73 , RDWOA 74 , LSHADE 75 , CBA 76 , and DECLS 77 .These 10 algorithms contain improved versions of various algorithms, especially of DE, PSO.EPSDE and LSHADE are two champion algorithms that have performed well in the field of evolutionary algorithms, and the superior performance of the proposed algorithm can be verified by comparing it with ten algorithms including these two.Table 2 shows the detailed parameter settings of the algorithms mentioned above.Appendix A.6 shows the comparison outcomes between SMA_FBI and the above advanced algorithms. As can be noted from the summed rankings in Appendix A.6, SMA_FBI is still number one, even in the face of competition from the most highly acclaimed algorithms.In some of the previous functions of CEC2017, SMA_FBI did not achieve the best result compared to the champion algorithm, but it took the better solution.Furthermore, SMA_FBI achieves the optimal solution on most of the composite functions, i.e., F23, F25-30, and the sub-optimal solution on F24, and the std is 0 on all these functions, which indicates that SMA_FBI is more stable as well as robust on the composite functions.It shows that introducing the search mechanism of the SMA makes the algorithm more balanced between exploitation and exploration.The results of the Wilcoxon signed-rank test, comparing SMA_FBI with other state-of-the-art algorithms, are depicted in Appendix A.7. From the table in Appendix, it can be observed that in the experiments of SMA_FBI with BMWOA, IGWO, CESCA, CBA, the p-value is much less than 0.05, which proves that SMA_FBI outperforms these algorithms.Meanwhile, compared with other algorithms, most are also less than 0.05, which shows that SMA_FBI has apparent advantages over them. As shown in Fig. 
5, SMA_FBI also shows competitive performance compared to the state-of-the-art and improved algorithms, proving that SMA_FBI is more competitive. The comparison algorithms also contain some improved variants of the DE and PSO algorithms, which further verifies that the introduction of the slime mould mechanism serves as a significant enhancement to the FBI. In summary, in the face of competition from the challenging state-of-the-art algorithms, the optimization ability of SMA_FBI is reflected in the overall optimization performance on different types of functions, especially the composite and hybrid functions. The slime mould search mechanism, as the third search scheme in the improved algorithm, enhances the algorithm's exploration and search capability as a whole. Experiments on real-world optimization of GS In this section, we employ the proposed algorithm SMA_FBI to address the GS problem and showcase the improved algorithm's effectiveness. Because the GS problem is a binary optimization task, we adapt the continuous SMA_FBI into a discrete variant, i.e., BSMA_FBI, to solve the high-dimensional GS problem. Basic information The GS problem requires selecting a most representative subset from a collection of features for the purpose of dimensionality reduction of a dataset. GS can effectively reduce the computational cost of processing the data, so many domains with large datasets wish to downsize their application data. In the SMA_FBI-based GS algorithm, x_i = (x_i,1, x_i,2, ..., x_i,n) represents a candidate feature subset; if x_i,j = 1, the j-th feature is selected; otherwise, the feature is not selected. GS represents a discrete optimization problem; therefore, converting the SMA_FBI algorithm to a binary version is necessary. We utilize a transfer function to convert continuous SMA_FBI to binary SMA_FBI (BSMA_FBI). A machine learning algorithm is employed as the classifier, and its classification accuracy is utilized to evaluate the ability of BSMA_FBI to screen important features in the dataset. In addition, during the evaluation process, cross-validation is employed to assess the optimal subset of features used for classification, to avoid the impact of random elements on the experiment.
Fitness function and implementation of experiments In the previous work on continuous optimization, the proposed SMA_FBI searches for optimal solutions in a continuous search space. The GS problem, however, is a binary problem: it requires that the solution be binary, i.e., each component can only take the value 0 or 1. Many optimization algorithms are inherently designed for continuous spaces. Therefore, we need a way to convert the outputs of these continuous optimizers to binary values to satisfy the requirements of the problem. The transfer function (or threshold function) is the key to this conversion. The basic idea is to set a threshold value for the output of the continuous optimizer, and then convert the output to 0 or 1 according to this threshold, with 1 indicating selected and 0 indicating unselected. By adjusting the threshold, we can control the stringency of gene selection. A higher threshold will result in fewer genes being selected, while a lower threshold may result in more genes being selected. Here we choose a threshold of 0.5: values above the threshold are mapped to 1 and values at or below it to 0, where X_i^j denotes the value of the i-th search agent in the j-th dimension of the discrete space. The transfer function is a proper translator that converts a continuous optimization algorithm into a discrete variant without altering the structure of the algorithm, which is convenient and efficient. Within this paper, a V-type transfer function is employed for this mapping. GS is a process of simultaneously obtaining the lowest classification error rate while employing the smallest subset of features. Evidently, the GS problem presents itself as a multi-objective optimization challenge, and to satisfy each objective, a fitness function can be designed using the classification error rate and the number of selected features to evaluate the chosen feature subset. The fitness function takes the form fitness = a × error + b × (l/d), where error represents the classification error rate computed by the K-Nearest Neighbor (KNN) 78 classifier, l signifies the size of the selected feature subset, and d is the total number of features in the dataset. Meanwhile, a and b serve as two weighting factors indicating the significance of the classification error and the subset length, respectively, to the GS problem. Our study asserts that the classification error rate deserves more attention than the feature subset length. Thus, we assign a to be 0.95 and b to be 1 − a, i.e., 0.05. Each feature subset is evaluated based on fitness, with smaller fitness values indicating superior feature subsets.
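The sketch below puts the binarization and fitness evaluation just described into code. The |tanh| form of the V-type transfer function and the error_fn placeholder are assumptions for illustration (the paper does not spell out which V-shaped curve it uses); the 0.5 threshold and the a = 0.95 / b = 0.05 weighting follow the text.

```python
import numpy as np

def v_transfer(x):
    """Assumed V-shaped transfer function; maps a continuous value into (0, 1)."""
    return np.abs(np.tanh(x))

def binarize(position, threshold=0.5):
    """Convert a continuous search-agent position into a 0/1 feature mask."""
    return (v_transfer(position) > threshold).astype(int)

def gs_fitness(mask, error_fn, a=0.95, b=0.05):
    """fitness = a * classification error + b * (selected features / total features)."""
    l, d = int(mask.sum()), mask.size
    if l == 0:                      # an empty subset cannot be classified
        return 1.0
    return a * error_fn(mask) + b * (l / d)

# Toy usage with a dummy error function (stands in for the KNN error described below)
rng = np.random.default_rng(0)
mask = binarize(rng.normal(size=50))
print(mask.sum(), gs_fitness(mask, error_fn=lambda m: 0.10))
```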
Experimental results and analysis of FS The SMA_FBI-based GS method, which we refer to as BSMA_FBI, is compared with several state-of-the-art GS methods on 18 datasets, including bGWO 79, BBA 80, BGSA 81, BPSO 82, bALO 83, BSSA 84, bHHO 85, and the original GS algorithm based on the FBI, BFBI. These comparison algorithms are classical algorithms that are commonly used in comparison experiments. They include many different kinds of algorithms, such as nature-inspired algorithms, algorithms inspired by physical phenomena, and so on. Table 3 lists the detailed parameters of these algorithms. GS based on the SMA_FBI algorithm is performed on each dataset and is run N times, and tenfold cross-validation is performed each time. In the cross-validation procedure, the data samples are partitioned into training, validation, and test sets according to a certain ratio. In this paper, the KNN classifier is used for classification. The classifier initially undergoes training and classification on all the data within the training set, is subsequently assessed and validated against the samples in the validation set, and ultimately applies the chosen features to the test data to ascertain the computational accuracy. Table 4 lists the details of the 18 datasets from the UCI repository, including the number of instances, features, and categories. As can be observed from the table, these datasets have 32-6598 samples, 23-15,010 features, and 2-26 classes. These datasets essentially represent different types of data, containing both small high-dimensional samples and large low-dimensional samples, which challenges the performance of the algorithm. Appendix A.8-Appendix A.11 report the statistical findings of the means of the number of features selected, error rates, fitness values, and computation time. The bolded values represent the most favorable outcomes in the present comparison. Examination of Appendix A.8 distinctly illustrates that the proposed BSMA_FBI selects the fewest features across nearly all datasets and achieves the second-fewest number of features on the Parkinson and Lungcancer_3class datasets. By comparing the data of BSMA_FBI with BFBI, we can also find that our improvement of the FBI is very effective, and our proposed algorithm selects fewer features and obtains better results than the original algorithm. The ARV metric shows the ranking results of the various algorithms across multiple datasets, and there is no doubt that BSMA_FBI is ranked first. This shows that BSMA_FBI is competitive in selecting the fewest features. According to the ARV comparison results in Appendix A.9, as a whole, BSMA_FBI has not achieved the optimal result, but it has achieved a suboptimal ranking, with an average error that is only slightly higher than that of bGWO, and the average error value of BSMA_FBI is noticeably lower than that of BFBI. The proposed algorithm achieves the lowest average error on more than half of the datasets and has the smallest standard deviations, many of which are even 0. This indicates that the proposed algorithm is very stable, which also proves the algorithm's superior behavior. Of course, we can also see that BSMA_FBI achieves relatively poor results on some datasets, especially Tumors_9, Tumors_11, and Tumors_14, and we speculate that this may be because these three datasets contain too many categories, which leads to only average performance of the algorithm on them.
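As a companion to the fitness sketch above, the snippet below shows how the error term can be obtained for a candidate feature mask with a KNN classifier and tenfold cross-validation, in the spirit of the evaluation protocol described here. It is a simplified sketch using scikit-learn; the value of k, the use of cross_val_score in place of an explicit train/validation/test split, and the synthetic data are assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_cv_error(mask, X, y, k=5, folds=10):
    """Classification error of a KNN classifier restricted to the selected features."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X[:, selected], y, cv=folds)
    return 1.0 - scores.mean()

# Toy usage on synthetic data standing in for a UCI gene-expression dataset
X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)
mask = np.zeros(100, dtype=int)
mask[:10] = 1                         # pretend the optimizer selected 10 features
print(f"CV error with selected features: {knn_cv_error(mask, X, y):.3f}")
```

In the earlier gs_fitness sketch, error_fn would simply be a closure over knn_cv_error with the dataset in scope.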
Appendix A.10 reports the fitness values for the algorithm comparison, i.e., the weighted combination of the error rate and the number of selected features. The data largely follow the trend in Appendix A.9, although BSMA_FBI achieves the best results once the number of features is included as a factor. The ARV results show that BSMA_FBI outperforms bGWO, and both significantly outperform the other optimizers, with BFBI giving the worst results. This shows that introducing the slime mould mechanism improves the effectiveness of the original algorithm in searching for suitable features in the feature space. Based on the average computation times in Appendix A.11, it can be observed that although the BSMA_FBI algorithm has a high computation time, it is still faster than BFBI, which further demonstrates the value of the improvement in reducing the time cost.

Tables 5 and 6 show the Wilcoxon signed-rank test results of BSMA_FBI against the other GS optimizers in terms of classification error and number of selected features, respectively. From Table 5, it can be seen that there is no clear significant difference between BSMA_FBI and the other GS optimizers in terms of classification error; only in a few tests is the p-value less than 0.05. However, Table 6 shows that there is a significant difference between BSMA_FBI and the other gene selection optimizers in terms of the number of selected features. This indicates that BSMA_FBI has a significant advantage over these algorithms in that respect.

The figures are more intuitive than the data in the tables. Figures 6 and 7 show, as convergence curves, the best fitness values reached by the different algorithms during the optimization process; the horizontal axis of each plot is the iteration count and the vertical axis the best fitness value found so far.

Discussion

This section summarizes the experimental findings of the proposed SMA_FBI on continuous function optimization and on GS problems, and provides a detailed analysis of the algorithms involved as well as of the experimental results. The experimental part in the "Experiments" section can be divided into three aspects: (1) comparative experiments on the population size and on the number of algorithm evaluations in function optimization, to find the most suitable population size and number of evaluations; (2) on the CEC 2017 benchmark set, verification that introducing the slime mould search mechanism into the FBI is justified, by comparison with the base algorithm and with state-of-the-art algorithms, demonstrating the overall superiority of SMA_FBI; (3) application of SMA_FBI to high-dimensional GS optimization problems built from UCI data, to demonstrate the algorithm's ability to effectively reduce the dimensionality of high-dimensional data and to address discrete combinatorial optimization challenges.

From the perspective of function optimization, it can be seen from Appendix A.4, Appendix A.5, and Fig. 4 that the algorithm obtained by integrating the slime mould mechanism is superior to the FBI and has stronger optimization ability. Moreover, SMA_FBI holds a clear advantage whether it is compared with the classical DE and PSO or with the novel SMA.
In addition, from the data in Appendix A.6 and Appendix A.7 and the curves in Fig. 5, we can compare SMA_FBI with a variety of improved algorithms, including several champion algorithms (EPSDE, LSHADE) as well as other state-of-the-art and improved algorithms (e.g., ALCPSO, DECLS). SMA_FBI significantly outperforms these state-of-the-art algorithms. At the same time, SMA_FBI is not optimal on some problems, especially the hybrid functions; this weakness is not obvious in the comparison with the basic algorithms but becomes apparent in the comparison with the state-of-the-art algorithms.

In discrete combinatorial optimization, SMA_FBI achieves satisfactory results on GS problems. We evaluated BSMA_FBI (the binary version of the algorithm) together with several GS optimizers on 18 datasets from the UCI repository covering different types of data. Appendix A.8-Appendix A.11 quantitatively analyze the performance of the algorithms in four respects: the number of selected features, classification error, fitness value, and time cost. BSMA_FBI clearly surpasses the other optimization techniques, maintaining high classification accuracy while selecting fewer features. It can also be seen that the BFBI and BSMA_FBI algorithms occupy the bottom two positions in terms of time cost, while bGWO is effective but also has a high time complexity. Given the significant reduction in time cost compared with the original algorithm, the performance of BSMA_FBI is nevertheless satisfactory. In addition, BSMA_FBI is more effective on high-dimensional small-sample data but less effective on low-dimensional large-sample datasets and multi-class data, which points to a direction for future improvement. Figures 6 and 7 also show that BSMA_FBI reaches higher classification accuracy and converges faster than its counterparts. BSMA_FBI is therefore a promising approach for discrete combinatorial optimization challenges in GS.

In brief, this article discusses the SMA_FBI algorithm, which incorporates a slime mould search mechanism into the original FBI to achieve improved performance. Comparison with other strong algorithms on function optimization problems shows that SMA_FBI has a clear advantage in enhancing population diversity and convergence. In addition, comparison with other GS methods on the GS problem verifies that BSMA_FBI can obtain higher classification accuracy while selecting fewer features. A remaining problem is that BSMA_FBI has a high time cost when performing GS, which is an optimization direction we will consider in future work. Overall, SMA_FBI shows good prospects for addressing diverse optimization and GS problems.
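As a companion to the Wilcoxon signed-rank comparisons summarized in Tables 5 and 6 above, the following sketch shows how such a pairwise test can be run with SciPy. The per-dataset error arrays are placeholder values, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# placeholder per-dataset mean error rates (one entry per dataset)
errors_bsma_fbi = np.array([0.02, 0.05, 0.11, 0.04, 0.08, 0.03])
errors_rival    = np.array([0.03, 0.06, 0.10, 0.07, 0.09, 0.05])

# paired, two-sided Wilcoxon signed-rank test across datasets
stat, p_value = wilcoxon(errors_bsma_fbi, errors_rival)
print(f"W = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference between the two methods
```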
Improving the accuracy and efficiency of gene selection plays a crucial role in medical diagnosis and personalized therapy and has a profound impact on drug discovery and individualized treatment. By improving the accuracy of gene selection, we can more precisely identify genetic variants associated with diseases and thus diagnose them more accurately. This helps to avoid misdiagnosis and underdiagnosis and to provide patients with more precise and personalized treatment plans. In drug development, accurately selecting the relevant genes allows the mechanism of action of drugs to be studied in greater depth, accelerating development, reducing its cost, and improving its efficiency, thereby supporting medical diagnosis and personalized treatment.

Figure 5. Convergence curves of SMA_FBI and ten state-of-the-art algorithms on twelve functions.
Figure 6. Convergence plots for BSMA_FBI and alternative binary metaheuristic algorithms across 9 datasets.
Figure 7. Convergence plots for BSMA_FBI and alternative binary metaheuristic algorithms across 9 datasets.
Table 1. The specific configuration of parameters.
Table 2. The detailed parameter settings.
Table 3. Parameter settings for the classifiers.
Table 4. Characteristics of gene expression datasets.
Table 5. The p-value of the Wilcoxon test between BSMA_FBI and alternative GS optimizers on average error rate.
Table 6. The p-value of the Wilcoxon test between BSMA_FBI and alternative GS optimizers on average number of selected features.
8,684.2
2024-04-13T00:00:00.000
[ "Computer Science", "Biology" ]
Impact of attenuation correction of radiotherapy hardware for positron emission tomography-magnetic resonance in ano-rectal radiotherapy patients

Abstract

Background: Positron Emission Tomography-Magnetic Resonance (PET-MR) scanners could improve ano-rectal radiotherapy planning through improved Gross Tumour Volume (GTV) delineation and by enabling dose painting strategies based on metabolic measurements. This requires accurate quantitative PET images acquired in the radiotherapy treatment position.

Purpose: This study aimed to evaluate the impact on GTV delineation and metabolic parameter measurement of using novel Attenuation Correction (AC) maps that included the radiotherapy flat couch, coil bridge and anterior coil, to determine whether they are necessary.

Methods: Seventeen ano-rectal radiotherapy patients received an 18F-FluoroDeoxyGlucose PET-MR scan in the radiotherapy position. PET images were reconstructed without (CTACstd) and with (CTACcba) the radiotherapy hardware included. Both AC maps used the same Computed Tomography image for patient AC. Semi-manual and threshold GTVs were delineated on both PET images, the volumes compared and the Dice coefficient calculated. The metabolic parameters, the Standardized Uptake Values SUVmax and SUVmean and the Total Lesion Glycolysis (TLG), were compared using paired t-tests with a Bonferroni-corrected significance level of p = 0.05/8 = 0.006.

Results: Differences in semi-manual GTV volumes between CTACcba and CTACstd were approaching statistical significance (difference −15.9% ± 1.6%, p = 0.007), with larger differences in low FDG-avid tumours (SUVmean < 8.5 g mL−1). The CTACcba and CTACstd GTVs were concordant, with Dice coefficients 0.89 ± 0.01 (manual) and 0.98 ± 0.00 (threshold). Metabolic parameters were significantly different, with SUVmax, SUVmean and TLG differences of −11.5% ± 0.3% (p < 0.001), −11.6% ± 0.3% (p < 0.001) and −13.7% ± 0.6% (p = 0.003) respectively. The TLG difference resulted in 1/8 rectal cancer patients changing prognosis group, based on literature TLG cut-offs, when using CTACcba rather than CTACstd.

Conclusions: This study suggests that using AC maps with the radiotherapy hardware included is feasible for patient imaging. The impact on tumour delineation was mixed and needs to be evaluated in larger cohorts. However, using AC of the radiotherapy hardware is important where accurate metabolic measurements are required, such as for dose painting and treatment prognostication.

INTRODUCTION

Positron Emission Tomography-Magnetic Resonance (PET-MR) scanners have great potential for pelvic radiotherapy planning through high-quality MR anatomical and functional imaging combined with simultaneous PET molecular information. [1] This can be used for more accurate delineation of the Gross Tumour Volume (GTV), [2,3] delineation of tumour sub-volumes for radiotherapy dose painting [4] and/or as a prognostic tool to identify poorer-prognosis patients for dose escalation. [5] For anal cancers, 18F-FluoroDeoxyGlucose (FDG)-PET has demonstrated significantly smaller GTVs compared to CT [2] and good correspondence with MR. [3] A study in rectum cancer patients showed reduced interobserver variability for tumour delineations on 18F-FDG-PET-CT compared to CT alone.
[6]PET imaging also has good potential for automatic delineation methods utilising the semiquantitative metric Standard Uptake Value (SUV), [7] with automatic methods showing good agreement with manual contours [8] and better agreement with pathological analysis than CT or MR. [9]PET derived metabolic parameters such as the maximum SUV within a tumour (SUV max ) and Total Lesion Glycolysis (TLG) have also shown promise as prognostic factors for rectal cancers. [5,10]igh quality PET imaging is required for accurate GTV delineation and accurate PET SUVs are essential for radiotherapy dose painting and patient prognostics.High quality, quantitative PET imaging requires accurate attenuation correction (AC) of all objects traversed by the annihilation photons. [11]However, images used for radiotherapy planning need to be acquired in the radiotherapy position, which requires dedicated radiotherapy hardware such as a flat couch-top and coil bridges. [12]For PET-MR this is challenging since the radiotherapy hardware will non-uniformly attenuate the PET signal [13] and will not be visible in the MR images.In addition, the flexible anterior MR coil essential for acquiring high quality MR images also has a substantial and non-uniform PET attenuation. [14]Previously, a phantom study has demonstrated a reduction in PET-MR image quality from acquiring images in the radiotherapy position with a PET activity loss of −17.7% ± 0.1%,which was greater than the −8.3% ± 0.2% activity loss caused by the anterior MR coil alone. [15]The flexible shape and variable position of the anterior coil makes accounting for it in PET AC maps difficult, so in routine diagnos-tic use it is ignored. [14]However, an advantage of the radiotherapy position is that the use of the coil bridge means the anterior coil shape is fixed and coil position is known.The prior phantom study utilised this to develop a AC map of the radiotherapy hardware and MR anterior coil. [15]Evaluation of this method on a phantom demonstrated reduction in the PET activity loss to −2.7% ± 0.1% compared to phantom measurements made without radiotherapy hardware or anterior coil.The aim of this study was to test the feasibility of using these AC maps in ano-rectal radiotherapy patients and to determine the impact on GTV delineation and SUV measurements to see if using these AC maps was necessary.All patients received a simultaneous PET-MR scan on a SIGNA PET/MR 3T scanner (version MP26 GE Healthcare, Waukesha, USA) after their radiotherapy planning CT scan and before their first treatment fraction.Patients were scanned in the radiotherapy treatment position on a flat couch-top with a coil bridge for the anterior MR coil as shown in Figure 1. 
[15]Patients were positioned to match their radiotherapy planning CT scan using a combined customisable foot and knee rest (Civco) and external lasers matched to patient tattoos.Immediately prior to entering the scan room patients emptied their bladder and drank 400 mL of water.The PET acquisition started 20 min (median, range 15-37 min) after patient drinking.The PET images were acquired 70 min (median, range 60-86 min) after injection with 3.5 MBq kg −1 ± 10% of 18 F − FDG (one patient received 1.7 MBq kg −1 ).All patients had fasted for 6 h prior to injection and had a measured blood glucose concentration of < 10 mmol L −1 .The PET acquisition consisted of one 5 min bed position with the patient tumour centred in the PET field of view.Images were reconstructed using a Bayesian penalized-likelihood iterative image reconstruction (Q.Clear) with a relative noise regularizing term factor of = 350 [16] with point spread function correction and time of flight information. Patient data collection MR images were acquired using the automatic Dixon sequence used for the scanner-generated PET AC maps.This was a 3D sequence with a voxel size of 2.0 × 2.0 × 2.6 mm 3 and a field of view 500 × 500 × 312 mm 3 .The images were acquired with a repetition time 4.05 ms, echo times 2.232 ms (in-phase) and 1.116 ms (out-phase) and a receive bandwidth of 1302 Hzpixel −1 .An additional 3D T2-weighted turbo spin echo sequence was acquired as an anatomical reference for the PET image.This had a voxel size of 1.0 × 1.0 × 2.0 mm 3 , field of view 380 × 304 × 360 mm 3 , repetition time 2000 ms, echo time 148 ms and a receive bandwidth of 658 Hzpixel −1 . All patients received contrast-enhanced planning CT scans (Sensation Open, Siemens, Erlangen, Germany) in the radiotherapy position.The CT images had a voxel size of 1.1 × 1.1 × 3 mm 3 and a tube voltage of V = 120 kVp.Patients were imaged following routine bladder preparation consisting of an empty bladder 30 min prior to the scan, followed by drinking 400 ml of water, and bowel preparation consisting of the application of a micro-enema 60 min prior to the scan followed by bowel emptying. Attenuation correction maps AC maps can be divided into two components: a map of the patient and a map of all hardware components within the PET lines or response.For the purposes of this study what was used for the patient map did not matter as long as it was consistent between all PET images.We decided to use the patient CT acquired in the same radiotherapy position as the PET-MR since CT is the gold standard source of patient AC.The CT was rigidly registered to the in-phase MR image in RayStation (v9B,RaySearch Laboratories,Stockholm,Sweden).The external contour of the in-phase MR was automatically delineated using RayStation's function, and manually modified where necessary.The registered CT was cropped to the MR external contour, with any tissue outside the CT external contour but inside the MR external contour set to water density.Any air within the patient was automatically delineated and set to water density.Two different hardware AC maps were used,each with the CT patient map: CTAC std and CTAC cba .CTAC std was automatically generated by the scanner and included the MR spine coil components within the scanner bed.CTAC cba was the same as CTAC std but with the manual addition of a model of the radiotherapy couch placed abutting the patient posterior edge and a model of the coil bridge and anterior coil, as described in Wyatt et al. 
[15] The coil bridge and anterior coil model was placed in the patient right-left and anteriorposterior directions using the measured distances to the radiotherapy couch.The inferior-superior position was calculated through landmarking the scanner table to the centre of the coil bridge and using the scanner table Attenuation correction maps for an example patient. position during the PET acquisition, accessible through the private DICOM tag 'PET_table_z_position'.Examples of the three attenuation maps are shown in Figure 2. CTAC std would be the hardware AC map produced directly by the scanner without modification whereas CTAC cba would include all hardware within the PET lines of response.This study aimed to assess whether the improvement in PET accuracy from using CTAC cba would result in clinically significant differences in GTV delineation and SUV measurements or whether CTAC std was accurate enough for radiotherapy purposes. Tumour delineation The CTAC std and CTAC cba PET images were independently contoured at least 7 weeks apart by an experienced consultant PET radiologist using RayStation.The image was automatically thresholded using a fixed SUV = 2.5 g mL −1 [8] and the resultant volume manually adjusted by the radiologist as appropriate to represent a gross tumour volume (GTV man std and GTV man cba for the CTAC std and CTAC cba images respectively).This was done to reduce intra-observer variability between the delineations on the two images, with 2.5 g mL −1 considered diagnostic for malignant tumours in anorectal cancer. [8]Primary and nodal volumes were delineated separately (GTVp and GTVn respectively).Examples of the PET images and semi-manual GTV contours are shown in Figure 3. A threshold method was also used to automatically delineate the tumour on both CTAC std and CTAC cba images, referred to as GTV thresh std and GTV thresh cba respectively.A threshold value of 40% of the maximum SUV within the manual GTV contour of the relevant image was calculated and voxels with a SUV above that thresh-old were included in the contour using RayStation. [2]he thresholded contour was limited to be within a 0.5 cm expansion of the manual GTV contour of the relevant image to ensure physiological uptake was not included,except for patients (n = 3) where the GTV abutted the bladder, where a 0.0 cm expansion was used in that direction. Whole image analysis The per pixel percentage difference in SUV for CTAC std and CTAC c compared to CTAC cba were calculated using MICE Toolkit (v1.0.8). [17]An external contour was segmented on CTAC cba using a threshold of 0.05 g mL −1 and only differences within this external contour were included.A histogram of differences was calculated using 400 bins between −100% and +100% for each patient, and the mean difference within each bin over all patients determined.The CTAC cba PET image was used as the reference image for all analyses since a previous phantom study had showed it had the smallest PET activity loss compared to a gold standard PET acquisition without radiotherapy hardware or anterior coil. [15]The aim of this study was to assess whether this improvement in SUV accuracy translated into clinically relevant differences in tumour delineation and metabolic parameter measurements. 
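The 40%-of-SUVmax threshold contour and the per-pixel difference histogram described above are simple array operations. The sketch below is an illustrative NumPy version, not the RayStation or MICE Toolkit implementations used in the study; the SUV volumes, the manual-GTV mask and the voxel-based dilation standing in for the 0.5 cm expansion are all placeholders.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def threshold_gtv(suv, manual_mask, fraction=0.40, expand_voxels=2):
    """Voxels above fraction * SUVmax (SUVmax taken inside the manual GTV),
    limited to a crude dilation of the manual GTV."""
    suv_max = suv[manual_mask].max()
    allowed = binary_dilation(manual_mask, iterations=expand_voxels)
    return (suv >= fraction * suv_max) & allowed

def percentage_difference_histogram(suv_std, suv_cba, bins=400):
    """Per-voxel % difference (CTACstd - CTACcba) inside an external contour."""
    external = suv_cba > 0.05               # g/mL threshold quoted in the text
    diff = 100.0 * (suv_std[external] - suv_cba[external]) / suv_cba[external]
    return np.histogram(diff, bins=bins, range=(-100.0, 100.0))

# toy usage with synthetic volumes
rng = np.random.default_rng(1)
suv_cba = rng.gamma(2.0, 1.5, size=(32, 32, 32))
suv_std = 0.86 * suv_cba                    # mimic a ~14% global SUV reduction
manual = suv_cba > np.percentile(suv_cba, 99)
gtv_thresh = threshold_gtv(suv_cba, manual)
counts, edges = percentage_difference_histogram(suv_std, suv_cba)
```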
Tumour delineation analysis

The semi-manual and thresholded GTV contours were compared between CTAC std and CTAC cba to determine the impact on radiotherapy target delineation of not including the radiotherapy hardware in the AC. The contours were compared using the following metrics: the volumetric Dice coefficient, the mean distance to agreement and the GTV volume, all calculated within RayStation. Due to the large variation in GTV volume between patients, the comparisons between the CTAC std and CTAC cba PET images were performed as per-patient percentage differences (CTAC std − CTAC cba) relative to the CTAC cba result. The significance of these differences was evaluated using paired t-tests, with a Bonferroni-corrected significance level of p = 0.05/(4 × 2) = 0.006, correcting for the four different parameters being tested (differences in GTV volume, SUV max, SUV mean and TLG) on both semi-manual and threshold contours.

Metabolic parameter analysis

The semi-manual and thresholded GTV contours were also compared on the metabolic parameters SUV max, SUV mean and TLG. TLG was defined as the product of SUV mean and GTV volume. These parameters would not directly affect tumour delineation using PET, but they have shown value as prognostic factors for rectum patients [5] and so would have an impact on dose painting approaches or on the personalisation of dose prescriptions based on the PET data. The large variation in values between patients meant the metabolic parameters were also evaluated as per-patient percentage differences. Statistical significance was assessed using paired t-tests with the same significance level (p = 0.006). The impact of using CTAC cba or CTAC std on the prognostic value of PET imaging in the radiotherapy position was assessed using TLG according to the methods presented in refs. [5, 10]. Literature cut-off values were only available for rectal cancers, so the anal cancer patients were not included in this analysis. The volume used in the TLG calculation was thresholded using either 30% (Ogawa et al.) or 50% (Choi et al.) of SUV max. Although neither study used the 40% of SUV max threshold used in this study, the thresholds were within 10%, which was considered similar enough to apply the cut-off values. For Ogawa et al. TLG was determined for the combination of primary and nodal disease, whereas for Choi et al. only primary volumes were used. Therefore the TLG for the primary GTVs was compared to 125.84 g (Choi et al.) and the combined primary and nodal TLG to 341 g (Ogawa et al.). Patients who changed prognostic group depending on whether TLG was calculated using the CTAC cba or CTAC std PET images were recorded.

Whole image

The distribution of SUVs across the image in CTAC std was lower than in CTAC cba, with a mean difference of −13.8%. This is apparent in the histogram plots of differences between CTAC std and CTAC cba (Figure 4).

FIGURE 4 Histogram of the number of voxels with percentage differences in SUV between CTAC cba and CTAC std. The solid line shows the mean counts over all patients for each bin, and the shaded areas ± one standard error.
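A minimal sketch of the TLG calculation, the prognosis grouping and the paired t-test described in the analysis subsections above is given below. It is illustrative only: the per-patient TLG values are toy numbers, and the assumption that a TLG above the cut-off indicates the poorer prognosis group is flagged in the comments.

```python
import numpy as np
from scipy.stats import ttest_rel

def tlg(suv, gtv_mask, voxel_volume_ml):
    """Total lesion glycolysis = SUVmean within the GTV x GTV volume."""
    suv_mean = suv[gtv_mask].mean()
    volume_ml = gtv_mask.sum() * voxel_volume_ml
    return suv_mean * volume_ml

# literature cut-off values quoted in the text
CUTOFF_PRIMARY_CHOI = 125.84       # g, primary GTV only
CUTOFF_COMBINED_OGAWA = 341.0      # g, primary + nodal GTVs combined

def prognosis_group(tlg_value, cutoff):
    # assuming a TLG above the cut-off indicates the poorer prognosis group
    return "poor" if tlg_value > cutoff else "good"

# paired t-test between the two reconstructions (toy per-patient TLG values)
tlg_std = np.array([310.0, 120.5, 88.2, 240.1, 55.3, 410.8, 150.0, 95.7])
tlg_cba = tlg_std / 0.86           # toy: CTACcba values ~14% higher
t_stat, p_value = ttest_rel(tlg_std, tlg_cba)
changed = [prognosis_group(s, CUTOFF_PRIMARY_CHOI) !=
           prognosis_group(c, CUTOFF_PRIMARY_CHOI)
           for s, c in zip(tlg_std, tlg_cba)]
print(p_value, sum(changed))
```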
There was a difference in the semi-manual GTV volumes when the radiotherapy hardware was included in the AC map, with the GTV man std volumes differing from the GTV man cba volumes by −15.9% ± 1.6% (mean ± standard error; range −33.1% to −3.8%). This difference was not statistically significant (p = 0.007) over the whole cohort, but appeared to be larger for the less FDG-avid tumours (see Figure 5). All volume differences greater than 13% occurred in GTVs with SUV mean ≤ 8.5 g mL−1. However, there remained a reasonable concordance between GTV man std and GTV man cba, with a Dice coefficient of 0.89 ± 0.01 (range 0.77 to 0.97) and a mean distance to agreement of 0.65 ± 0.06 mm (range 0.14 mm to 1.4 mm). The Dice coefficient also showed some dependence on SUV mean, although this was less marked than for the volume differences (see Figure 6).

DISCUSSION

PET-MR imaging has the potential to improve GTV delineation as well as to enable dose painting and dose escalation treatment strategies for ano-rectal radiotherapy. This study aimed to apply a previously developed AC method, which had demonstrated substantial reductions in PET activity loss in phantoms, to PET-MR images of ano-rectal cancer patients acquired in the radiotherapy position. In particular, the study aimed to assess the impact of not using this AC method on GTV delineation and GTV metabolic parameter accuracy, to see whether this AC method is required for radiotherapy PET-MR. The impact on semi-manual GTV delineation was mixed. Although the volume difference was not statistically significant, it was approaching statistical significance (p = 0.007), and the Bonferroni correction is known to be conservative when the variables being tested are not independent. [18] The mean volume difference was −15.9% ± 1.6%, with differences up to −33.1% for the less FDG-avid lesions. This suggests that the volume difference would be likely to become statistically significant with a larger number of patients, and therefore that using CTAC cba does have an impact on semi-manual GTV delineation. This impact would depend on the method of contouring used, and other methods that do not use a fixed threshold as the basis for delineation could see different impacts when using CTAC cba. However, despite the large volume differences in some patients, the similarity metrics still showed good agreement for most patients (Figure 6). The mean levels of agreement were similar to or better than the inter-observer variability in GTV delineation reported in the literature for rectal cancer patients. Patel et al. reported that PET-CT delineated primary GTVs had Dice coefficients of 0.81 ± 0.03 (mean ± standard error) and nodal GTVs 0.70 ± 0.12. [19] Buijsen et al. reported higher Dice coefficients, 0.90 for manual delineations and 0.96 for an automatic delineation using a source-to-background ratio method, also for rectal GTVs. [20]
This suggests that using CTAC std compared to CTAC cba introduces differences in manual GTV delineation that are similar to those introduced by inter-observer variability, although it is important to note that these studies did not use the fixed-threshold semi-manual method of GTV delineation used here. To the best of the authors' knowledge, no study has investigated the impact on ano-rectal target delineation of using PET images acquired in the pelvic radiotherapy position.

FIGURE 7 (a) Box plot of the differences in SUV mean and SUV max between CTAC std and CTAC cba PET images for both primary and nodal GTVs. The rectangles indicate the IQR, the horizontal black line the median value, the black whiskers the maximum (minimum) data point within Q3 + 1.5IQR (Q1 − 1.5IQR), and the black crosses outlier data points. (b) Difference in TLG between CTAC std and CTAC cba images as a function of SUV mean, with primary GTVs indicated as circles and nodal GTVs as diamonds. For both plots semi-manual GTVs are shown in green and thresholded GTVs in purple. IQR, interquartile range; GTV, gross tumour volume; PET, positron emission tomography; SUV, standard uptake value; TLG, total lesion glycolysis.

There was a marked dependence of the volume differences on mean SUV, with much larger differences for the less FDG-avid lesions (Figure 5). This was probably because the shallower gradients in SUV around the lower-SUV mean GTVs meant that a 13.8% shift in SUV from using CTAC cba resulted in a larger volume expansion than in the more FDG-avid lesions. The Dice coefficient showed a similar if less pronounced trend with SUV mean, with all values < 0.90 occurring for GTVs with SUV mean < 6 g mL−1. This implies that using CTAC cba is likely to be more important for GTV delineation of less FDG-avid lesions. All except one of the semi-manual nodal GTVs had SUV mean < 6 g mL−1, so nodal delineations may require accurate AC to avoid under-segmentation.

The impact of using CTAC cba on the thresholded GTV delineations was much smaller than on the semi-manual delineations, with small volume differences and high similarity metric scores. This was likely because the threshold SUV was a relative value (40% of SUV max), so the ∼13% shift in SUV changed both SUV max and the boundary voxels by approximately the same amount, resulting in a very similar volume. In contrast, the semi-manual delineation used a fixed SUV = 2.5 g mL−1 threshold as the starting point for delineation, which means the increase in SUVs from using CTAC cba resulted in a larger delineated volume. There was a large difference between the semi-manual and threshold contours in GTV volume on the same image set, with mean volumes of 44.3 cm3 compared to 18.9 cm3. This is consistent with a previous study of 18 ano-rectal cancer patients using PET-CT images, which found mean semi-manual volumes of 42.4 ± 6.8 cm3 (± standard error) and percentage-threshold volumes of 15.5 ± 2.9 cm3. [8] The semi-manual volumes were closer to the gold standard expert consensus volumes, 36.2 ± 7.2 cm3.
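The two similarity metrics quoted throughout this discussion are simple to compute from binary masks. The sketch below is an illustrative NumPy/SciPy version, not the RayStation implementation used in the study, and the surface-distance step is a simplified voxel-based approximation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(mask_a, mask_b):
    """Volumetric Dice coefficient between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def mean_distance_to_agreement(mask_a, mask_b, spacing):
    """Symmetric mean surface distance (mm), using voxel-based surfaces."""
    surf_a = mask_a & ~binary_erosion(mask_a)
    surf_b = mask_b & ~binary_erosion(mask_b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

# toy usage: two overlapping spheres on a 2 mm isotropic grid
z, y, x = np.ogrid[:40, :40, :40]
gtv_cba = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 10 ** 2
gtv_std = (z - 20) ** 2 + (y - 21) ** 2 + (x - 20) ** 2 < 9 ** 2
print(dice(gtv_cba, gtv_std))
print(mean_distance_to_agreement(gtv_cba, gtv_std, spacing=(2.0, 2.0, 2.0)))
```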
There was a much bigger impact from using CTAC cba rather than CTAC std on the metabolic parameters.There were statistically significant differences in SUV mean , SUV max and TLG for both semi-manual and thresholded GTVs.The differences in SUV mean for the thresholded volumes and SUV max for both semi-manual and thresholded volumes were very similar to each other, with median differences similar to the −13.8% mean perpixel SUV difference.The differences in SUV mean for the semi-manual volumes were smaller and more variable.This was likely due to the changes in semi-manual GTV volume with the two ACs, with the larger volumes on the CTAC cba images lowering the SUV mean and so partially offsetting the 13.8% increase in perpixel SUVs.One GTV actually had a larger SUV mean in the CTAC std than the CTAC cba .Examination of this volume indicated that the GTV man cba extended over two more axial slices than GTV man std .This meant GTV man cba included more lower SUV pixels, which reduced the SUV mean to 5% less than GTV man std , even though the SUV max in GTV man cba was 10% higher than in GTV man std .This may also be in part due to the difficulty in identifying the primary tumour due to physiological uptake at the adjacent bowel.TLG was also significantly lower on the CTAC std images, with a similar dependence on SUV mean as the volume differences. The clinical significance of these these statistically significant differences in metabolic parameters was difficult to determine.PET-CT SUV measurements have indicated a test-retest repeatability of 10% − 12% in tumour SUVs when performed under carefully controlled conditions in a research setting. [21]In clinical diagnostic settings variability in SUVs is likely to be 15% − 20%. [22]his is a similar order of variability as the error in SUVs reported in this study.However, the repeatability was determined using gold standard CT AC, and so failing to include the radiotherapy hardware in the AC map would generate an additional, systematic, bias to the SUV measurements.In addition, the SUV accuracy requirements for using SUV measurements for radiotherapy dose painting or treatment response assessment are higher than for routine clinical diagnostic purposes, suggesting the differences in metabolic parameters may be even more clinically significant in this context. [23]ne way of investigating this is considering the use of PET metabolic parameters for treatment prognosis.This is a pre-cursor to using SUVs for dose painting or response assessment.Several studies have provided evidence that TLG measured in a pre-treatment 18 F-FDG-PET scan are independent prognostic factors for disease-free and overall survival in rectal cancer patients. [5,10]Using the the TLG cut-off value of Choi et al., 1/8 rectal cancer patients changed prognosis group when SUVs were calculated using CTAC cba instead of CTAC std .If these prognostic factors are used to guide radiotherapy dose prescriptions, this indicates that acquiring accurate PET images which account for the attenuation of the radiotherapy hardware could be critical. Only one study has assessed the impact on metabolic parameters when acquiring PET-MR images in the body radiotherapy position.Paulus et al.evaluated differences in three lung cancer patients scanned on a Siemens PET-MR scanner. 
[24]Images were acquired with and without the anterior array coil on a coil bridge but with the flat couch top in both cases.The differences in SUV mean and SUV max between no coil and bridge, and coil and bridge images, without AC, was −10.0% ± 2.4% and −11.1% ± 2.0% respectively.Including AC of the anterior coil and coil bridge reduced this to −2.4% ± 3.3% (SUV mean ) and −3.9% ± 2.6% (SUV max ).These results are not directly comparable to the results reported in this study because they included the flat couch top in the AC for both images. A weakness of the methodology presented here is there has been no comparison to a patient acquisition without the radiotherapy hardware and anterior coil present.The method has been evaluated previously on phantoms, demonstrating that SUV measurements with the hardware included in the AC map were significantly closer to the gold standard than without. [15]Therefore it is reasonable to assume the same applies in patients.A patient PET acquisition without the radiotherapy hardware would have provided a gold standard PET image to confirm this phantom result.However, this would also have introduced several confounding variables between the images in the radiotherapy and gold standard setups.These would include differences due to difficulties in registering patient images acquired in different setups and differences in SUV distribution due to imaging at a different time-point.In addition, it would have added significant imaging time for patients.Therefore it was decided to compare AC maps with and without the radiotherapy hardware included to assess the impact of changing the AC map and rely on the phantom measurements alone to confirm the accuracy of the AC map with radiotherapy hardware included.The phantom measured difference in activity loss between using the AC map with radiotherapy hardware included was 15.0%, [15] which is close to the whole image difference in SUV in this study of 13.8%, suggesting the phantom measurements do translate to patients.Winter et al. did compare PET-MR images in the radiotherapy position with appropriate AC and diagnostic position for head and neck patients. [25]They concluded thresholded GTV contours were equivalent between the diagnostic and radiotherapy images based on a median Dice score of 0.89 and a median average assymetric surface distance of 0.6 mm.This confirms it is reasonable to use PET-MR images in the radiotherapy position with the radiotherapy hardware in the AC map as a gold standard.Interestingly, the agreement in threshold GTV delineation in this study was substantially higher, with mean Dice 0.98 and mean distance to agreement of 0.12 mm.This suggests the impact on percentage threshold GTV delineation from not using the radiotherapy hardware in the AC map is minimal. The other major component of AC for PET-MR image reconstruction is accounting for the attenuation of the patient.The current vendor supplied method uses a Dixon MR sequence to segment the patient into fat, water and air tissue classes which are then assigned linear attenuation coefficients. [26]This has been shown to introduce SUV errors.In this study this problem was avoided by using the registered radiotherapy planning CT image for patient AC.Improved methods for accounting for patient attenuation in PET-MR images are currently being investigated.From a radiotherapy perspective, algorithms used to generate synthetic CTs from MR images for MR-only radiotherapy are in clinical use. 
[27,28] One of these algorithms has demonstrated improvements in patient PET AC compared to the previous Dixon-based method, [29] although the magnitude of the difference in SUVs was less than half of the discrepancy reported here. This suggests that incorporating the radiotherapy hardware is more important for accurate PET quantification than accurate patient AC. Future work could investigate combining synthetic CT patient AC with the AC of the radiotherapy hardware evaluated in this study.

CONCLUSION

Acquiring PET-MR images for radiotherapy planning requires patients to be imaged in the radiotherapy position on a flat couch with a coil bridge. Applying AC maps that incorporate this hardware and the MR anterior coil was feasible in radiotherapy patients and resulted in a 13.8% increase in SUVs. This produced differences in GTV delineation that were approaching statistical significance and were more pronounced for less FDG-avid volumes; evaluating this in a larger patient cohort is likely to provide a more conclusive result. It also produced large and significant differences in metabolic measurements, which could have clinically significant consequences. This suggests that it is likely to be beneficial to incorporate AC of the radiotherapy hardware and anterior coil for radiotherapy planning PET-MR images in the pelvis, especially if they are used for dose painting and treatment prognostication.

CONFLICT OF INTEREST STATEMENT

G.P. declares that he receives fees from GE Healthcare for reporting DaTSCANs and honoraria for DaTSCAN educational presentations. J.J.W. declares receiving an honorarium from GE Healthcare for speaking in a webinar series. None of these had an impact on the work reported in this study.

FIGURE 1 Example of patient setup showing the flat couch top, patient immobilisation device, coil bridge and anterior array coil.

FIGURE 3 Example PET images reconstructed using the CTAC std (a) and CTAC cba (b) attenuation correction maps. The GTVp man std (a, blue contour) and GTVp man cba (b, red contour) are shown for each image respectively. The per-pixel SUV difference map (CTAC std − CTAC cba) is also shown in (c) and the per-pixel percentage difference in (d). PET, positron emission tomography.

FIGURE 5 Plot of the difference in GTV volume between CTAC std and CTAC cba PET images as a function of the mean SUV within GTV cba. Both semi-manual contours (green) and thresholded contours (purple) are shown. Primary GTVs are represented as circles and nodal GTVs as diamonds. GTV, gross tumour volume; PET, positron emission tomography; SUV, standard uptake value.

FIGURE 6 Plot of the similarity metrics Dice coefficient (a) and mean distance to agreement (DTA, b) between GTV std and GTV cba as a function of SUV mean. Semi-manual contours (green) and thresholded contours (purple) are shown, with primary GTVs (circles) and nodal GTVs (diamonds) also distinguished. DTA, distance to agreement; GTV, gross tumour volume.

AUTHOR CONTRIBUTIONS

Jonathan J. Wyatt: Contributed to the study design, data acquisition, data analysis and drafted the manuscript. George Petrides: Contributed to the study design, data acquisition and revised the manuscript. Rachel A. Pearson: Contributed to the study design, data acquisition and revised the manuscript. Hazel M. McCallum: Contributed to the study design, data analysis and revised the manuscript. Ross J. Maxwell: Contributed to the study design, data analysis and revised the manuscript.

ACKNOWLEDGMENTS

This research is part of an activity (Deep MR-Only RT) that has received funding from EIT Health. EIT Health is supported by the European Institute of Innovation and Technology (EIT), a body of the European Union that receives support from the European Union's Horizon 2020 Research and Innovation programme. This activity also received funding from the United Kingdom Research and Innovation council.

(using the Ogawa et al. figure) or 5/8 (Choi et al. figure) rectum cancer patients in the poorer prognosis group. Importantly, one patient changed from the good prognosis to the poor prognosis group when TLG was calculated using CTAC cba rather than CTAC std, using the Choi et al. cut-off value. This patient was not the patient who received the lower activity injection.
7,273
2023-11-03T00:00:00.000
[ "Medicine", "Engineering", "Physics" ]
Valence can control the nonexponential viscoelastic relaxation of multivalent reversible gels Gels made of telechelic polymers connected by reversible cross-linkers are a versatile design platform for biocompatible viscoelastic materials. Their linear response to a step strain displays a fast, near-exponential relaxation when using low-valence cross-linkers, while larger supramolecular cross-linkers bring about much slower dynamics involving a wide distribution of timescales whose physical origin is still debated. Here, we propose a model where the relaxation of polymer gels in the dilute regime originates from elementary events in which the bonds connecting two neighboring cross-linkers all disconnect. Larger cross-linkers allow for a greater average number of bonds connecting them but also generate more heterogeneity. We characterize the resulting distribution of relaxation timescales analytically and accurately reproduce stress relaxation measurements on metal-coordinated hydrogels with a variety of cross-linker sizes including ions, metal-organic cages, and nanoparticles. Our approach is simple enough to be extended to any cross-linker size and could thus be harnessed for the rational design of complex viscoelastic materials. INTRODUCTION Soft hydrogels are ubiquitous in biology and dictate the mechanics of cells and tissues [1].Due to their biocompatibility, synthetic hydrogels are thus prime candidates to serve as robust soft tissue implants, although fine control of their viscoelastic properties is crucial for their success in this role [2,3].In simple viscoelastic materials, stress relaxes according to a single exponential with a single relaxation time.This is not however the case for most biological materials such as cells [4], tissues [5], mucus [6], and biofilms [7].Instead, their relaxation is characterized by a broad distribution of relaxation times [8].Such relaxation is often heuristically described by a stretched exponential: where smaller values of the exponent α ∈]0; 1[ denote broader distributions of relaxation time scales [9].Other similarly phenomenological fitting functions include power-law dependences of σ on t [10][11][12] and lognormal distributions of the relaxation times [13,14]. Associative gels, which relax by a succession of binding and re-binding events [15], offer a promising route to design controllable viscoelastic materials.It is thus possible to tune their relaxation time by adjusting the chemical binding energy of their crosslinkers [16,17].Although this chemistry-based approach allows tuning of the overall stress relaxation time as illustrated in Fig. 
1(a), less is known about the different approaches to tune the shape of the stress relaxation curve of reversible hydrogels.Accordingly, most existing models for the relaxation of multivalent gels focus on regimes dominated by a single relaxation time scale [18], leading to exponential relaxation [19].Control over the distribution of relaxation time scales could however be achieved in synthetic hydrogels connected with multivalent crosslinkers such as nanoparticles [20], metal-organic cages [21], clay [22], and latex beads [23], which are known to exhibit nonexponential viscoelastic relaxation.Here, we aim to elucidate this emergence of a wide distribution of time scales in materials with high valence crosslinkers to enable the rational design of complex gels [Fig.1(b)].Here we use the term "valence" to designate the number of polymer strands that a crosslinker can bind, a property sometimes also sometimes referred to as their "functionality" [24]. We propose that the emergence of a broad distribution of relaxation time scales arises from microscopic events consisting of the severing of the physical connection between two crosslinkers.We first propose a model where FIG. 1. High-valence crosslinkers yield a slow, potentially complex unbinding dynamics (a) Hydrogels held together by small crosslinkers relax over the time scale associated with the unbinding of a single polymer strand.There, stress decays exponentially in response to a step strain: σ(t)/σ(0) = exp(−t/τ ).Since t/τ = exp(ln t − ln τ ), changes in the time scale τ shift the σ vs. ln t relaxation curve horizontally, but do not alter its shape.(b) In contrast, relaxation events in the presence of high-valence crosslinkers require the simultaneous unbinding of many polymer stands.The associated time scale is long and highly variable depending on the number of strands involved in the "superbond" (grey shade).As a result the stress relaxation of such gels is no longer exponential and the precise shape of the relaxation curve strongly depends on the valence of the crosslinkers.this connection, hereafter termed "superbond", breaks if all its constitutive crosslinkers are detached at the same time [Fig.1(b)].We show that the breaking time of a superbond increases exponentially with the number of strands involved, consistent with previous observation [25].As a result of this strong dependence, small spatial heterogeneities in the polymer concentration may result in widely different relaxation times from one superbond to the next.Such exponential amplification of relaxation times originating from small structural differences forms the basis of models previously used to describe the relaxation of soft glasses [26][27][28].In contrast with these studies, our approach explicitly models the microscopic basis of this amplification.That allows it to not only recover relaxation curves virtually indistinguishable from those discussed in previous studies, but to also predict the influence of temperature and crosslinker valence on the macroscopic stress relaxation observed in the resulting gel.The details of the polymer strand morphology are not central to this influence, and we thus enclose them in a few effective parameters which could be derived from first principles in specialized models related to specific implementations of our basic mechanism. 
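To make the amplification mechanism concrete, the following sketch simulates the first-passage time to full detachment for a superbond with N independently binding strands (re-attachment rate proportional to the number of looping strands, detachment rate proportional to the number of bridging strands). The rate values and the Gillespie-style update are illustrative choices rather than the paper's fitted parameters; the point is only that the mean breaking time grows roughly exponentially with N.

```python
import numpy as np

rng = np.random.default_rng(42)

def breaking_time(N, w_plus=0.5, w_minus=1.0):
    """First time at which all N strands of a superbond are simultaneously
    detached, simulated with the Gillespie algorithm."""
    n = N                                # number of bridging strands
    t = 0.0
    while n > 0:
        rate_up = w_plus * (N - n)       # a looping strand re-bridges
        rate_down = w_minus * n          # a bridging strand detaches
        total = rate_up + rate_down
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < rate_up / total else -1
    return t

for N in (2, 4, 6, 8, 10):
    times = [breaking_time(N) for _ in range(400)]
    # the mean breaking time grows roughly exponentially with N
    print(N, np.mean(times))
```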
To confirm these predictions, we conduct experiments on hydrogels with four distinct crosslinker types of different sizes, and find that our model quantitatively reproduces multiple relaxation curves using this small set of microscopic parameters.Finally, we show that several phenomenological fitting functions used in the literature can be recovered as asymptotic regimes of our analytical model. Model of a single superbond We first model a single superbond in the simple, experimentally relevant [21] case of strong interactions combined with short polymers, which implies negligible entanglements.In the limit of very large and rigid crosslinkers, the polymer layer around a crosslinker is locally planar, its structure is not affected by small fluctuations of polymer concentration and its thickness fixes the distance between crosslinkers [21,29].We thus use the simplifying assumption that all individual bonds participating in a superbond are identical and non-interacting, an approximation whose validity we ultimately assess through comparisons with experiments. We model the attachment and detachment of a single polymer strand from a pair of crosslinkers as shown in Fig. 2(a).When both its ends are bound, the strand may or may not connect two different crosslinkers.The corresponding "bridging" and "looping" states have the same energy since we assume the polymer strand to be completely flexible, and we denote by ∆S the entropy difference between them.To transition between these two states, the strand must disconnect one of its ends and form a "dangling" state.The disconnection of the strand in this state implies an energy barrier ∆E that is much larger than the thermal energy k B T = β −1 .This implies that the dangling state is short-lived, and thus need not be explicitly included in our modeling.Our approximation scheme implies that the overall rate ω + and ω − to go from the looping to the bridging state and back are constant.They read where the typical time scale τ 0 takes into account the entropy difference between the looping and dangling state. At equilibrium, we denote the probability for a single polymer strand to create a bridge as p on = 1 − p off = 1/(1 + e ∆S ).We now consider the dynamics of a superbond involving N polymer strands.Within our approximation of independent attachment and detachment of the individual polymer strands, the superbond undergoes the Markov process illustrated in Fig. 2(b) and the probability P n (t) for n strands to create bridges between the two crosslinkers at time t satisfies the master equation FIG. 2. We model superbond breaking as the disconnection of many independent polymer strands.(a) Disconnecting a single polymer strand requires going through a high-energy, short-lived dangling state (larger arrows indicate faster transitions).The looping and bridging states both have two polymer-crosslinker bonds, and therefore have the same energy.(b) Individual strands in a superbond attach and detach independently, resulting in a one-dimensional random walk in the number n of attached strands [Eq.( 3)].Here we only draw the bridging strands, not the looping strands. 
which ensures that the number of bridging polymers can never be greater than N .To determine the rate at which a superbond breaks, we set an absorbing boundary condition P 0 (t) = 0 and define its survival probability as S(t) = N n=1 P n (t).In the limit N ≫ 1 where a large number of strands are involved in the superbond, we show in Sec.S1 of the Supplementary Information that the detachment of the two beads is analogous to a Kramers escape problem.We thus prove that the survival probability decays as a single exponential S(t) = exp(−t/τ N ) [30] with an average detachment time The breaking of the superbond can thus be assimilated to a Poisson process with rate 1/τ N regardless of the initial condition P n (0).The strong, exponential dependence of τ N on N implies that any dispersity in the number of strands involved in a superbond may result in a wide distribution of time scales. Model of the relaxation of a gel Two factors influence the dispersity of N .First, its value is constrained by the available space at the surface of each crosslinker, which we model by setting an upper bound N sat on the number of polymer strands (in a loop or a bridge) participating in any superbond.Second, depending on the local density of polymer in the vicinity of the superbond, the actual number of strands present may be lower than N sat .In the regime where the polymer solution surrounding the crosslinkers is dilute, polymer strands are independently distributed throughout the system.As a result, the distribution of local strand concentrations within a small volume surrounding a superbond follows a Poisson distribution.We thus assume that N is also described by a Poisson distribution up to its saturation at N sat : where N would be the average number of strands in a superbond in the absence of saturation and thus depends on the ratio of polymer to crosslinker concentration.Note that the specific form of the distribution used in Eq. ( 5) does not significantly modify our results, as discussed later. In response to a step strain, we assume that each superbond is stretched by an equal amount and resists the deformation with an equal force prior to breaking.Superbonds may subsequently reform, but the newly formed bonds are not preferentially stretched in the direction of the step strain and therefore do not contribute to the macroscopic stress on average.Denoting by t = 0 the time at which the step strain is applied and by σ(t) the resulting time-dependent shear stress, the progressive breaking of the initial superbonds results in the following stress response function: While the breaking times τ N are unaffected by the applied stress in the linear response regime, nonlinearities could easily be included in our formalism by making ∆S stress-dependent and thus favor strand detachment.The relaxation described in Eq. ( 6) occurs in two stages.At long times t ≫ τ Nsat , few short-lived superbonds remain.Saturated superbonds (N = N sat ) dominate the response and Eq. ( 6) is dominated by the last term of its sum.As a result, the stress relaxes exponentially over time, as seen from the linearity of the log-lin curves of Fig. 
3(a) for large values of t.Systems with smaller values of N sat manifest this regime at earlier times; in the most extreme case, the relaxation of a system where superbonds involve at most a single polymer strand (N sat = 1) is fully exponential and extremely fast as compared to systems with higher N sat .Over short times (t ≪ τ Nsat ), stress relaxation involves multiple time scales.This nonexponential regime is apparent on the left of Fig. 3(a).These two regimes have already been reported in several Relationship between the stretch exponent α quantifying the nonexponential character of the relaxation and the microscopic parameter Nsat/ N .Here p off = 0.2.A low Nsat/ N gives an exponential relaxation (α ≃ 1), while a larger Nsat/ N leads to a more complex behavior (α < 1).While α appears to converge to a finite value for large Nsat/ N for the largest values of N , this behavior is contingent on our choice of fitting interval.This issue does not affect the rest of the curves.Large stars correspond to the curves represented in (a).Inset: illustration of the quality of the fits between the heuristic stretched exponential [dashed orange line, Eq. ( 1)] and our prediction [solid blue line, Eq. ( 6)]. While Eq. ( 6) is not identical to the stretched exponential of Eq. ( 1), the inset of Fig. 3(b) shows that they are remarkably close in practice.We thus relate the stretch exponent α to the saturation number N sat by fitting a stretched exponential to our predicted stress response function over the time interval required to relax 90% of the initial stress [Fig.3(b)].The fits are very close matches, and consistently give correlation factors r 2 > 0.98 (see detailed plots in Supplementary Fig. S2).If N sat ≲ 0.5 N then α ≃ 1, indicating a nearlyexponential relaxation.Indeed, in that case superbond saturation occurs well before the peak of the Poisson distribution of N .Physically, this implies that the local polymer concentration surrounding most superbonds is sufficient to saturate them.As almost all superbonds are saturated, they decay over the same time scale τ Nsat .As a result, the material as a whole displays an exponential relaxation.For larger values of N sat , the Poisson distribution is less affected by the saturation and the dynamics is set by the successive decay of superbonds involving an increasing number of strands, implying lower values of α.The larger the value of N , the sharper the crossover between these two regimes. Experiments To validate our model of the effect of crosslinker valence on hydrogel relaxation, we perform step-strain experiments of poly(ethylene glycol) (PEG)-based gels involving a range of crosslink valences and compare Eq. ( 6) to the resulting relaxation curves.We implement strand-crosslink bonds based on two sets of metalcooordination chemistry: nitrocatechol-Fe 3+ coordination and bispyridine-Pd 2+ coordination.The first set of gels is made with nitrocatechol-functionalized PEG crosslinked by single Fe 3+ ions with an estimated valence of 3, and by iron oxide nanoparticles.The nanoparticles have a mean diameter of 7 nm with a surface area that allows a valence of ∼ 100 ligands [20].The second set of gels are made with bispyridine-functionalized PEG, wherein bis-meta-pyridine ligands induce self-assembly of gels that are crosslinked by Pd 2 L 4 nanocages with a valence of 4, and bis-para-pyridine ligands induce self-assembly of gels that are crosslinked by Pd 12 L 24 nanocages with a valence of 24 [31].As shown in Fig. 
4, these four distinct gel designs result in a broad range of relaxation behaviors.Overall, large-valence gels and lower temperatures result in longer relaxation times, consistent with the illustration of Fig. 1(a).The relaxation curves associated to high-valence crosslinkers are also less steep, consistent with the involvement of a broader distributions of relaxation times and the schematic of Fig. 1(b). At a more detailed level, our assumption that the dynamics of single polymer strand proceeds independently of its environment implies the existence of a single energy scale ∆E.As a result, we predict that all time scales involved in the relaxation are proportional to exp(−β∆E).We confirm this through a time-temperature collapse shown in the insets of Fig. 4 (see Sec. S4 of the Supplementary Information for details).This collapse provides us with the value of ∆E for each of our four systems, which we report in Table I.The binding energy value ∆E for the Pd 2 L 4 and Pd 12 L 24 gels match, as expected from the fact that they originate from the same composition.On the other hand, the ∆E of the nanoparticles gel cannot be directly compared to the Fe 3+ ion gel because of their dramatic physico-chemical properties as well as the acidic pH used to facilitate mono-coordination in nanoparticles. To compare the temperature-collapsed curves to our prediction of Eq. ( 6), we fit the parameters p off , τ 0 , N and N sat across multiple temperatures.The resulting fits, shown in Fig. 4, display a good agreement between the theory and experiments across up to 4 orders of magnitude in time scales.The fitted values of N sat are moreover consistent with the estimated valence of the crosslinkers (Table I), which confirms our interpretation of the physical origin of N sat .A more direct relation between the valence and N sat would require additional information about the structure of each category of gels, and in particular the number of nearest neighbors of an individ- FIG. 4. Stress relaxation function for four experimental systems with increasing crosslinker valences (see Table I for values). Here we use a log-lin scale [unlike in Fig. 3 TABLE I.Estimated and fitted parameters involved in the comparison between experiment and theory in Fig. 4. The energies are given in units of kBT for T = 300 K. Instead of displaying the parameter τ0, we present the more easily interpreted unbinding time of a single polymer strand at 300K, namely τ1 = τ0e β 300 ∆E /p off . 
The possible clustering of crosslinkers may also influence this relation, as nanocage systems similar to ours [35] may form higher-order nanocage structures. While such complexities, as well as possible imperfections in our fitting procedure, complicate the literal interpretation of the fitted values of N_sat, these values nonetheless clearly increase with increasing crosslinker valence. The fit also supports the notion that the mean number of strands per superbond N accounts for the distribution of relaxation time scales in our gels. The Fe3+ gel thus displays an exponential relaxation consistent with N = N_sat = 1. The higher-valence Pd2L4 and Pd12L24 systems have a complex relaxation at early times followed by an exponential behavior, as expected for N ≃ N_sat > 1. As expected from our model, the crossover time τ_Nsat separating the two regimes is larger in the higher-valence Pd12L24 gel. Finally, the high-valence nanoparticle system shows an extended complex relaxation associated with N < N_sat, thus confirming that all the qualitative relaxation regimes discussed in the previous sections are experimentally relevant.

Distribution of relaxation time scales

To further visualize the differences between the responses of our gels, we plot the distributions of relaxation times p(τ) for our fitted model in Fig. 5. The Fe3+ gels, which relax according to a single exponential and whose p(τ) are therefore delta functions, are not represented there. In the Pd2L4 and Pd12L24 systems, an initially decreasing distribution of time scales is interrupted by a valence-dependent maximum relaxation time τ_Nsat. That time lies within the range of time scales observed in Fig. 4, accounting for the crossover to an exponential relaxation within this range. In nanoparticle systems, by contrast, the crossover occurs much later and thus cannot be directly observed in experiments. In all cases, the precise form of the distribution of time scales used in the domain τ < τ_Nsat does not critically affect the predicted relaxation curves. Indeed, we show in Sec. S6 of the Supplementary Information that replacing the Poisson distribution of Eq. (5) with other distributions with the same mean and variance leads to essentially indistinguishable predictions over experimentally observable time scales. This emphasizes the robustness of our predictions to the details of that choice of distribution. They are instead primarily determined by the mean and maximum superbond sizes, N and N_sat.

FIG. 5. Distribution of relaxation times corresponding to the theoretical plots of Fig. 4. The distribution associated with the Fe3+ gel is a delta function at τ/τ_1 = 1 and is thus not represented on this graph. The time distributions are given by a Poisson distribution cut at τ = τ_Nsat (vertical lines), as described in Eq. (6). The dotted lines represent what these distributions would be in the absence of this saturation.
In the limit of large N and even larger N_sat, the complex relaxation phase of our model characterized by t < τ_Nsat may display analytical behaviors identical to some widely used rheological fitting functions. In this regime, the Poisson distribution p(N) of Eq. (5) goes to a Gaussian. Since, according to Eq. (4), the variable N is essentially the logarithm of the relaxation time τ for N ≫ 1, this results in a log-normal distribution of relaxation time scales [Eq. (7)]. This result adds insights to this widely used fitting functional form, as it allows one to relate the mean and variance of the distribution to the underlying crosslinker-scale parameters [13,14]. It moreover offers a potential molecular-level justification for its use in describing the complex relaxation of systems with multivalent crosslinks. In the alternative case where p(N) is a decaying exponential, our model results in power-law distributed relaxation time scales, and the stress response function takes the form given in Eq. (8). This result may also be presented in terms of the dependence of the storage and loss moduli on the frequency ω in an oscillatory rheology experiment; for γ < 1 we thus predict the scaling of Eq. (9). The results for larger values of γ and detailed derivations of Eqs. (7)-(9) are given in the Supplementary Information. Again, this result has the potential to account for the power-law relaxation observed in many rheological systems [10][11][12], in addition to providing a link to their microscopic constituents. Overall, these results suggest a possible control of the system's rheology through the characteristics of p(N), which could in turn be modulated through the spatial distribution of the polymer strands and the dispersity of the crosslinkers.
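As an illustration of how a distribution of relaxation times translates into oscillatory moduli, the sketch below computes G'(ω) and G''(ω) for a discrete set of Maxwell modes whose times are power-law distributed up to a cutoff. The mode weights, the exponent, and the cutoff are assumptions chosen for illustration; they are not the expressions of Eqs. (8)-(9), which are derived in the Supplementary Information.

```python
import numpy as np

def moduli(omega, taus, weights):
    """Storage and loss moduli of a superposition of Maxwell modes."""
    wt = np.asarray(omega)[:, None] * np.asarray(taus)[None, :]
    g_p = (weights[None, :] * wt**2 / (1.0 + wt**2)).sum(axis=1)   # G'(omega)
    g_pp = (weights[None, :] * wt / (1.0 + wt**2)).sum(axis=1)     # G''(omega)
    return g_p, g_pp

# Assumed power-law distribution of relaxation times, p(tau) ~ tau^(-1-gamma), tau <= tau_max.
gamma, tau1, tau_max = 0.3, 1.0, 1e6
taus = np.logspace(np.log10(tau1), np.log10(tau_max), 400)
weights = taus**(-gamma)                 # p(tau) * tau on a log-spaced grid, up to a constant
weights /= weights.sum()

omega = np.logspace(-7, 2, 200)
g_p, g_pp = moduli(omega, taus, weights)

# In the intermediate window 1/tau_max << omega << 1/tau1, both moduli should scale ~ omega^gamma.
mid = (omega > 10.0 / tau_max) & (omega < 0.1 / tau1)
slope = np.polyfit(np.log(omega[mid]), np.log(g_pp[mid]), 1)[0]
print(f"apparent loss-modulus exponent in the power-law window: {slope:.2f} (gamma = {gamma})")
```

The printed exponent should sit close to the assumed γ, mirroring the low-frequency power-law regime sketched in Fig. S8.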
DISCUSSION

Our simple model recapitulates a wide range of rheological behaviors in multivalent systems based on two key superbond parameters: the mean size N and the maximum size N_sat. These respectively control the amplitude of the fluctuations in superbond size and the longest superbond relaxation time scale. Prior to the longest relaxation time, the system displays an increasingly nonexponential response for increasing N [Fig. 3(b)]. Beyond it, it crosses over into exponential relaxation. In contrast with widely used phenomenological fitting parameters, our two variables yield reliable insights into the underlying microscopic dynamics, as demonstrated by the agreement of their fitted values with our a priori knowledge of four experimental systems covering a wide range of values of N and N_sat.

Our model bears a mathematical similarity with standard random energy trap models [36]. There, a long-tailed relaxation emerges from a short-tailed distribution of trap depths due to the exponential dependence of the relaxation times on the trap depths. Similarly, here a nonexponential relaxation emerges from a short-tailed distribution of superbond sizes N [Eq. (5)] thanks to the exponential dependence of τ_N on N [Eq. (4)]. In contrast with trap models, however, our model does not predict a glass transition upon a lowering of temperature. It instead displays a simple Arrhenius time-temperature relation, consistent with the experimental collapses in the insets of Fig. 4. Here again, an additional benefit of our approach is the direct connection between the predicted relaxation and experimentally accessible parameters such as the crosslinker surface (through N_sat).

Our model's focus on the collective aspects of superbond breaking and the characteristics of the crosslinkers implies that it encloses most of the physics of the polymer strands within a few mesoscopic parameters, mainly τ_0 and ∆S. Within our approach, the morphology of the polymer thus does not affect the form of our relaxation, although it may lead to a rescaling of the relaxation times of Eq. (4). This formulation remains valid as long as the length and concentration of the polymer strands are low enough that the strands do not become significantly entangled, which could spoil the Poissonian attachment/detachment process of Eq. (2). Even in this case, however, this equation may not be strictly valid, as the polymer layer in a superbond with many bound crosslinkers tends to be more compressed than in one with few. This effect should lead to a smooth (likely power-law) dependence of ω± on N, which would preserve the dominance of the much more abrupt exponential dependence of τ_N on N. As a result, while such polymer brush effects could induce corrections in our estimations of the model parameters, the basic mechanism outlined here should still hold in their presence.

Our model reproduces several qualitative characteristics of the rheology of multivalent gels, such as the strong influence of the crosslinker valence, the Arrhenius temperature dependence, and the transition between a nonexponential and an exponential regime at long times. Due to its simple, widely applicable microscopic assumptions, we believe that it could help shed light on and assist the design of a wide range of multivalent systems. Beyond composite gels, it could thus apply to RNA-protein biocondensates where multivalent interactions between proteins are mediated by RNA strands [37], as well as cytoskeletal systems where filaments linked to many other filaments display a slow relaxation reminiscent of that of our multivalent crosslinkers [38].

Synthesis of Fe3O4 nanoparticles (NPs)

Bare Fe3O4 NPs are synthesized following previously reported methods (Li et al. 2016). 100 mg of as-synthesized NPs are re-dispersed in 80 mL of a 1:1 (v/v) solution of CHCl3 and DMF, and 100 mg of 1-arm PEG-C is added. The mixture is homogenized and equilibrated by pulsed sonication (pulse: 10 s on + 4 s off; power: 125 W) for 1 hour. The mixture is then centrifuged at 10000 rpm for 10 min to remove any aggregates, and rotary evaporated at 50 °C and 30 mbar to remove CHCl3. The NP solution is then precipitated in 150 mL of cold Et2O (−20 °C). The precipitate is re-dispersed in H2O and freeze-dried. The resulting NPs are 7 nm in diameter.

Preparation of the Fe3+-NC gels

The preparation procedure is similar to a previously reported protocol [39], except that the gel is made in DMSO instead of H2O. 50 µL of a 200 mg/mL 4-arm PEG-NC solution in DMSO is mixed with 16.7 µL of an 80 mM FeCl3 solution in DMSO (ligand:Fe3+ molar ratio of 3:1). Then 33.3 µL of DMSO and 13.8 µL of TEA are added to facilitate deprotonation, and a gel is formed.

Preparation of the Pd2L4 gels

The polymer synthesis and gel preparation procedures for Pd2L4 are the same as a reported protocol [31] with minor modifications. The annealing of the Pd2L4 polyMOC gel was done at 60 °C for 1 hr instead of 80 °C for 4 hr, and 1.05 equivalents of Pd(NO3)2·2H2O (relative to the bifunctional polymer ligand) were used instead of 1 equivalent.

Preparation of the Pd12L24 gels

The polymer synthesis and gel preparation procedures for the polyMOC are the same as a reported protocol [31].

Preparation of the NP gels

The preparation procedure is the same as the reported protocol [40]. Briefly, 20 mg of PEGylated Fe3O4 NPs (equivalent to 20 mg of Fe3O4 core) and 20 mg of 4-arm PEG-NC are mixed in a 0.2 M HCl aqueous solution. The solution mixture (pH = 2) is transferred into a mold and sealed, and a solid gel is obtained after curing in a 50 °C oven for 24 hours.
Rheology

Stress relaxation measurements are done on an Anton Paar rheometer with parallel-plate geometry (10 mm diameter flat probe for the NP gels and polyMOC gels, and 25 mm diameter cone probe for the Fe3+ gels). All tests are done immediately after transferring the gel sample onto the sample stage. A Peltier hood is used for all experiments to control the measurement temperature and prevent solvent evaporation. H2O-based samples are furthermore sealed with mineral oil before experimentation to reduce the evaporation rate. Relaxation tests were performed by applying a γ = 0.005 step strain for the NP gel, and a γ = 0.02 step strain for the other three systems.

Here we present the mathematical proofs of the main results of the manuscript as well as details on the methodology used to analyze the experimental data.

S1. DISTRIBUTION OF SUPERBOND BREAKING TIME AND DERIVATION OF τ_N

Here we show that the survival probability for the detachment of a superbond (illustrated in main text Fig. 2) containing many polymer strands (N → ∞) asymptotically goes to S(t) = e^(−t/τ_N), where τ_N is given by Eq. (4) of the main text. We first consider a general one-step process and derive the basic recursion equation used throughout the proof in Sec. S1.1. We solve the recursion in Sec. S1.2 and express the generating function of S(t) as a double sum. In Sec. S1.3, we apply the resulting formula to our particular problem and take the continuum limit of the second sum. Finally, we compute both sums in the N → ∞ limit in Sec. S1.4. Our derivation is adapted from the calculation presented in the appendix of Ref. [1].

S1.1. Backward Kolmogorov equation for the generating function of S(t)

We consider a one-step process, i.e., a stochastic process consisting of transitions between consecutive discrete states on a line, with transition rates r_n and g_n illustrated in Fig. S1(a). We denote the probability for the particle to be in state k at time t after starting in state n at time 0 by P(k, t|n). We assume an absorbing boundary condition in 0 and a reflecting boundary condition in N [Eq. (S1)]. Inserting these definitions into Eq. (S2) yields a recursion relation, which we endeavor to solve for h_n(α) in the following.

S1.2. Sum equation for the generating function

We define a rescaled current between sites n − 1 and n. This allows us to turn the two-step recursion of Eq. (S4) into one with only one step, which can easily be summed. We now invert Eq. (S5) and use Eq. (S7) to express the finite difference (h_n − h_{n−1}). We further use the property that h_m = h_0 + Σ_{n=1}^{m} (h_n − h_{n−1}) and recognize that h_0 = 0 due to Eq. (S1) to obtain the generating function of S(t) as a double sum.

FIG. S2. Illustration of the similarity of our modeled stress response function with a stretched exponential. We plot the relaxation modulus computed using Eq. (6) of the main text for N ∈ {3, 5, 10, 15}. For each value of N, we plot four values of N_sat, namely N_sat = 0.1 N, 0.5 N, N and 1.5 N, with p_off = 0.2. Each plot also mentions the value of the fitted stretch exponent α and the correlation coefficient r².

S2. LINK BETWEEN α AND N_sat/N

Here we establish the connection between the stretch exponent α and the values of N_sat/N shown in Fig. 4 of the main text. To mimic the observation of an experimental step strain over a finite time window, we focus our attention on the time interval between t = 0 and t = τ_90, where τ_90 is the time required to relax 90% of the stress, i.e., σ(τ_90) = 0.1 × σ(0).
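As a numerical check of the picture in Sec. S1 above, the sketch below runs a Gillespie simulation of a one-step attachment/detachment process with an absorbing state at 0 and a reflecting state at N, and compares the resulting detachment-time statistics with a single exponential. The specific rate expressions and parameter values are illustrative assumptions, not the rates r_n and g_n of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def detachment_time(N, p_off, k0=1.0):
    """Gillespie simulation of a one-step process: absorbing state 0, reflecting state N."""
    n, t = N, 0.0
    while n > 0:
        r = k0 * p_off * n                                   # assumed detachment rate in state n
        g = k0 * (1.0 - p_off) * n if n < N else 0.0         # assumed reattachment rate (reflecting at N)
        total = r + g
        t += rng.exponential(1.0 / total)
        n += -1 if rng.random() < r / total else 1
    return t

N, p_off = 8, 0.4
samples = np.array([detachment_time(N, p_off) for _ in range(2000)])

# Compare the empirical survival probability with a single exponential of the same mean.
t_grid = np.linspace(0.0, samples.max(), 200)
survival = (samples[None, :] > t_grid[:, None]).mean(axis=1)
print(f"mean detachment time: {samples.mean():.3g}")
print(f"max |S(t) - exp(-t/<t>)| = {np.abs(survival - np.exp(-t_grid / samples.mean())).max():.3f}")
```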
We plot the relaxation curve given by Eq. (6) of the main text over this time window, then perform a least-squares fit using a stretched exponential [Eq. (1) of the main text] with α and τ as fitting parameters. As shown in Fig. S2, the agreement is excellent for a large majority of the parameters used. The corresponding values of the fitting parameters (τ and α) for a broader variety of N and N_sat are also provided in Fig. S3. This suggests that experimental curves that are well fitted by a stretched exponential could be equally well described by our model.

S3. TIME-TEMPERATURE COLLAPSE

Here we describe the procedure used to determine the binding energy ∆E in the experimental systems discussed in the main text. Equation (4) of the main text implies that the temperature dependence of the stress response function can be eliminated by expressing it as a function of the rescaled time t̃ = t e^(β∆E). This should cause the relaxation curves of a given system at different temperatures to collapse.

For each type of ligand, we have 5 datasets showing the stress relaxation function as a function of time, one at each temperature T^(α) with α ∈ [0, 4]. To enable the comparison between time-rescaled datasets, we first define an interpolating function for the stress relaxation function at each temperature used. We thus compute a set of interpolating coefficients by performing a least-squares fit of a rational function P^(α) to the data points. We furthermore define the interval of definition of P^(α)(x) as the range over which data is available, i.e., I_P^(α) = [0, max_i t_i^(α)].

We then perform the collapse of the {T^(α)}, α ∈ [1, 4], interpolated curves onto the T^(0) curve. To this effect we define the set of rescaling coefficients {a^(α)}, α ∈ [1, 4], and perform a separate time rescaling for each temperature: t̃ = t e^(a^(α)). For each α ∈ [1, 4], we optimise the semi-distance between the functions t → P^(0)(t) and t → P^(α)(t e^(a^(α))) with respect to a^(α). The resulting collapsed curves are shown in Fig. S4(a-c). The optimal rescaling coefficients are plotted as a function of the inverse temperature 1/(kBT) in Fig. S4(d-f). Consistent with the time-temperature collapse hypothesis, this dependence is affine, and we use the slope of the best-fitting line as our value of the binding energy ∆E.

S4. FIT OF THE STRESS RELAXATION FUNCTION TO OUR THEORETICAL PREDICTION

In the main text, we fit the experimental curves with the stress relaxation function predicted by our model. We then represent them on a log-lin scale to allow the simultaneous visualization of short and long time scales. To demonstrate the robustness of our fits, in Fig. S5 we replot these curves on a lin-lin scale, as well as a lin-log scale that emphasizes intervals of exponential relaxation as straight lines.

FIG. S4. Collapse of the relaxation modulus. (a-c) Collapsed relaxation modulus of the Fe3+ ion, Pd12L24 nanocage and nanoparticle gels, respectively, after rescaling of the time for an optimized collapse. The curves are represented on a log-lin coordinate system, but the collapsing procedure is performed on a lin-lin scale. (d-f) Corresponding rescaling parameters as a function of the inverse temperature 1/(kBT). The slope of the line is −∆E, and the legend gives the value of ∆E in units of kBT at 300 K.
S5. RATIONALIZATION OF THE POISSON DISTRIBUTION OF THE SUPERBOND SIZE p(N)

The polymers used in our experiments are 4-arm polyethylene glycol (PEG). At the end of each arm is a nitrocatechol ligand that allows crosslinker binding. In our model, we assume that the ends of a polymer are always attached to a ligand. For this reason, the diffusion of such a polymer over a distance comparable to the polymer size occurs on a time scale comparable to the time required to rearrange the bonds between crosslinkers, which corresponds to the time required for the relaxation of the stress in the system. Let us consider that the 4-arm PEG molecules are able to diffuse over a volume v during the time of the experiment. We model the spreading of the polymers in the system by discretizing the system into small boxes of volume v between which no polymer exchange occurs over the duration of the experiment. As a result, the distribution of the polymers over the boxes is due to the initial preparation of the system. We assume that this process places each polymer in a random box with equal probability. As a result, the probability that a specific box contains n polymers is given by a Poisson distribution, p(n) = (vρ_PEG)^n e^(−vρ_PEG)/n! (S21), where ρ_PEG is the average concentration of PEG in the system, and vρ_PEG is the mean (over the system) number of PEG molecules in a box of volume v. Equation (S21) is the basis for Eq. (5) of the main text.

S6. EXPERIMENTAL FIT USING ALTERNATIVE DISTRIBUTIONS OF N

To demonstrate the robustness of our model, we perform the fit of the experimental data using probability distributions different from the Poisson distribution of Eq. (5) of the main text. For this, we first define three new distributions with the same mean (and, when possible, the same variance) as that of Eq. (5) of the main text.
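The box-discretization argument of Sec. S5 can be checked with a short Monte Carlo experiment: distributing polymers uniformly at random over boxes and histogramming the per-box occupancy should reproduce the Poisson form of Eq. (S21). All numerical values below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

n_boxes, mean_per_box = 100_000, 4.0     # assumed values; mean_per_box plays the role of v*rho_PEG
n_polymers = int(n_boxes * mean_per_box)

# Place each polymer in a uniformly random box and count occupancies.
counts = np.bincount(rng.integers(0, n_boxes, size=n_polymers), minlength=n_boxes)

# Compare the empirical occupancy histogram with the Poisson prediction of Eq. (S21).
ns = np.arange(0, 15)
empirical = np.array([(counts == n).mean() for n in ns])
predicted = poisson.pmf(ns, mean_per_box)
print(np.round(np.c_[ns, empirical, predicted], 4))
```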
FIG. 3. (a) Disperse, high-valence superbonds initially display a nonexponential mechanical relaxation, then cross over to an exponential regime when only the saturated superbonds remain. Curves plotted from Eq. (6) with p_off = 0.2, N = 10 and different values of N_sat as indicated on each curve. (b) Relationship between the stretch exponent α quantifying the nonexponential character of the relaxation and the microscopic parameter N_sat/N. Here p_off = 0.2. A low N_sat/N gives an exponential relaxation (α ≃ 1), while a larger N_sat/N leads to a more complex behavior (α < 1). While α appears to converge to a finite value for large N_sat/N for the largest values of N, this behavior is contingent on our choice of fitting interval. This issue does not affect the rest of the curves. Large stars correspond to the curves represented in (a). Inset: illustration of the quality of the fits between the heuristic stretched exponential [dashed orange line, Eq. (1)] and our prediction [solid blue line, Eq. (6)].

FIG. 4. Stress relaxation function for four experimental systems with increasing crosslinker valences (see Table I for values). Here we use a log-lin scale [unlike in Fig. 3(a)] to facilitate the visualization of a large range of time scales. Alternate representations are available as Fig. S5. Symbols are experimental datapoints, and the lines are the associated fitting curves. Insets: time-temperature collapsed data obtained by a rescaling t → t e^(β∆E).

FIG. S1. Superbond detachment as a Kramers-like barrier-crossing problem. (a) Definition of the rates of the one-step process. (b) Profile of the pseudo-free energy defined in Eq. (S13). Superbond detachment requires the system to fluctuate out of the free-energy well to the x = 0 absorbing state, with 1/N playing the role of a temperature.

FIG. S3. Best-fit values of the stretched exponential parameters τ and α for a range of values of N and N_sat. The right-hand-side panel is identical to Fig. 4 of the main text.

FIG. S8. Comparison between the storage and loss moduli computed from the exact expression Eq. (S35) (solid lines) and the asymptotic expressions of Eqs. (S36-S38) (dashed lines). (a) Plots in the large N_sat limit (here N_sat = 100), showing a good agreement with the power-law regime of Eq. (S37) for two values of N and for constant p_off = 0.18, corresponding to γ ≃ 0.116 and γ ≃ 0.0583. (b) Plots for a smaller value of N_sat (N_sat = 30) showing the three distinct asymptotic regimes. Here N = 10 and p_off = 0.18 ⇒ γ ≃ 0.0583. (c) Plot of the three distinct asymptotic regimes for a higher value of γ (N = 10 and p_off = 0.935 ⇒ γ = 1.49). The marker at ωτ_1 = τ_1/τ_Nsat denotes the expected position of the low-frequency crossover, while the high-frequency crossover is expected for ωτ_1 ≈ 1.
9,238.6
2021-12-14T00:00:00.000
[ "Materials Science" ]
The renin-angiotensin-aldosterone system blockade and arterial stiffness in renal transplant recipients – a cross-sectional prospective observational clinical study

Introduction. Arterial stiffness parameters can be used as a predictor of cardiovascular events in the general population and in renal transplant recipients (RTRs). Additionally, the renin-angiotensin-aldosterone system (RAAS) blockade mitigates arterial stiffness in the general population. There are no sufficient data concerning the role of the RAAS blockade in reducing arterial stiffness among patients after kidney transplantation. The aim of this study is to assess the influence of the above blockade on arterial stiffness in RTRs. Methods. 344 stable RTRs were enrolled in the study. 204 (59.3%) of them received RAAS blockers (angiotensin-converting enzyme inhibitors – ACEIs or angiotensin receptor blockers – ARBs): group RAAS (+), and 140 (40.7%) were not treated with such agents: group RAAS (–). Results. In the RAAS (+) group, 55.9% of the patients used ARBs and 44.1% ACEIs. Cardiovascular disease (coronary artery disease and/or peripheral obliterative artery disease) (27.9% vs 14.3%, p<0.05) and heart failure (27.4% vs 24.3%, p<0.05) were diagnosed significantly more often in the RAAS (+) group than in the RAAS (–) group. Systolic blood pressure, diastolic blood pressure and all arterial stiffness parameters (baPWV, cfPWV, pulse pressure) did not differ significantly between the RAAS (+) and RAAS (–) groups. The results revealed that cardiovascular disease in patients was associated with a significant increase in both the PWV and the pulse pressure. No difference in the arterial stiffness parameters was observed between patients with cardiovascular disease, diabetes or heart failure in the RAAS (+) and RAAS (–) groups. Moreover, beta-blockers and diuretics ameliorated the arterial stiffness parameters. Conclusions. This study showed an indication bias of the RAAS prescription, and no conclusion on the influence of RAAS on arterial stiffness can be drawn. The results indicated diuretics and beta-blockers as agents lowering arterial stiffness in RTRs.

INTRODUCTION

Kidney transplantation (KTx) is the optimal form of renal replacement therapy in patients with end-stage renal disease. During the last decades, although transplantation procedures and immunosuppressive treatment have improved, patients still have a shorter life expectancy than the general population. The most common causes of death among renal transplant recipients (RTRs) are cardiovascular diseases. The cardiovascular complications in RTRs are associated not only with standard risk factors, but also with non-traditional factors, such as immunosuppressive therapy, earlier dialysis therapy, proteinuria, inflammation or anemia, specific to this population. Increased arterial stiffness is a consequence of these risk factors, and it can lead to cardiovascular disease (CVD) (Boutouyrie et al., 2015; Holdaas et al., 2017). Aortic stiffness can be estimated by measurement of the pulse wave velocity (PWV), a direct, noninvasive method. According to the literature, the PWV value is related to all-cause and cardiovascular mortality (Laurent et al., 2006; Laurent et al., 2001; Mattace-Raso et al., 2006). In RTRs, increased stiffness predicts the incidence of cardiovascular episodes and the deterioration of renal graft function (Barenbrock et al., 2002; Bahous et al., 2004).
Scientific data showed the potential clinical benefits of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) in chronic kidney disease. Silvariño and others (Silvariño et al., 2019) presented, in an observational study in KTx patients, that administration of ARBs and ACEIs is associated with slower CKD progression and a more significant reduction in proteinuria. Additionally, in a chronic kidney disease group, the inhibition of the renin-angiotensin-aldosterone system (RAAS) independently resulted in the reduction of hypertension and PWV (Frimodt-Møller et al., 2012). The majority of RTRs have graft failure after transplantation. This is associated with chronic allograft nephropathy, transplant glomerulopathy, recurrent and de novo renal disease, and immunosuppressive drug toxicity. Therefore, a nephroprotective treatment should be administered to slow down the decline of the graft function. The therapy must consist of appropriate immunosuppression, blood pressure and lipid control, and the use of ACEIs or ARBs (Seron et al., 2001). After kidney transplantation, the above agents are used not only because of hypertension but also to reduce proteinuria and to treat post-transplant erythrocytosis (Vlahakos et al., 2003). Nowadays, there is no sufficient knowledge concerning post-transplantation ACEI or ARB therapy in comparison to other antihypertensive agents in reducing arterial stiffness. Therefore, we performed this study concerning the administration of the RAAS blockade and its influence on aortic stiffness in RTRs.

METHODS

344 stable RTRs, transplanted between 1994 and 2018 and treated in the outpatient unit of the Department of Nephrology, Charité-Universitätsmedizin Berlin, Germany, between February and July 2018, were enrolled in the study and signed a written consent. The research was approved by the Ethics Committee of Charité-Universitätsmedizin Berlin (EA 1/252/17). Patients gave their written informed consent. The study was conducted in accordance with the Declaration of Helsinki. The information concerning demographic data, immunosuppressive treatment, renal transplantation, cardiovascular and diabetic status was obtained from the patients' medical files. Additionally, the schedule of hypertensive treatment was analyzed, including administration of ACEIs and ARBs or aldosterone antagonists. Laboratory data, such as the levels of serum creatinine, potassium, hemoglobin, proteinuria, albuminuria and NT-proBNP (N-terminal pro-B-type natriuretic peptide), were obtained from medical records. Additionally, the arterial stiffness parameters: brachial-ankle and carotid-femoral pulse wave velocity (baPWV left and right, cfPWV), ankle-brachial index, pulse pressure, and pulsatile stress left and right (pulsatile stress = heart rate × pulse pressure), as well as blood pressure, were assessed in each patient using the ABI system 100 (Boso Bosch and Sohn, Germany) [12].

Statistical analyses. Statistical analyses were performed using the STATISTICA 13.0 PL for Windows software package. Categorical variables are presented as absolute numbers (percentages). Continuous variables are presented as mean value ± standard deviation (S.D.) or as the median and interquartile range (IQR) for highly skewed variables. Differences in the distribution of continuous variables were assessed using the t-test or the Mann-Whitney U-test, as appropriate. The Chi-squared test was used for categorical variables. Pearson correlation coefficients were calculated to assess the linear association between continuous variables. Additionally, a linear regression model was used to evaluate the relationship between a dependent variable and one or more independent variables. The Shapiro-Wilk test was used to assess the normality of the continuous variables. In all statistical tests, a p-value <0.05 was considered statistically significant.
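As a rough sketch of the statistical workflow described above (normality check, two-group comparison, and a linear regression model), the snippet below applies these steps to synthetic placeholder data. The variable names, group sizes, and covariates are illustrative assumptions, not the study dataset, and the authors' analyses were performed in STATISTICA rather than in code like this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic placeholder data: PWV-like values for two treatment groups.
raas_pos = rng.normal(8.3, 1.2, 204)
raas_neg = rng.normal(8.1, 1.1, 140)

# Choose the two-group test depending on normality (Shapiro-Wilk), as in the Methods.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (raas_pos, raas_neg))
test = stats.ttest_ind(raas_pos, raas_neg) if normal else stats.mannwhitneyu(raas_pos, raas_neg)
print(f"two-group comparison p-value: {test.pvalue:.3f}")

# Simple linear model: PWV explained by age and a binary CVD indicator (placeholder covariates).
n = 344
age = rng.normal(55, 12, n)
cvd = rng.integers(0, 2, n)
pwv = 5.0 + 0.05 * age + 0.8 * cvd + rng.normal(0, 1.0, n)
X = np.column_stack([np.ones(n), age, cvd])
coef, *_ = np.linalg.lstsq(X, pwv, rcond=None)
print(f"intercept = {coef[0]:.2f}, age coefficient = {coef[1]:.3f}, CVD coefficient = {coef[2]:.2f}")
```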
Study population characteristics

Overall, 344 patients were enrolled in the study. The study population was divided into two groups based on the use of the RAAS blockade. 204 (59.3%) and 140 (40.7%) RTRs were assigned to the RAAS (+) and RAAS (-) groups, respectively. The general characteristics of the participants are summarized in Table 1. The main causes of end-stage renal disease in the study population were glomerulonephritis, tubulointerstitial nephropathy and polycystic kidney disease. In the RAAS (+) group, the average serum creatinine was higher (1.58 vs 1.36 mg/dl, p<0.05), resulting in a lower estimated glomerular filtration rate (eGFR CKD-EPI) (47.9 vs 54 ml/min/1.73 m2, p<0.05), as compared to the RAAS (-) group. On the other hand, proteinuria (197 vs 157 mg/day) and albuminuria (67 vs 28 mg/day) were significantly higher in the RAAS (+) group participants. The levels of blood albumin and hemoglobin were similar in both groups. The renal replacement therapy time before transplantation was similar for both groups, but the time after transplantation was longer (95.5 vs 54.5 months, p<0.05) in the RAAS (+) group. There was no significant difference observed in terms of mycophenolate mofetil, mycophenolate sodium, azathioprine, mTOR inhibitor, and belatacept administration between the two groups. Around 70% of RTRs received beta-blockers; the groups did not differ in that respect. Overall, 59.3% of the screened RTRs had received the RAAS blockade. 55.4% of them used ARBs, and 44.6% used ACEIs. The majority of patients treated with ARBs or ACEIs received candesartan and ramipril. The mean dose of ARBs and ACEIs was 0.25 and 0.5 of their maximal recommended doses, respectively. Additional details concerning comorbidities and immunosuppressive, antihypertensive and erythropoietin treatments are presented in Table 1.

Blood pressure control and arterial stiffness in the study population

In the study population, the mean values of systolic blood pressure and diastolic blood pressure on the right and left arm were 140.7 mmHg, 139.6 mmHg, 85.8 mmHg, and 85.7 mmHg (p: not significant), respectively. Additionally, systolic blood pressure, diastolic blood pressure and all arterial stiffness parameters (baPWV, cfPWV, pulse pressure) did not differ significantly between the RAAS (+) and RAAS (-) groups (Table 2). Among the study population, 38% of RTRs had a cfPWV higher than 8.1 m/s; this was the case for 39.7% and 35% of patients in the RAAS (+) and RAAS (-) groups, respectively.

Arterial stiffness parameters and CVD, heart failure and diabetes. Univariate and multivariate analysis

The results revealed that the presence of CVD in patients was associated with a significant increase in both the PWV and the pulse pressure in univariate and multivariate analysis (Table 3a, b). There was no influence of diabetes, heart failure, proteinuria, albuminuria, renal function (creatinine level, eGFR), time of RRT or time after transplantation on the pulse pressure values (Table 3a).
On the other hand, diabetes, heart failure, and eGFR significantly affected the pulse wave velocity. There was no influence of the creatinine level, albuminuria, or proteinuria on the pulse wave velocity in the study population (Table 3b). Both analyses showed a positive effect of beta-blockers and diuretics on the arterial stiffness parameters. In contrast, calcium channel blockers decreased the pulse pressure but had no effect on the pulse wave velocity. Moreover, erythropoietin administration did not influence the arterial stiffness parameters in the analyzed RTRs. No difference was observed in SBP, DBP, or arterial stiffness between patients with CVD, diabetes, and HF treated with the RAAS blockade and those not treated with it (Table 4).

DISCUSSION

Our study showed an indication bias of the RAAS prescription, and no conclusion on the influence of RAAS on arterial stiffness can be drawn. Moreover, in patients with CVD, heart failure and diabetes, higher arterial stiffness was observed in comparison to participants without these comorbidities. Additionally, worse renal graft function correlated with arterial stiffness. Furthermore, the use of the RAAS blockade in participants with CVD, heart failure and diabetes did not make a difference in terms of the PWV values. It is well known that the RAAS blockade ameliorates hypertension and proteinuria, which influence renal graft survival (Tylicki et al., 2007; Sennesael et al., 1995). Additionally, Szabo and others (Szabo et al., 2000) showed that ACEIs and ARBs may protect the renal graft from fibrosis and tubular atrophy. On the other hand, Suwelack et al. observed that the left ventricular mass index and diastolic relaxation improved after two years of treatment with quinapril, when compared to atenolol, after KTx (Suwelack et al., 2000). Nevertheless, there are insufficient data regarding the influence of the RAAS blockade on arterial stiffness in RTRs. In the study presented here, we showed more advanced arterial stiffness and a higher level of creatinine in the RAAS (+) compared to the RAAS (-) patients. It is worth underlining that the prevalence of CVD, heart failure and diabetes was higher in the former population. Moreover, all of these comorbidities correlated with the arterial stiffness parameters, but CVD had the strongest impact. Kolonko and others (Kolonko et al., 2016) reported that pre-transplantation diabetes and CVD significantly correlated with an increased PWV in RTRs. Moreover, Kim and others analyzed baPWV in end-stage renal disease patients on the waiting list for kidney transplantation and showed a higher baPWV in patients with CVD in their medical history than in those without it. The authors proved that PWV was a strong predictor of CVD in RTRs. Another study also showed that PWV was associated with CVD events after KTx. Additionally, pulsatile stress was a significant (HR 3.7; p<0.02) and independent factor for cardiovascular events in RTRs, in addition to a past history of cardiovascular events (HR 1.16; p<0.04) (Bahous et al., 2004). Therefore, CVD and arterial stiffness are interrelated, as was observed in our study. Similar results were found in a Belgian study. After a mean follow-up of 5 years, Verbeke and others (Verbeke et al., 2011) noted in a cohort of 512 RTRs that cfPWV was a significant factor in the assessment of cardiovascular risk. Moreover, patients with a cfPWV of ≥8.1 m/s had worse cardiovascular survival when compared to patients with a cfPWV <8.1 m/s.
In our study, up to 38% of patients had a cfPWV >8.1 m/s, which qualified them as being at higher cardiovascular risk, regardless of comorbidities, age and immunosuppressive regimen. Ayub and others (Ayub et al., 2015) performed PWV measurement and estimation of the estimated glomerular filtration rate in RTRs and showed an inverse correlation between these two parameters. Moreover, the authors suggested the necessity of PWV evaluation in the assessment of cardiovascular risk in RTRs with worse renal graft function. Similarly, the results of a study from Turkey showed a significantly higher value of PWV in RTRs with a lower estimated glomerular filtration rate. The mean value of PWV was 7.4±0.6 vs 6.1±0.4 m/s in patients with an estimated glomerular filtration rate of 15-49 ml/min and 50-69 ml/min, respectively (Sezer et al., 2015). Likewise, in our study, there was a significant correlation between arterial stiffness and renal graft function, assessed by the glomerular filtration rate. Proteinuria is acknowledged as a marker of renal graft damage and a predictor of graft survival and of the incidence of CVD in RTRs. Its prevalence has been described in up to 45% of RTRs (Park et al., 2000; Amer et al., 2007). The prevalence of proteinuria in our cohort was 31.7%, which is consistent with the above data. Guliyev and others (Guliyev et al., 2015) showed that proteinuria (>500 mg/day) was strongly associated with an increased risk of cardiovascular events, consistent with accelerated arterial stiffness and decreased arterial elasticity, when compared to patients with lower proteinuria (<500 mg/day). Among RTRs in Germany, Baumann et al. reported that pulsatile stress, but not PWV, was associated with the quantity of albuminuria (r=0.29; p<0.01 and r=0.06; p=0.6, respectively). Therefore, this parameter could be a marker of arterial dysfunction in RTRs (Baumann et al., 2010). Jeon and others (Jeon et al., 2015) evaluated the incidence of major adverse cardiac events (cardiac death, nonfatal myocardial infarction, or coronary revascularization) within 55.3 months of follow-up. They showed that proteinuria was associated with major adverse cardiac events (hazard ratio [HR] 8.689, 95% confidence interval [CI] 2.929-25.774, p<0.001) when compared to those without proteinuria. The mortality rate among the study population was significantly higher in patients with proteinuria (HR 6.815, 95% CI 2.164-21.467, p=0.001). In our study, proteinuria and albuminuria were not correlated with PWV. Aspirin, beta-blockers, ACEIs, ARBs, and statins can each reduce the risk of major CVD events by 25% in the general population (Yusuf et al., 2002). However, there are insufficient data regarding the safety and efficacy of ACEI or ARB treatment in reducing cardiovascular risk in RTRs. Gaston and others (Gaston et al., 2009) analyzed the use of cardioprotective medications in RTRs based on the Long-Term Deterioration of Kidney Allograft Function (DeKAF) study. The results indicated that the RAAS blockade was used in 24% of the study population at 6 months after transplantation, and the authors expressed skepticism concerning the prescription of CVD medication in that population (Gaston et al., 2009). On the other hand, Pilmore and others (Pilmore et al., 2011) reported that the prevalence of ACEI or ARB administration in RTRs, 5 years after transplantation, was 36.3%. Patients with a history of myocardial infarction used these agents more frequently.
In comparison, administration of the RAAS blockade did not differ between RTRs with and without diabetes. Our data showed a higher prevalence of ACEI or ARB administration in RTRs as compared to the cited article. Moreover, the majority of participants with CVD or diabetes used these medications. This demonstrates the high awareness of the transplant center of the possible role of the RAAS blockade in reducing cardiovascular risk in this population. A small reduction in pulse wave velocity was observed in the ZEUS study under zofenopril or irbesartan treatment (Omboni et al., 2017). Zhao and others (Zhao et al., 2018), using a rat kidney transplantation model, showed a significant mitigation of the angiotensin II-induced contractions in the aorta and mesenteric arteries of the recipient under losartan treatment. There are insufficient data concerning the influence of the RAAS blockade on arterial stiffness in RTRs as compared to the chronic kidney disease population. The efficacy and safety of treatment with enalapril and/or candesartan were shown by Frimodt-Møller et al. among patients with a mean GFR of 30 (range 13-59) ml/min/1.73 m2. The authors showed a decrease in PWV during the 24 weeks of follow-up in the analyzed population (Frimodt-Møller et al., 2012). Our study did not find a difference in PWV in high-risk (CVD, diabetes, heart failure) RTRs who were treated with the RAAS blockade in comparison to participants without these medications. These results are not consistent with the above data; this discrepancy is most likely related to indication bias. Additionally, our study population was heterogeneous in terms of the time after transplantation and the duration of RAAS blocker administration, and the analysis was not adjusted for these parameters.

LIMITATIONS

There are several limitations of this study that should be considered when interpreting our conclusions. First of all, this is a single-center study. The sample size was relatively small, but to the best of our knowledge, this is the first study in which the influence of the RAAS blockade on arterial stiffness was analyzed in RTRs. Moreover, the number of patients differed between the RAAS (+) and RAAS (-) groups. Secondly, the study population was a heterogeneous group, with different comorbidities, including cardiovascular disease, diabetes and heart failure, dissimilar times of dialysis, and various periods after renal transplantation and of RAAS blockade treatment. Additionally, there were discrepancies in terms of the eGFR, proteinuria, and albuminuria values and the immunosuppressive regimen between the RAAS (+) and RAAS (-) patients. Thirdly, there was no follow-up in the study. Fourthly, the doses of ARBs and ACEIs used in the study population were small, which may explain why the influence of these agents on the arterial stiffness parameters was non-significant. However, despite these limitations, the study highlights some important information for RTRs in terms of prevention of CVD in this population. Hence, we cannot exclude the possibility that the RAAS blockade has no effect on arterial stiffness at all. Furthermore, we cannot exclude the possibility of residual confounding. To obtain a definite answer on the effect of the RAAS blockade on PWV, further studies are needed.
4,295.2
2020-12-17T00:00:00.000
[ "Medicine", "Biology" ]
Characterization of Immune Responses Induced by Immunization with the HA DNA Vaccines of Two Antigenically Distinctive H5N1 HPAIV Isolates

The evolution of the H5N1 highly pathogenic avian influenza virus (HPAIV) has resulted in high sequence variation and diverse antigenic properties among circulating viral isolates. We investigated immune responses induced by HA DNA vaccines of two contemporary H5N1 HPAIV isolates, A/bar-headed goose/Qinghai/3/2005 (QH) and A/chicken/Shanxi/2/2006 (SX), against the homologous as well as the heterologous virus isolate for comparison. Characterization of antibody responses induced by immunization with the QH-HA and SX-HA DNA vaccines showed that the two isolates are antigenically distinctive. Interestingly, after immunization with the QH-HA DNA vaccine, subsequent boosting with the SX-HA DNA vaccine significantly augmented antibody responses against the QH isolate but only induced low levels of antibody responses against the SX isolate. Conversely, after immunization with the SX-HA DNA vaccine, subsequent boosting with the QH-HA DNA vaccine significantly augmented antibody responses against the SX isolate but only induced low levels of antibody responses against the QH isolate. In contrast to the antibody responses, cross-reactive T cell responses were readily detected between these two isolates at similar levels. These results indicate the existence of original antigenic sin (OAS) between concurrently circulating H5N1 HPAIV strains, which may need to be taken into consideration in vaccine development against a potential H5N1 HPAIV pandemic.

Introduction

Influenza virus infection causes serious respiratory illness, and seasonal human influenza epidemics are estimated to result in about 40,000 deaths and over 200,000 hospitalizations annually in the U.S. alone and up to 1.5 million deaths worldwide [1,2]. Influenza viruses contain a segmented negative-strand RNA genome and are categorized into three different types (A, B, and C) based on the antigenic properties of their two internal proteins, the nucleoprotein and the matrix protein [3]. Type B and C influenza viruses are primarily human pathogens, whereas type A influenza viruses exist in both humans and a number of animal species and have a natural reservoir in aquatic birds. Influenza A viruses are further divided into different subtypes based on their surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA). To date, 16 subtypes of HA (H1-H16) and 9 subtypes of NA (N1-N9) glycoproteins have been identified in influenza A viruses. However, only 3 HA subtypes (H1-H3) and 2 NA subtypes (N1-N2) have circulated in humans and caused pandemic and seasonal influenza epidemics. Historically, three influenza A pandemics occurred in the last century, each with the appearance of a new HA subtype [4]. In March and April of 2009, outbreaks of a new H1N1 influenza virus in humans emerged in California in the United States and in Mexico; the virus subsequently spread worldwide and led to the declaration of a new influenza pandemic by WHO in June 2009 [5]. Characterization of the new H1N1 influenza virus showed that it is of swine origin [6,7]. The rapid spread of the new H1N1 influenza virus demonstrates that a new human influenza pandemic of zoonotic origin poses a real threat to public health. Several subtypes of avian influenza virus have also been postulated to possess pandemic potential, and the H5N1 highly pathogenic avian influenza virus (HPAIV) is of particular concern [8].
The first human outbreak of H5N1 HPAIV occurred in 1997 in Hong Kong, China, as a result of direct avian-to-human transmission, and led to 18 human infections with 6 deaths [9,10]. While massive culling of poultry effectively controlled the human outbreak for several years, H5N1 HPAIV remained endemic in poultry species in Southern China [11][12][13]. In late 2003, new human outbreaks of H5N1 HPAIV occurred in Southeast Asia [14], and the virus has since spread to Europe and Africa, causing over 600 human infections with 356 deaths as of May 2012, according to the World Health Organization [15]. The ability of H5N1 HPAIV to directly infect humans with a high fatality rate (almost 60%) makes it a great threat as a causative agent of a potential new influenza pandemic. Moreover, the evolution of H5N1 HPAIV in wild birds and farm poultry has resulted in the concurrent circulation of diverse virus strains with distinctive antigenic properties [16], and H5N1 HPAIV of different antigenic lineages has been reported to cause direct infection in humans [17]. The high sequence variation in circulating H5N1 HPAIV poses a great challenge for the development of a vaccine strategy for the control of a potential H5N1 pandemic. Based on genetic analysis of the HA gene, H5N1 HPAIV isolates have been categorized into 10 different clades that exhibit different antigenic properties [18]. However, information on the cross-reactivity of antibody responses between antigenically different H5N1 HPAIV isolates is still lacking. In this study, we investigated immune responses induced by HA DNA vaccines of two H5N1 HPAIV isolates, A/bar-headed goose/Qinghai/3/2005 (QH) and A/chicken/Shanxi/2/2006 (SX), which are representative of two HPAIV antigenic lineages, clade 2.2 and clade 7, respectively [19]. The QH virus was isolated in an outbreak at Qinghai Lake in China in 2005 that caused massive deaths of wild birds [20], whereas the SX virus was isolated in an outbreak in farm poultry in Shanxi, China in 2006 [21]. Both viruses are highly pathogenic to domestic chickens, and their HA proteins contain a polybasic amino acid segment that is characteristic of H5N1 HPAIV. Their HA protein sequences differ by about 7%, and we recently reported that these two viruses do not exhibit significant cross-reactivity in chickens [19]. In this study, we further investigated the immunogenicities of the QH and SX HA DNA vaccines in mice. Our results show that the HA of these two viruses are antigenically distinctive in mice. Interestingly, by carrying out heterologous priming-boosting immunizations, we observed that boosting with a heterologous HA effectively augmented antibody responses to the HA of the priming virus strain but not to the HA of the boosting virus strain, indicating the existence of original antigenic sin (OAS) between these two concurrently circulating H5N1 HPAIV strains.

Results

Immunization with the QH-HA and SX-HA DNA Vaccines Induces Strong Antibody Responses Against the Homologous Strain but Not Against the Heterologous Strain

We constructed DNA vaccines expressing the HA of the QH and SX H5N1 influenza viruses and characterized their expression in HeLa cells by transfection and Western blot, which showed that the two HA proteins were expressed at similar levels (Figure S1). We further evaluated their immunogenicity and cross-reactivity in mice as outlined in Figure 1.
Blood samples were collected two weeks after each immunization and analyzed for antibody responses against the QH and SX viruses; a summary of the antibody responses after each immunization is provided in Table S1. As shown in Figure 2a, immunization with the QH-HA DNA vaccine induced significant levels of antibody responses against the QH virus after two immunizations as compared to the control group (p < 0.05). The third immunization further boosted the antibody levels significantly, whereas the fourth immunization only moderately increased antibody levels against the QH virus. In contrast, even after the fourth immunization with the QH-HA DNA vaccine, no significant level of antibody response against the SX virus was induced as compared to the control group (Figure 2b; QH-4 vs. C-4 against the SX virus, p > 0.1). Similarly, immunization with the SX-HA DNA vaccine induced strong antibody responses against the SX virus (Figure 2b) but not against the QH virus (Figure 2a; SX-4 vs. C-4 against the QH virus, p > 0.1). In agreement with the results from the ELISA studies, antibodies induced by the QH-HA DNA vaccine exhibited hemagglutination inhibition (HAI) activity against the QH virus (Figure 2c) but not the SX virus (Figure 2d), whereas antibodies induced by the SX-HA DNA vaccine exhibited HAI activity against the SX virus (Figure 2d) but not the QH virus (Figure 2c). These results show that, similarly to what was observed in chickens, the QH and SX HA exhibit distinctive antigenic properties in mice.

A Single Boost with a Heterologous HA DNA Vaccine Augments Antibody Responses against the Original Virus Strain

To further investigate the cross-reactivity of these two HA antigens, we also characterized the antibody responses induced in mice that received boosting immunizations with a heterologous HA DNA vaccine, as outlined in Figure 1. As shown in Figure 3a, a single boosting immunization with the SX-HA DNA vaccine following two immunizations with the QH-HA DNA vaccine significantly augmented antibody responses against the QH virus (p < 0.05, QH/SX-3 vs. QH/SX-2 against the QH virus), to a similar level as those induced by three homologous immunizations with the QH-HA DNA vaccine (comparing QH/SX-3 and QH-3). Conversely, as shown in Figure 3b, a boosting immunization with the QH-HA DNA vaccine following two immunizations with the SX-HA DNA vaccine augmented antibody responses against the SX virus to a similar level as those induced by three immunizations with the SX-HA DNA vaccine (comparing SX/QH-3 and SX-3). Functional analysis of the antibody responses showed that the HAI activity of antibodies against the priming virus strain was also increased after a boosting immunization with the heterologous HA DNA vaccine (Figure 3c). A second boosting immunization with the QH HA DNA vaccine in Group SX/QH augmented antibody responses against the QH virus to levels similar to those induced by two immunizations with the QH HA DNA vaccine (comparing SX/QH-4 and QH-2 against the QH virus). On the other hand, as shown in Figures 6a and 6c, the antibody levels as well as the HAI titer against the SX virus (the priming virus strain) dropped slightly after the second boosting immunization with the QH HA DNA vaccine (comparing SX/QH-4 and SX/QH-3 against the SX virus). Similarly, the second boost with the SX HA DNA vaccine augmented antibody responses and HAI activity against the SX virus in Group QH/SX to levels similar to those induced by two immunizations with the SX HA DNA vaccine (Figures 5b and 5d; comparing QH/SX-4 and SX-2 against the SX virus), while antibody responses against the QH virus (the priming virus strain) dropped slightly (Figures 6b and 6d).
These results show that the second boosting immunization with the heterologous HA DNA vaccine effectively augmented the antibody responses as well as the HAI activity against the boosting virus strain but not against the priming virus strain.

The T Cell Responses Induced by Different Immunization Regimens with the QH and SX HA DNA Vaccines Are at Similar Levels and Exhibit Cross-reactivity

After characterization of the antibody responses induced by the different prime-boost immunization regimens, mice were sacrificed and splenocytes were prepared for analysis of T cell responses by intracellular cytokine staining and flow cytometry. Of note, the QH and SX HA proteins share a dominant CD8 T cell epitope for Balb/c mice (H2d), IYGRPYET (amino acids 533-541), which is also found in H1N1 human influenza A viruses. Similar levels of IFN-gamma-producing CD8 T cells were detected in all groups (QH, SX, QH/SX, and SX/QH) that received HA DNA vaccines. Further, the levels of IFN-gamma-producing CD4 T cells were also similar in each vaccinated group when the splenocytes were stimulated with QH or SX virus-like particle (VLP)-pulsed BMDCs. These results show that, in contrast to the antibody responses, the T cell responses are cross-reactive between the HA of these two H5N1 HPAIV isolates and were boosted by a heterologous HA DNA vaccine as effectively as by the homologous HA DNA vaccine. Of note, only background levels of IFN-gamma-producing CD4 T cells were stimulated by BMDCs pulsed with PR8 VLPs (A/PR/8/34) or with SIV-Gag VLPs, indicating that the CD4 T cell responses induced by these H5N1 HPAIV HA DNA vaccines do not exhibit cross-reactivity to the HA of the influenza virus A/PR/8/34, which is a human influenza virus of the H1 subtype.

Boosting but Not Priming with a Heterologous HA DNA Vaccine Enhanced Protection of Mice against Lethal H5N1 Influenza Virus Challenge

We further investigated the efficacy of different homologous and heterologous prime-boost immunization regimens for protection against challenge with the QH H5N1 HPAIV, which causes lethal infection in mice. Mice were immunized with different HA DNA vaccines as outlined in Figure 8a and then challenged with 100 MLD50 of the QH H5N1 influenza virus. Blood samples were collected after the second immunization and analyzed for antibody responses against the QH virus. As shown in Figure 8b, priming with the QH HA DNA vaccine followed by boosting with the SX HA DNA vaccine induced similar levels of antibody responses against the QH virus as two immunizations with the QH HA DNA vaccine. On the other hand, priming with the SX HA DNA vaccine followed by boosting with the QH HA DNA vaccine induced similar levels of antibody responses against the QH virus as one immunization with the QH HA DNA vaccine. We further compared the neutralizing activity of these sera against the QH virus. As also shown in Figure 8b, sera from the QH-SX group exhibited similar neutralizing activity against the QH virus as sera from the QH-QH group. In comparison, sera from the SX-QH group exhibited only low-level neutralizing activity against the QH virus, as was also detected for the PBS-QH group, whereas sera from the PBS-SX and SX-SX groups did not show detectable neutralizing activity against the QH virus. Similar results were also obtained for HAI activity against the QH virus. These results agree with the observations in the above studies, demonstrating that a heterologous boost but not priming with the SX-HA DNA vaccine augmented antibody responses against the QH virus. The survival rates and weight changes of each group after challenge are shown in Figure 8c.
The survival curves of the different groups after challenge were built based on the Kaplan-Meier method and then analyzed by the Gehan-Breslow-Wilcoxon method. All mice that received two immunizations with the QH HA DNA vaccine (Group QH-QH) survived the challenge with the QH virus, while none of the mice in the control group survived the challenge. Mice that received one (Group SX) or two (Group SX-SX) immunizations with the SX HA DNA vaccine also all succumbed to the challenge, similar to the control group mice. Mice that received only a single QH HA DNA immunization (Group QH) were partially protected from the challenge, at a rate significantly higher than the control group (p < 0.05). However, mice that received priming with the QH HA DNA vaccine followed by boosting with the SX HA DNA vaccine (Group QH-SX) were completely protected from lethal challenge with the QH virus, a rate significantly higher than in mice that received a single immunization with the QH HA DNA vaccine (p < 0.05, Group QH-SX vs. Group QH). In contrast, mice that received priming with the SX HA DNA vaccine followed by boosting with the QH HA DNA vaccine (Group SX-QH) were only partially protected from challenge with the QH virus, similar to mice that received a single immunization with the QH HA DNA vaccine (p > 0.05, Group SX-QH vs. Group QH). These results show that a heterologous boost with the SX-HA DNA vaccine is as effective as two homologous QH-HA DNA immunizations for protection against a high-dose lethal challenge with the QH H5N1 influenza virus, whereas heterologous priming with the SX-HA DNA vaccine has no significant effect on protection against QH virus infection.
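For readers who want to reproduce this kind of survival comparison, the sketch below hand-rolls a Kaplan-Meier estimate for two synthetic challenge groups. The event times are placeholders rather than the reported data, and the rank-based group comparison used in the paper (Gehan-Breslow-Wilcoxon) is not implemented here.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate; `events` is True where death was observed (False = censored)."""
    times, events = np.asarray(times, float), np.asarray(events, bool)
    surv, s = [], 1.0
    for t in np.unique(times[events]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & events)
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

rng = np.random.default_rng(0)
# Synthetic placeholder data: day of death after challenge; animals alive at day 14 are censored.
vaccinated = np.clip(rng.exponential(40, 12).round(), 1, 14)
control = np.clip(rng.exponential(5, 12).round(), 1, 14)

for label, times in [("vaccinated", vaccinated), ("control", control)]:
    km = kaplan_meier(times, times < 14)
    final = km[-1][1] if km else 1.0
    print(f"{label}: estimated survival at day 14 = {final:.2f}")
```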
Therefore, the preparation of a vaccine stockpile against potentially pandemic influenza viruses is of great importance. Of particular concern, outbreaks of H5N1 HPAIV over the last decade have caused over 800 human infections, and different clades of H5N1 HPAIV are concurrently endemic, making it difficult to predict which strain will cause a pandemic [22]. The BHG/QH/3/05 (QH) virus represents a dominant genotype of the viruses isolated from the outbreak in wild birds at Qinghai Lake [20]. While Qinghai-like viruses have not been detected in poultry in China after 2005, they have been repeatedly detected in wild birds or domestic poultry in other countries in Europe, Africa, and Asia [23-26], and were reported to be detected in pikas in the Qinghai Lake area in 2007 [27]. In addition to their wide distribution, these viruses also bear known genetic markers that are critical for transmission in mammalian hosts and thus represent a clear pandemic potential [28,29]. The CK/SX/2/06 (SX) virus was first detected in chickens from Shanxi province in northern China [21] and has since spread to several other provinces in northern China, including Ningxia, Henan, Shandong, and Liaoning, and continues to be endemic in these regions. In addition, H5N1 HPAIV with an HA gene similar to that of CK/SX/2/06-like viruses was also detected in chickens in Vietnam in 2008 [30]. Moreover, viruses of both clades have been reported to cause human infections [31]. Our results in this study showed that the antibody responses induced by the HA of these two viruses not only lacked cross-reactive HAI activity but also exhibited almost no cross-binding activity. Further, immunization with the SX HA DNA vaccine alone conferred no protection against lethal challenge with the QH virus. The lack of cross-reactivity of antibody responses between these two viruses underscores the need to prepare vaccine stockpiles for different H5N1 clades that are antigenically distinct, to improve our preparedness against a potential H5N1 pandemic. We further investigated whether these H5N1 HPAIV vaccines would interfere with the induction of protective immune responses against an antigenically distinct strain. Our results from this study showed that a boosting immunization with a heterologous HA DNA vaccine effectively augmented antibody responses against the original vaccine strain, demonstrating the presence of OAS between these two H5N1 HPAIV strains. On the other hand, the induction of antibody responses against the HA of the secondary virus strain was not affected by the immune response against the HA of the primary virus strain, differing from the OAS observed for H1N1 human influenza viruses [32]. The phenomenon of OAS was first discovered over 50 years ago from studies on human infection with different strains of H1N1 influenza A viruses [33-35] and was further confirmed by subsequent studies of influenza infection and vaccination in humans as well as in animal models [36-45]. Generally stated, OAS refers to the observation that, after immunization or infection with one antigen, subsequent boosting with a second, related but heterologous antigen leads to the induction of antibodies that react primarily with the first antigen rather than with the second antigen [46]. However, the underlying mechanism for the observed OAS phenomenon remains unclear.
It has been suggested that heterologous boosting immunization may selectively boost memory B cells that produce cross-reactive antibodies and thus increase the level of antibody responses against the original HA antigen. At the same time, the cross-reactive memory B cells will compete with naïve B cells for the second HA antigen and thus dampen the induction of antibodies against the second HA antigen [32]. In this study, we observed that a heterologous boost augmented antibody responses against the primary strain as effectively as a homologous boost, similar to what was reported previously. However, we observed that the induction of antibody responses against the second virus was not affected. Further, if OAS is a result of boosting cross-reactive memory B cells, then it is expected that the antibody levels against the primary strain would be further increased after a second heterologous boost. In contrast, our results showed that a second heterologous boost significantly augmented antibody responses against the second strain but did not further increase the levels of antibody responses against the primary strain. Therefore, our results suggest that a different mechanism may be at play for the observed OAS phenomenon between these two HPAIV isolates. It is likely that the OAS between the QH and SX HA proteins is a result of a reduced activation threshold of memory B cells, which are more easily stimulated by related yet antigenically distinct antigens, as reported in earlier studies [47,48]. However, when memory B cells for both HA antigens are present, activation of memory B cells for the homologous antigen exhibits a dominant effect, as observed after the second heterologous boost. Taken together, these results indicate that the manifestation of OAS may vary between different virus strains depending on their antigenic differences. Of note, the OAS between the QH and SX HA in this study was observed by immunization with DNA vaccines, which express only the HA antigens. In studies on H1N1 human influenza viruses, Kim et al. demonstrated OAS between influenza viruses PR8 and FM1 both in immunizations with HA DNA vaccines and in sequential sublethal infections, and also showed that the OAS between these two viruses is more pronounced in sequential sublethal infections [32]. Thus, it is possible that the OAS between the QH and SX H5N1 HPAIVs may also be more prominent in other vaccine platforms such as inactivated virus vaccines or virus-like particles, as these complex vaccines also share other common antigens in addition to the HA, which may drive the induction of immune responses further towards the original antigen. The complexity of OAS and its potential impact on vaccine efficacy warrant further investigation. The OAS reported for human influenza viruses was mostly between chronologically distant strains [32,37,39-41,44]. In these cases, the antigenic differences between the viruses likely resulted from gradual antigenic drift over the years. Thus, the OAS phenomenon may not have exerted a significant impact on seasonal human influenza virus infection in the past, as the vaccines are regularly updated [49]. However, the 2009 H1N1 influenza pandemic introduced a new virus strain that is related to and yet antigenically different from the 2008-2009 seasonal H1N1 influenza virus. Interestingly, compared with patterns of seasonal influenza virus infection in the past, the pandemic H1N1 influenza virus caused relatively fewer cases of influenza-like illness in persons aged 65 or older [50].
It has been suggested that OAS may have contributed to the protection of the aged population against infection with the new H1N1 influenza virus [51]. On the other hand, several reports indicated that immunization with 2008-2009 influenza vaccines may be associated with a higher risk of influenza-like illness caused by infection with the new H1N1 influenza virus [52-54]. These observations raise the possibility that OAS induced by previous seasonal influenza vaccines may have enhanced infection by the new H1N1 influenza virus [52]. While these studies were not corroborated by others [55-59], they underscore the importance of investigating the impact of OAS on influenza vaccine efficacy and virus infection in preparation for future pandemics. Although the OAS between the QH and SX viruses observed in this study did not affect antibody induction against the second strain, it remains to be determined whether a different outcome would be found for H5N1 HPAIV from other clades. Further, while the HA of the QH and SX viruses are antigenically distinct, both the CD8 and CD4 T cell responses induced by these two HA DNA vaccines are cross-reactive, and the T cell responses were augmented by heterologous boosting as effectively as by homologous boosting. Evidence suggests that cross-reactive T cell responses may also contribute to protection against influenza virus infection [60]. However, while it is expected that priming with the SX HA DNA followed by boosting with the QH HA DNA would elicit higher levels of CD8 and CD4 T cell responses than a single immunization with the QH HA DNA vaccine, such enhanced T cell responses apparently did not result in improved protection against subsequent challenge with the QH H5N1 virus. Thus, such cross-reactive CD4 and CD8 T cell responses may not be very effective for protection against infection and pathogenesis by H5N1 HPAIV. Moreover, the impact of OAS on protection against subsequent heterologous H5N1 HPAIV infection in humans and other animals may also exhibit individual differences. As reported in our recent studies, the evolution of H5N1 HPAIV has resulted in continuing antigenic drift of these viruses [19]. The genetic and biological complexity of the H5N1 HPAIV detected in recent years in China alone indicates that reassortant viruses are constantly generated during natural infection of avian species, which may enable these lethal viruses to gain transmissibility in humans. In particular, the OAS observed in this study between two antigenically distinct HA antigens did not significantly affect the induction of antibody responses against the second virus strain. However, the evolution of H5N1 HPAIV may lead to the emergence of viruses with antigenically drifted HAs that share non-neutralizing epitopes, which could potentially exert a more deleterious effect on the induction of protective immune responses against other strains. Therefore, the existence of OAS between different clades of H5N1 HPAIV is also of significant concern for H5N1 influenza vaccine development. In a recent study, Wang et al. showed that immunization with polyvalent DNA vaccines successfully induced cross-protective immune responses against several H5N1 HPAIVs from different clades [61]. Further, Giles et al. reported the design of a computationally optimized HA that could confer cross-protection against infection by different strains of HPAIV [62].

Figure 7. Similar levels of CD8 and CD4 T cell responses against HA were induced by different DNA immunization regimens. Mice were immunized as described in Figure 1.
At two weeks after the final immunization, mice were sacrificed and splenocytes were prepared for analysis of T cell responses by intracellular cytokine staining and flow cytometry. (a) Comparison of CD8+ T cell responses induced in mice after DNA immunizations. Splenocytes were stimulated with a peptide corresponding to a known CD8+ T cell epitope in HA for 6 h in the presence of Brefeldin A, and then stained for cell-surface CD8 as well as intracellular IFN-γ, followed by flow cytometry analysis. The percentages of IFN-γ-producing CD8+ T cells in splenocytes from each individual mouse after stimulation are shown. (b) Comparison of CD4+ T cell responses induced in mice after DNA immunizations. Splenocytes from immunized mice were stimulated with dendritic cells that had been pulsed with different VLPs as indicated. After 6 h of stimulation in the presence of Brefeldin A, the cells were stained for cell-surface CD4 as well as intracellular IFN-γ, followed by flow cytometry analysis. The percentages of IFN-γ-producing CD4+ T cells in splenocytes from each individual mouse after stimulation are shown. Data are presented as the mean ± standard deviation. doi:10.1371/journal.pone.0041332.g007

These studies offer promising strategies to overcome the potential impact of OAS between different HPAIVs. The rapid evolution of antigenically diverse H5N1 HPAIV in domestic poultry and wild birds underscores the importance of surveillance studies to determine cross-reactivity between different virus strains and the need to develop an effective polyvalent or universal vaccine strategy against a potential H5N1 influenza pandemic.

Ethics Statement

Animal ethics approval for the immunization studies in mice and guinea pigs was obtained from the Institutional Animal Care and Use Committee (IACUC) at Emory University. All animal studies were performed in compliance with IACUC guidelines under approval from the IACUC at Emory University.

Production of DNA Vaccines

The genes for the HA of the QH and SX H5N1 viruses were amplified by RT-PCR following established protocols and then cloned into the plasmid vector pCAGGS (kindly provided by Dr. Kawaoka) under the chicken beta-actin promoter. Plasmids were amplified in E. coli DH5α and purified with an Endo-Free Megaprep DNA purification kit (Qiagen) following the manufacturer's protocols. The plasmids were then resuspended at 1 mg/ml in sterile PBS and stored at −80 °C until used for immunizations. Expression of HA was characterized by transfection of HeLa cells using Lipofectamine 2000 (Invitrogen), followed by SDS-PAGE and Western blot using mouse immune sera against QH and SX HA.

Immunization and Blood Sample Collection

Female BALB/c mice (6-8 weeks old) were obtained from Charles River Laboratory (Charleston, SC) and housed at the Emory University animal facility in micro-isolator cages. For DNA immunization, 50 µg of HA DNA vaccine was dissolved in 100 µl of PBS and injected into the quadriceps of both hind legs (50 µl per side). At two weeks after each immunization, blood samples were collected by retro-orbital bleeding, heat-inactivated, and stored at −80 °C until further analysis.

ELISA

Influenza HA-specific antibodies were measured in individual mouse sera by ELISA following established protocols [63,64].
Recombinant influenza viruses containing the HA and NA of the QH and SX H5N1 isolates and the internal genes from the PR8 influenza virus (6+2) were rescued by reverse genetics, grown in SPF embryonated chicken eggs, inactivated by formalin treatment, and purified by centrifugation. The purified inactivated viruses were then used as coating antigens in ELISA for detection of HA-specific antibodies. A standard curve was constructed by coating each ELISA plate with serial 2-fold dilutions of purified mouse IgG of known concentration. The concentrations of influenza HA-specific antibodies in serum samples were calculated using the obtained standard curves and expressed as the amount of HA-specific antibody per 1 ml of serum (ng/ml).

Hemagglutination Inhibition (HAI) Assay

The HAI assay was performed as described previously [64,65]. Briefly, mouse sera were heat-inactivated at 56 °C for 1 h and then treated with receptor-destroying enzyme (Denka Seiken, Tokyo, Japan) at 37 °C overnight according to the manufacturer's instructions. After treatment, 25-µl aliquots of 2-fold serially diluted serum samples were added to 25 µl of PBS containing 4 HA units of purified inactivated recombinant influenza virus. After incubation at 37 °C for 1 h, the serum-virus mixture was incubated with 50 µl of 0.5% chicken red blood cells (LAMPIRE Biological Laboratories, Pipersville, PA) at 25 °C for 45 min in a U-bottom 96-well plate. The HAI titer was defined as the reciprocal of the highest serum dilution that inhibited hemagglutination.

Intracellular Cytokine Staining and Flow Cytometry

Mice were sacrificed on day 14 after the final immunization to prepare splenocytes for analysis of T cell responses by intracellular cytokine staining coupled with flow cytometry, following established protocols [66,67]. Splenocytes were stimulated with a peptide corresponding to a CD8+ T cell epitope of the influenza virus HA (IYSTVASSL, synthesized at the Emory Microchemical Facility, Atlanta, GA) or an irrelevant peptide corresponding to a segment of the HIV Gag protein (AMQMLKETI, negative control) at 10 µg/ml for 6 h in the presence of 10 µg/ml Brefeldin A (Sigma, St. Louis, MO). CD8+ T cell responses were determined by intracellular cytokine staining of IFN-gamma and analyzed by flow cytometry on a BD FACSCalibur with CELLQuest software (Becton Dickinson, Franklin Lakes, NJ). For detection of HA-specific CD4+ T cells, bone marrow-derived dendritic cells (BMDCs) were prepared following established procedures [63], incubated overnight with influenza virus-like particles containing the HA of QH or SX (10 µg/ml), and then mixed with mouse splenocytes at a 1:5 ratio for 6 h in the presence of 10 µg/ml Brefeldin A. The levels of CD4+ T cell responses were determined by intracellular cytokine staining of IFN-gamma and analyzed by flow cytometry on a BD FACSCalibur with CELLQuest software.

Challenge Studies

Challenge of mice with H5N1 HPAIV was carried out in a biosafety level 3 animal facility at the Harbin Veterinary Research Institute in China, and all challenged mice were housed in high-efficiency particulate air (HEPA)-filtered isolators. Groups of eight 6- to 8-week-old female BALB/c mice (Beijing Experimental Animal Center, Beijing) were immunized with the different DNA vaccine regimens. The wild-type QH H5N1 influenza virus (A/bar-headed goose/Qinghai/3/2005) was grown in 10-day-old SPF (specific pathogen-free) embryonated chicken eggs and stored at −80 °C until use.
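Stepping back to the ELISA described above: antibody concentrations are read off a standard curve built from 2-fold dilutions of purified mouse IgG. As a hedged illustration only (the authors' actual curve-fitting software is not stated), such an interpolation is often done with a four-parameter logistic fit; the sketch below uses SciPy and entirely invented OD and dilution values.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Increasing four-parameter logistic: signal rises from `bottom` to `top` with concentration x."""
    return bottom + (top - bottom) * x**hill / (ec50**hill + x**hill)

# Hypothetical standard curve: 2-fold dilutions of purified mouse IgG (ng/ml) vs. OD450.
std_conc = np.array([500.0, 250.0, 125.0, 62.5, 31.25, 15.6, 7.8, 3.9])
std_od   = np.array([2.10, 1.85, 1.40, 0.95, 0.60, 0.35, 0.21, 0.13])

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 2.3, 60.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = popt

def od_to_conc(od):
    """Invert the fitted curve to read a concentration (ng/ml) from an OD value."""
    return ec50 * ((od - bottom) / (top - od)) ** (1.0 / hill)

sample_od = 1.10          # placeholder OD of a diluted serum sample
dilution_factor = 100     # placeholder serum dilution
print(f"HA-specific IgG ~ {od_to_conc(sample_od) * dilution_factor:.0f} ng per ml of serum")
```

The inversion step is only valid for OD readings that fall between the fitted lower and upper asymptotes, which is why samples are usually diluted to land on the linear part of the curve.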
Figure 8. A heterologous boost with the SX HA DNA vaccine protected mice against lethal challenge by the QH H5N1 influenza virus. (a) Schematic diagram of the immunization and challenge studies. (b) At 2 weeks after the second immunization, blood samples were collected and analyzed for antibody responses against the QH virus by ELISA, HAI, and virus-neutralization assays. Data are presented as the mean ± standard deviation. Dashed lines indicate the detection limits for the HAI titer (1:20) and the neutralizing titer (1:10). 1, Group QH-QH; 2, Group QH-SX; 3, Group SX-QH; 4, Group QH; 5, Group SX; 6, Group SX-SX; 7, Group C. (c) Mouse survival rates and body weight changes after lethal influenza virus challenge. At 4 weeks after the second immunization, mice were challenged by intranasal instillation with 100 MLD50 of the QH H5N1 influenza virus and then monitored daily for weight loss and disease progression. Mice found to display severe signs of illness or to lose more than 25% of body weight were sacrificed in accordance with IACUC guidelines. doi:10.1371/journal.pone.0041332.g008

The virus was titered in 10-day-old SPF embryonated chicken eggs to determine the EID50 (egg infectious dose), and the MLD50 (mouse lethal dose) was determined in 8-week-old female BALB/c mice. At 4 weeks after the final immunization, mice were lightly anesthetized with CO2 and inoculated intranasally with 100 MLD50 of the QH H5N1 influenza virus (approximately 1000 EID50) in a volume of 50 µl and monitored daily for weight loss and mortality.

Virus Neutralization Assay

Sera were heat-inactivated at 56 °C for 1 h and then mixed with 100 TCID50 (tissue culture infectious dose) of the QH H5N1 influenza virus in serial 2-fold dilutions (starting dilution 1:10). After 1 h of incubation, the virus-serum mixtures were added to MDCK cells seeded in a 96-well plate and incubated for 1 h (duplicates in six wells for each sample dilution). The virus-serum mixture was then replaced with complete medium. At 48 h post-infection, the culture medium was collected and analyzed for hemagglutination (HA) activity. The virus-neutralization titer was defined as the reciprocal of the highest serum dilution that inhibited the hemagglutination activity of medium from infected cells.

Statistical Analysis

The average value and standard deviation of the immune responses within each group were calculated for comparison, and the significance of the differences between groups was determined by Student's t-test using Excel (Microsoft, Redmond, WA). Statistical analysis of the survival curves was carried out using GraphPad Prism (La Jolla, CA). The survival curves of the different groups after challenge were built using the Kaplan-Meier method, and post-test comparison of the curves of two groups was performed by the Gehan-Breslow-Wilcoxon method.

Figure S1. Characterization of QH and SX HA expression by DNA vaccines. HeLa cells were grown to confluence in six-well plates and then transfected with the QH or SX HA DNA vaccines using Lipofectamine 2000. At 24 h post-transfection, cell lysates were analyzed by SDS-PAGE and Western blot. Expression of the HA proteins was detected using a mixture of mouse sera against QH and SX HA as primary antibodies and HRP-conjugated goat anti-mouse antibodies as secondary antibodies.

Author Contributions
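The Kaplan-Meier construction and Gehan-Breslow-Wilcoxon comparison described in the Statistical Analysis section were done in GraphPad Prism. Purely as a hedged illustration of the same comparison, the sketch below uses the Python lifelines package (an assumption, not the authors' pipeline) with invented survival times.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented example data: days survived after challenge; 1 = death observed, 0 = censored (alive at study end).
days_vaccinated = np.array([14, 14, 14, 14, 14, 14, 14, 14])   # e.g. a fully protected group
events_vaccinated = np.zeros_like(days_vaccinated)
days_control = np.array([6, 7, 7, 8, 8, 9, 9, 10])             # e.g. a control group, all succumb
events_control = np.ones_like(days_control)

# Kaplan-Meier estimates for each group.
kmf_v = KaplanMeierFitter().fit(days_vaccinated, event_observed=events_vaccinated, label="vaccinated")
kmf_c = KaplanMeierFitter().fit(days_control, event_observed=events_control, label="control")
print("median survival (vaccinated, control):",
      kmf_v.median_survival_time_, kmf_c.median_survival_time_)

# Wilcoxon-weighted log-rank test, i.e. the Gehan-Breslow(-Wilcoxon) comparison of the two curves.
result = logrank_test(days_vaccinated, days_control,
                      event_observed_A=events_vaccinated, event_observed_B=events_control,
                      weightings="wilcoxon")
print(f"Gehan-Breslow-Wilcoxon p-value: {result.p_value:.4f}")
```

The Wilcoxon weighting emphasizes early deaths, which is why it is often preferred over the plain log-rank test for short, fully observed challenge experiments like the one described here.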
8,243.2
2012-07-31T00:00:00.000
[ "Biology", "Medicine" ]
Dielectronic recombination rate in statistical model

The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on the idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates, which are compared with modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with tungsten impurities.

Dielectronic recombination (DR) plays an important role in establishing the ionization equilibrium of multielectron ions in high-temperature plasmas. In modern thermonuclear facilities, tungsten is used for the elements of the discharge chamber; thus, DR rate calculations for its ions are significant for the evaluation of tungsten impurities. Detailed level-by-level calculations of the DR rate dependence Q_dr(T) on the plasma temperature T for multielectron ions are a very difficult problem, since for complex atoms like tungsten they require taking into account a huge number of atomic states and configurations [1]. The general expression for the DR rate is a sum over branching ratios assembled from the total radiative rates W_R(i) and autoionization rates W_a(i,nl) of the initial doubly excited states "i" and the partial autoionization rates W_a(i,nl;f) [1]:

Q_dr(T) = (4π Ry / T)^(3/2) (a_0^3 / 2) Σ_{i,nl,f} (g_i / g_f) [ W_a(i,nl;f) W_R(i) / (W_a(i,nl) + W_R(i)) ] exp(−E_{i,nl} / T),   with   W_a(i,nl) = Σ_f W_a(i,nl;f),

where a_0 is the Bohr radius, 2Ry is the atomic energy unit, Z_i is the charge of the ion (entering the energy E_{i,nl} of the autoionizing state above the ionization threshold), and g_i, g_f are the statistical weights of the initial and final states of the ion core. The autoionization states are described by the sets of quantum numbers of the initial state of the ion core (i) and of the external electron (nl); the set of quantum numbers (f) describes the final stationary states of the ion core.
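To make the structure of the rate expression concrete, the sketch below evaluates the Boltzmann-weighted sum of branching ratios for a handful of autoionizing states, using the reconstructed form of the equation above. The rates, energies, and statistical weights are illustrative placeholders, not values from this note.

```python
import numpy as np

RY_EV = 13.605693      # Rydberg energy, eV
A0_CM = 0.52917721e-8  # Bohr radius, cm

def dr_rate(T_eV, resonances):
    """Schematic DR rate coefficient (cm^3/s) as a Boltzmann-weighted branching-ratio sum.

    Each resonance carries: E (energy above threshold, eV), g_i/g_f (statistical-weight
    ratio of doubly excited and final core states), W_af (partial autoionization rate, 1/s),
    W_a (total autoionization rate, 1/s), W_R (total radiative rate, 1/s).
    """
    prefactor = (4.0 * np.pi * RY_EV / T_eV) ** 1.5 * A0_CM ** 3 / 2.0
    total = 0.0
    for r in resonances:
        branching = r["W_af"] * r["W_R"] / (r["W_a"] + r["W_R"])
        total += (r["g_i"] / r["g_f"]) * branching * np.exp(-r["E"] / T_eV)
    return prefactor * total

# Illustrative (made-up) resonances, roughly in the range typical for highly charged ions.
resonances = [
    {"E": 150.0, "g_i": 8,  "g_f": 2, "W_af": 1e13, "W_a": 3e13, "W_R": 1e12},
    {"E": 320.0, "g_i": 12, "g_f": 2, "W_af": 5e12, "W_a": 2e13, "W_R": 8e11},
]
for T in (100.0, 500.0, 2000.0):  # electron temperature, eV
    print(f"T = {T:7.1f} eV   Q_dr ~ {dr_rate(T, resonances):.3e} cm^3/s")
```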
In the statistical approach, the multielectron ion is represented by a system of harmonic oscillators with the local plasma frequencies ω_p(r) = [4π e^2 n(r) / m]^(1/2), expressed via the Thomas-Fermi electron density distribution n(r; Z, Z_i), where Z_i is the ion charge, Z is the charge of the ion nucleus, and e, m are the charge and mass of the electron, respectively. This model allows a universal description of different chemical elements. The partial radiative decay rates W_R(i,f) can be expressed via the oscillator strengths of these collective excitations. The partial autoionization decay rate W_a(i,nl;f) can be represented via the electron excitation cross-section σ_exc(f; i, nl) averaged over the resonances near threshold. To simplify the calculations, it is conventional to average W_a(i,nl;f) over the orbital momentum l of the external electron [1,4,5], which, with the use of quasiclassical asymptotics, yields W_a(i,n;f). Since in the statistical approach the oscillator strengths are also expressed as functions of the electron density distribution, the total DR rate is finally represented as a functional of the electron density.

The comparison of the total DR rate for the tungsten ion W36+ as a function of T, calculated within the statistical approach and by the level-by-level package of codes [6] (in which the DR was finally evaluated using the code FAC [7,8]), is presented in Figure 1. As one can see, there is good agreement between the statistical model and the detailed calculations. As seen from this comparison, the statistical approach provides a universal description of the DR rates of multielectron ions and considerably facilitates their numerical evaluation.

Fig. 1. The total DR rate for the tungsten ion W36+ as a function of the electron temperature. Curve 1 is the calculation by the statistical model; curve 2 is the data of the detailed calculation [6].
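As a small illustration of the statistical model's key ingredient, the sketch below evaluates the local plasma frequency ω_p(r) = [4π e^2 n(r)/m]^(1/2) in atomic units. The radial density used here is a crude exponential stand-in for the Thomas-Fermi distribution (solving the actual Thomas-Fermi equation is beyond this sketch), and the charge values are only taken to match the W36+ example.

```python
import numpy as np

def local_plasma_frequency(n_e):
    """Local plasma frequency in atomic units (e = m = hbar = 1): w_p = sqrt(4*pi*n)."""
    return np.sqrt(4.0 * np.pi * n_e)

Z, Z_ion = 74, 36                       # tungsten nucleus, W36+ as in Fig. 1
r = np.linspace(0.01, 2.0, 200)         # radius in Bohr radii

# Placeholder bound-electron density: normalized so that the integral over all
# space gives the Z - Z_i bound electrons of the ion.
n_e = (Z - Z_ion) * np.exp(-r) / (4.0 * np.pi * r)

w_p = local_plasma_frequency(n_e)
for x in (0.1, 0.5, 1.0):
    print(f"w_p(r = {x:.1f} a.u.) ~ {np.interp(x, r, w_p):.3f} a.u.")
```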
1,092
2017-01-01T00:00:00.000
[ "Physics" ]
Deficiency of the HGF/Met pathway leads to thyroid dysgenesis by impeding late thyroid expansion

The mechanisms of bifurcation, a key step in thyroid development, are largely unknown. Here we find three zebrafish lines from a forward genetic screen with similar thyroid dysgenesis phenotypes and identify a stop-gain mutation in hgfa and two missense mutations in met by positional cloning from these zebrafish lines. The elongation of the thyroid primordium along the pharyngeal midline was dramatically disrupted in these zebrafish lines carrying a mutation in hgfa or met. Further studies show that the MAPK inhibitor U0126 could mimic thyroid dysgenesis in zebrafish, and the phenotypes are rescued by overexpression of constitutively active MEK or Snail, downstream molecules of the HGF/Met pathway, in thyrocytes. Moreover, HGF promotes thyrocyte migration, which is probably mediated by downregulation of E-cadherin expression. The delayed bifurcation of the thyroid primordium is also observed in thyroid-specific Met knockout mice. Together, our findings reveal that HGF/Met is indispensable for the bifurcation of the thyroid primordium during thyroid development, mediated by downregulation of E-cadherin in thyrocytes via the MAPK-Snail pathway.

Reviewer #2 (Remarks to the Author):

Although of biological interest, and notably with use of advanced and sound methodologies in the fish part, there are several issues of concern in this manuscript that require consideration.

1. The bearing concept of comparing thyroid morphogenesis in zebrafish and mouse is flawed by the fact that in fish the thyroid does not form a solid organ but rather consists of a row of loosely connected follicles, i.e. what is being studied is folliculogenesis of cells that are already functionally differentiated. Although this process takes place close to, and depends on, the pharyngeal vasculature, it does not correspond to the bilateral elongation manufactured by mainly undifferentiated progenitor cells during thyroid organogenesis in mice (and probably in all mammals, including man, that display a bilobed thyroid).

2. Blocked propagation of thyroid follicle formation in Hgf/Met-deficient zebrafish related to EMT inhibition is interesting. However, no detailed information is provided on whether this effect depends on inhibited dissociation of cells from the first-formed follicle or inhibited re-association of cells that have previously undergone partial EMT. Moreover, in view of recent findings in the mouse embryonic thyroid that folliculogenesis designating epithelial differentiation takes place much earlier than previously understood (Johansson et al, Front Endocrinol, 12:760541, 2021), comparison to mice should rather focus on folliculogenesis than on proliferation/expansion of the thyroid primordium. In other words, it would be very interesting to learn if HGF/Met-dependent partial EMT might regulate thyroid folliculogenesis across species.

3. There is limited information on the extrathyroidal phenotype of mutants, and the potential influence on organogenesis as a whole. Although the authors convincingly show that Hgf rescues follicle generation in mutants, the likely source of Hgf in vivo is the surrounding mesenchyme, supported by data provided in Fig. 2. Are overall embryo size and the development of major organ systems or, in particular, other anterior/pharyngeal endoderm derivatives (e.g. parathyroid and ultimobranchial gland) really unaffected in mutants? If the mutant pharyngeal phenotype indeed is restricted to the thyroid, this should be documented.

4.
Morphometric calculations of mouse thyroid shape and size (Fig. 6) infer a role of the Hgf/Met pathway prior to E11.5, which represents the end of descensus and the start of bilateral elongation of the midline thyroid primordium. What happens thereafter in mutants? Is the thyroid phenotype further aggravated, with impact on subsequent growth and glandular development, or do you see catch-up of cell proliferation and normal progression of the bilobation process? Notably, Fgf10 regulates thyroid developmental growth from E12.5 and onwards (Liang et al, Development 145:dev146829, 2018). Thus, it should be possible to answer whether Hgf/Met action is limited to a previous developmental stage, as suggested by the current zebrafish data, and also whether the mutant phenotype merely is the result of delayed thyroid development. Normal S-T4 and only doubled S-TSH levels postnatally are consistent with subclinical hypothyroidism and argue that thyroid tissue volume and function essentially are sufficient, but is the gland smaller than in wildtype?

Minor:
- maybe mutant "families" should be rephrased to mutant strains or lines
- Figs. 1g and h might be excluded or at least reduced in size to conform with the others.
- Analysis of TSH (thyrotropin) amounts should preferably be added as supplementary to Figs. 3i-l to estimate the level (differences) of hypothyroidism among mutants.
- Morphometric calculation of cell numbers (e.g. in Figs. 3d-h and n) should be indicated. Both the Methods section and the figure legend(s) are lacking this information.
- Figs. 5a-j, based on cell line analysis, do not add much and may be replaced by other data more of interest to the paper's major content. The results largely confirm previous well-known effects of Hgf/Scatter factor and could be removed in toto or switched to Supplementary.
- Fig. 6A, the IF images are generally of poor quality. Moreover, it is expected that phospho-ERK, designating activation of the MAPK pathway, should be translocated to the nucleus and captured by IF at least in some cells, which is not the case. This raises doubt on antibody specificity, or alternatively on the biological effect.

English in need of editing, preferably by a native speaker.

Reviewer #3 (Remarks to the Author):

Summary of the Manuscript

The MS of Fang et al.
describes defective thyroid gland development in three zebrafish mutant lines as a result of disruption of HGF-Met signaling. The mutant lines were previously generated by the same group in an ENU mutagenesis screen and were selected for further studies because of a unique thyroid phenotype. The group mapped the mutations to two loci encoding hgfa and met, two genes that act in the same signaling pathway (hepatocyte growth factor signaling), as hgfa encodes a ligand for the receptor tyrosine kinase met. For both hgfa and met, the Zebrafish Information Network (ZFIN) database lists quite a number of mutant alleles, but a thyroid developmental phenotype has so far not been reported. The authors provide limited data on the expression of the two genes, showing expression within (met) or adjacent to (hgfa) the developing zebrafish thyroid primordium. A series of additional in vitro experiments confirmed that the met mutations p.I284 and p.E217 impaired met activity. Rescue experiments based on injection of WT met or WT hgfa could rescue the thyroid phenotypes in 5-day-old zebrafish. The authors next characterized the thyroid phenotypes in some more detail, showing that late phases of thyroid development are specifically affected, that defective thyroid morphogenesis results in hypothyroidism, and that adult mutant fish have a reduced number of thyroid follicles. Live imaging of transgenic thyroid-specific reporter lines convincingly visualizes the defective expansion of the zebrafish thyroid. The authors next used a small molecule screen and showed that inhibition of ERK, but not of PI3K or STAT3, intracellular signaling phenocopies the thyroid defects seen in mutant fish. In a very interesting section of the MS, the authors show that reduced cell adhesion is critical for thyroid expansion in zebrafish and that increased cell adhesion in mutants with perturbed HGF-Met signaling is likely one critical factor causing the thyroid defects. In the final part of the MS, the authors describe limited phenotyping studies of a conditional Met-KO mouse (thyroid-specific invalidation of Met). The data presented appear to show impaired ERK signaling in the thyroid primordium of KO embryos but are not very convincing in the way they are presented.

General comments

The identification of HGF-Met signaling as a critical pathway to regulate late expansion of the thyroid is a novel finding. To my knowledge, the HGF-Met pathway has not yet been linked to defects in thyroid development and function in mouse models or in the context of congenital hypothyroidism in humans. A particular strength of the study by Fang et al. is the demonstration that HGF-Met signaling is linked to Snail-mediated regulation of cell-cell adhesion and that tight regulation of adhesion dynamics could be critical during certain phases of thyroid primordium expansion. I appreciate the attempts by the authors to translate their findings in zebrafish to mouse thyroid organogenesis to demonstrate the broader role of this morphogenetic process, but the observations in embryonic mice are poorly presented. Despite the novelty of the findings and the potential relevance of the reported data to better understand thyroid morphogenesis, the manuscript in its current form does not have sufficient quality to recommend it for publication in Nature Communications. Below, I summarize some of my major concerns.
Major concerns

The use of English language needs to be greatly improved. In several instances, poor English prevented me from understanding what exactly the authors intended to describe. Unfortunately, in its current form, the MS is written for a very small community of readers interested or experienced in thyroid morphogenesis, and the MS might fail to attract a broader readership. Due to my personal expertise in pharyngeal organogenesis in zebrafish and mouse embryos, I could follow most descriptions and explanations provided by the authors. However, when reading the MS, I had the strong impression that readers who are not familiar with zebrafish or mouse thyroid organogenesis might experience great difficulties in understanding the anatomical details or developmental processes described. The MS would benefit greatly from short introductory notes on how thyroid development proceeds (particularly in zebrafish), with some more details on the anatomical landmarks that characterize the environment in which the thyroid primordium develops.

The authors should also improve the use of an appropriate terminology to describe anatomical details and morphogenetic processes. For example, avoid wording like: "dot-like" (line 91), "resembling a ball positioned at the bottom of the aortic arch" (line 118), "scattering and migration abilities", and "from a rounded ball to an elongated bar that projected bilaterally" (line 373). I also suggest not using the term "congenital hypothyroidism" (line 211) in relation to zebrafish embryo models. Congenital hypothyroidism describes a condition of reduced circulating TH at birth, so the term is not applicable to zebrafish embryos. I strongly disagree with the description of the main phenotype as a defect in "rostral" elongation. As the authors demonstrate neatly in their live imaging experiments, there is clearly a defect in the "caudal" expansion of the primordium. The apparent expansion/elongation of the normal zebrafish thyroid primordium between day 2 and day 5 is due to the addition of cells to the caudal portion of the primordium (as the movie nicely shows). The many WISH images in Figure 1 also show that the caudal portion of the thyroid is missing.

Selected specific comments

I am not satisfied with the quality of some WISH experiments. Particularly the stainings of hgfa and also of foxe1 are very weak and do not show all structures that express these genes (i.e. there is labeling of the pharyngeal epithelium by the foxe1 probe). This comment relates to the statements by the authors that hgfa expression adjacent to the thyroid was only detectable between 48 and 55 hpf, but the defects in thyroid expansion happened during the period from 55 to 120 hpf. This would imply that HGF-Met signaling is active only during the onset of thyroid expansion, with lasting consequences even in the absence of continuous hgfa expression near the met-expressing thyroid.

Graphs in Figures 3D and 3G visualize the number of "mature follicular cells". I see many more cells in the corresponding images in 3C and 3F. The same is true for the graph in Figure 3N. Please check.
Line 246-248: the authors note "These data indicated that the Hgfa-met pathway probably regulated rostral elongation along a pair of hypobranchial arteries in zebrafish during late thyroid development". It would be very informative to visualize the relative positioning of the thyroid and the paired hypobranchial arteries in WT and mutant fish during the process of elongation. A live imaging video and some images in Figure S3B are provided, but only for the period before elongation. With respect to the specific anatomical relationship between the pharyngeal vessels and the thyroid, the question arises whether it is the vessels that express hgfa. From a larger conceptual perspective, this is an extremely relevant question, because if vessels are the source of hgfa, then HGF-Met signaling could also be compromised in cases of vascular maldevelopment.

Figure 6. I could not orient myself on the images shown in Figure 6. Are these really sagittal sections? How can a "bifurcation index" be calculated on sagittal sections? Please provide additional images that contain anatomical landmarks (such as the nearby vessels).

Reviewer Comments and Reviewer Reply:

We greatly appreciate your encouraging comments and suggestions. Thank you for your useful comments and suggestions. We have modified the manuscript accordingly, and detailed corrections are listed below point by point. Revised portions are marked in blue in the paper.

1) The details related to quantification of thyroid follicular cells in adults ( 2) Similarly, in the same figure, for 5 dpf quantification (Fig. 3D, G), the Y-axis should be "number of follicles" rather than "number of mature follicular cells").
Response: Thank you for carefully reviewing our paper. We are sorry for the careless mistake. We have revised the Y-axis of Figures 3E, F, G, H, P, Q to "number of thyroid follicles".

3) It would be good if the authors quantify the number of thyroid follicular cells at 5 dpf. And if possible the proliferation of thyrocytes. It could be possible that HGF/Met signaling regulates the cell cycle and thereby the shape and size of thyroid follicles.
Response: Thank you for your valuable suggestions. As we have no transgenic fish line labelling thyrocyte nuclei, it is not easy to accurately quantify the number of thyrocytes. We therefore detected the proliferation of thyrocytes in wildtype and met mutant zebrafish embryos at 4 dpf by EdU staining, and the results showed that thyrocyte proliferation in mutants remained relatively unchanged (Fig. S3).

4) Edits in summary:
Line 37: It's -> It has been
Response: Thank you for carefully reviewing our paper. We have revised the mistake in the manuscript (line 55).
Line 40: "in vivo" italicize
Response: We felt sorry for our careless mistakes. We have revised the mistake in the manuscript (line 57).
Line 44: restored -> rescued
Response: Thank you for your careful review. We have revised it in the manuscript (line 63).
All transgenic lines are named as Tg(promoter:construct). It would be good to cite the article related to the single-cell atlas of zebrafish thyroid used in
Response: Thank you for your suggestions. We have revised the nomenclature for zebrafish transgenic lines and marked it in blue in the manuscript.
Response: Thank you for your valuable comments. The concern of reviewer #2 that "the thyroid morphology in zebrafish is different from that in mice, and the differentiation of the thyrocytes during folliculogenesis also differs between zebrafish and mice — in zebrafish, folliculogenesis involves thyrocytes that are already functionally differentiated" is correct. In fact, the thyroid follicles of zebrafish embryos at 55 hpf to 72 hpf could synthesize thyroid hormone, as detected by whole-mount immunofluorescence assays using a T4 antibody. We have added the differences in the process of thyroid development to the introduction (lines 86-97).

Response: Thank you for your valuable suggestions. In the revised manuscript, we further observed the phenotype in Met-CKO mice from E11.5 to E15.5. The results showed that the bifurcation of the thyroid primordium was significantly delayed in Met-CKO mice at early and late E11.5 when compared with that in wildtype mice. However, the bilobation process of the thyroid in Met-CKO mice had caught up with that in wildtype mice at E12.5 and E13.5 (lines 361-363 and Fig. 6Da'-6Dd'). The thyroid of Met-CKO mice showed normal formation of the left and right lobes at E15.5 (lines 363-364 and Fig. 6De'-6Df'). These findings suggested that the HGF/Met pathway plays key roles in the bifurcation of the thyroid primordium but does not influence the bilobation. Of course, the question of whether the HGF/Met pathway regulates folliculogenesis is very interesting, and we will investigate this topic in the future.

Response: Thank you for your valuable suggestions. We measured the overall embryo size (including length, ear and eye), and the results showed that there was no obvious difference between the mutants and their siblings at 5 dpf. We have added these results to the manuscript (line 254 and Figs. S2B-S2E). We also observed the development of two other organs derived from the pharyngeal endoderm, the parathyroid and the ultimobranchial gland, by WISH. The results showed that, compared with siblings, the ultimobranchial gland (marked by calca) and the parathyroid (marked by gcm2) were unaffected in 3 dpf and 5 dpf mutant zebrafish (lines 255-259 and Figs. S2F-S2G); these data have been added to the revised manuscript.

Response: Thank you for your valuable suggestions. We further observed the phenotype of the thyroid primordium in Met-CKO mice after E11.5. The results have been added at lines 361-364 and lines 377-379, and the possible developmental mechanism is discussed as follows: "Notably, we found that the bifurcation of the thyroid primordium was significantly delayed in Met-CKO mice at early and late E11.5 when compared with that in wildtype mice. However, the bilobation process of the thyroid in Met-CKO mice had caught up with that in wildtype mice at E12.5 and E13.5 (Fig. 6Da'-6Dd'). Moreover, the thyroid of Met-CKO mice showed normal formation of the left and right lobes at E15.5 (Fig. 6De'-6Df'). These data suggested that the HGF/Met pathway plays key roles in the bifurcation of the thyroid primordium but does not influence the bilobation. In fact, thyroid developmental growth from E12.5 onwards in mice is regulated by Fgf10 (Liang et al., 2018)."

- maybe mutant "families" should be rephrased to mutant strains or lines
Response: Thank you for your advice. We have rephrased "families" to "strains" in the revised manuscript.

- Figs. 1g and h might be excluded or at least reduced in size to conform with the others.
Response: Thank you for your suggestion. We have reduced the picture size (Figs. 1g-1h).
- Analysis of TSH (thyrotropin) amounts should preferably be added as supplementary to Figs. 3i-l to estimate the level (differences) of hypothyroidism among mutants.
Response: Thank you for the valuable suggestion. We have added the analysis of TSH amounts to the figures (Figs. 3M-3N and lines 267-269).

- Legend to Fig. 3M is lacking a key to abbreviations (PA and H) and numbers (1-5). Please check all legends for similar.
Response: Thank you for carefully reviewing our paper. We have added the abbreviations to the legend (B, brain; p, pharyngeal; PA, pericardial aorta; H, heart; 1-5, cartilage; *, thyroid follicles), and we have checked all legends for similar issues.

Response: Thank you for your suggestions. We have carefully examined the paper to the best of our ability and polished the English using Springer Nature Author Services (https://authorservices.springernature.com/language-editing/). These changes did not influence the content and framework of the paper.

We were sorry for the unclear description of how thyroid development proceeds in zebrafish and mouse embryos. We have provided more details about thyroid primordium development in zebrafish and mice in the introduction of the revised manuscript (lines 86-97).

2) The authors should also improve the use of an appropriate terminology to describe anatomical details and morphogenetic processes. For example, avoid wording like: "dot-like" (line 91), "resembling a ball positioned at the bottom of the aortic arch" (line 118), "scattering and migration abilities" and "from a rounded ball to an elongated bar that projected bilaterally" (line 373). I also suggest not to use the term "congenital hypothyroidism" (line 211) in relation to zebrafish embryo models. Congenital hypothyroidism describes a condition of reduced circulating TH at birth, so the term is not applicable to zebrafish embryos.
Response: Thank you for your careful review and for giving many valuable suggestions. We have corrected all the inappropriate terminology that you listed above according to the reference (Opitz et al., Dev Biol. 2012;372(2):203-216), and we have changed the term "congenital hypothyroidism" to "hypothyroidism".

3) I strongly disagree with the description of the main phenotype as a defect in "rostral" elongation. As the authors demonstrate neatly in their live imaging experiments, there is clearly a defect in the "caudal" expansion of the primordium. The apparent expansion/elongation of the normal zebrafish thyroid primordium between day 2 and day 5 is due to the addition of cells to the caudal portion of the primordium (as the movie nicely shows). The many WISH images in Figure 1 also show that the caudal portion of the thyroid is missing.
Response: Thank you for your comments, and we felt sorry for our poor English. We have corrected the description from "rostral" to "caudal".

4) Selected specific comments: I am not satisfied with the quality of some WISH experiments. Particularly the stainings of hgfa and also of foxe1 are very weak and do not show all structures that express these genes (i.e. there is labeling of the pharyngeal epithelium by the foxe1 probe). This comment relates to the statements by the authors that hgfa expression adjacent to the thyroid was only detectable between 48 and 55 hpf, but the defects in thyroid expansion happened during the period from 55 to 120 hpf.
This would imply that HGF-Met signaling is active only during the onset of thyroid expansion, with lasting consequences even in the absence of continuous hgfa expression near the met-expressing thyroid.
Response: Thank you for your valuable comments. We have repeated the WISH experiments for foxe1 to obtain much clearer pictures (Fig. S4A). The weak staining of hgfa by WISH was caused by the low-level expression of hgfa in zebrafish embryos. In order to improve the quality of the hgfa WISH experiments, we used Tg(tg:GFP) transgenic embryos for fluorescence in situ hybridization and immunofluorescence staining to mark hgfa and tg more clearly. The results have been combined into Fig. 2 in the revised manuscript. Again, the hgfa signal around the thyroid in 72 hpf embryos was not observed by fluorescence in situ hybridization (data not shown). The consequence of hgf/met defects is obvious, and we think the reason for this is either that the sensitivity of the detection method is limited (there may be some hgfa expression that cannot be detected by the method we used) or that hgf/met signaling is active only during the onset of thyroid expansion, which is supported by the results of the Erk pathway inhibitors (Fig. 4D).

5) Graphs in Figures 3D and 3G visualize the number of "mature follicular cells". I see many more cells in the corresponding images in 3C and 3F. The same is true for the graph in Figure 3N. Please check.
Response: Thank you for carefully reviewing our paper, and we felt sorry for the carelessness. We mean "the number of thyroid follicles". We have corrected the mistakes (Fig. 3).

Response: Thank you for your valuable comments. We have provided images for the period of thyroid elongation in zebrafish (Fig. S4B). The results showed that the truncating hgfa mutation in zebrafish does not affect the development of the vessels surrounding the thyroid in 72 hpf embryos (Fig. S4B). These data have been added to the revised manuscript.

7) Figure 6. I could not orient myself on the images shown in Figure 6. Are these really sagittal sections? How can a "bifurcation index" be calculated on sagittal sections? Please provide additional images that contain anatomical landmarks (such as the nearby vessels).
Response: Thank you for your valuable comments. We were sorry for the misleading statement. The pictures in Fig. 6 are transverse sections instead of sagittal sections. We have provided new figures with transverse sections of the thyroid primordium in E11.5 mouse embryos (Fig. 6A) and marked the relative position of the thyroid primordium (th) to the surrounding vessels (the third pharyngeal arch artery, aortic arch). We also observed thyroid development in mouse embryos from E12.5 to E15.5. We found that the HGF/Met pathway plays key roles in the bifurcation of the thyroid primordium but does not influence the bilobation. These data have been added to the revised manuscript (lines 363-364 and Fig. 6D).

Reviewer #1 (Remarks to the Author): The authors have addressed all the concerns raised by me. I recommend the manuscript for publication.

Reviewer #2 (Remarks to the Author): This reviewer has no further comments or suggestions to the authors.

Reviewer #3 (Remarks to the Author): The review of the revised manuscript of Fang et al.
was challenging to me. I really think that the core findings of the study are novel and exciting. The hgf/met pair identification is an important example demonstrating the role of a precise stromal-derived factor and its cognate thyrocyte receptor in early thyroid morphogenesis. Thyroid follicles are epithelial structures, and the described role of hgf/met in regulating an EMT-mediated process critical for the formation of new naïve follicles is very interesting from the conceptual perspective. Moreover, some of the new data presented by the authors (Fig. 2A_a''',b''') suggest that pharyngeal vessels are a likely source of hgfa (as judged from the shape of the hgfa expression domain). If confirmed, hgfa would constitute a first candidate molecule that contributes to the so often "anticipated" vascular guidance role during early thyroid morphogenesis. The importance of identifying one specific vascular-borne factor that affects morphogenic processes cannot be overstated.

Unfortunately, the quality of reporting is in too many places not satisfying. The English still needs a lot of editing. The methods need more care for details.

Specific comments:

In Fig. 2H, "Western blots analyzed the expression …. after treated with or without human HGFα protein". No HGFα conditions are shown in this panel.

Fig. 3O shows thyroid follicles in the hypobranchial region in 1.5-month-old fish. TH levels indicate a hypothyroid condition, but the authors did not mention any change in follicular morphology. There are numerous zebrafish studies that showed a classical goitrous thyroid phenotype in experimentally induced or genetically determined hypothyroidism. Was there any change in thyroid follicular cell morphology in these hypothyroid mutants (given the proposed increase in TSH; see comments below)?

The Tübingen background would be the STRAIN of zebrafish that was used for mutagenesis. The mutation procedure would generate mutant LINES. Please use the term LINE throughout the manuscript. The terms EMBRYO (up to 72 hpf) and LARVAE are sometimes used for the same group of fish in a single sentence. By convention, the zebrafish "embryo" takes the name of "larvae" from 72 hpf onwards.

Change "continuously activated MEK2" to "constitutively active MEK".

From the perspective of reproducibility, the methods section still lacks in many cases minimal information on procedures, validation efforts, or how often specific analyses had been replicated. A few examples of incomplete information in the methods section: It is a common standard to clearly state the origin of each transgenic zebrafish line (a reference to the original publication describing the generation of the line is a standard in the field). The authors generated a new mouse line (floxed Met allele). Apart from mentioning primer pairs, no information on genotyping results is presented. These mice were crossed with Pax8-Cre to generate a thyroid-specific KO model, again with no information at all about recombination efficiency, undermining any conclusion about the mild thyroid phenotype. The description of the zebrafish constructs reads awkwardly. The use of the Gateway strategy is worth mentioning to understand the construction process. Experiments using pTol2 construct injections would benefit from a bit more detail on the injection procedure (e.g., amount of plasmid).
The method used to estimate TSH levels in zebrafish by means of aqueous extracts of whole zebrafish tissue is completely new to me, and I am actually surprised that this approach could work for a peptide hormone. How did the authors validate that the human?/mouse? kits for TSH can indeed detect zebrafish TSH? The authors referred to a paper by Yung 2011, but this paper does not describe any TSH measurements.

Counting of thyroid follicles in adult fish. Images in Fig. 3O show sagittal sections (not horizontal as described in the methods). How was the counting done (manually, section by section)? How many sections per fish were analyzed to derive a total number per fish? How many fish were analyzed?

Western blotting. No information apart from an incomplete description of the antibodies that were used.

Response to the reviewers point by point:

REVIEWER COMMENTS

Reviewer #3 (Remarks to the Author): The review of the revised manuscript of Fang et al. was challenging to me. I really think that the core findings of the study are novel and exciting. The hgf/met pair identification is an important example demonstrating the role of a precise stromal-derived factor and its cognate thyrocyte receptor in early thyroid morphogenesis. Thyroid follicles are epithelial structures, and the described role of hgf/met in regulating an EMT-mediated process critical for the formation of new naïve follicles is very interesting from the conceptual perspective. Moreover, some of the new data presented by the authors (Fig. 2A_a''',b''') suggest that pharyngeal vessels are a likely source of hgfa (as judged from the shape of the hgfa expression domain). If confirmed, hgfa would constitute a first candidate molecule that contributes to the so often "anticipated" vascular guidance role during early thyroid morphogenesis. The importance of identifying one specific vascular-borne factor that affects morphogenic processes cannot be overstated.
Response: Bifurcation during thyroid development is the transverse elongation of the thyroid primordium along the third pharyngeal arch arteries after its descent (Fagman et al., 2004; Fagman et al., 2007; Kameda et al., 2009; Nilsson and Fagman, 2017); however, the factors responsible for this process remain unclear. A previous study revealed that sonic hedgehog (Shh) signals regulate the bifurcation, which is likely secondary to severe malformation of the vascular tree emerging from the outflow tract (Zhang et al., 2005). Thus, the reviewer's hypothesis that hgfa derived from the vasculature could be a first candidate molecule contributing to the "anticipated" vascular guidance role during bifurcation in thyroid development is very interesting and meaningful. In fact, hgfa has been reported to be expressed in endothelial cells (Leung et al., Oncogene, 2017). However, we further analyzed the single-cell RNA-seq data of mouse thyroid (Yang et al., Nat Commun, 2023) in our lab and found that fibroblasts were the main source of Hgf in thyroid tissue and that there was almost no Hgf expression in endothelial cells (with Cdh5 as a specific marker of endothelial cells). These data suggested that the expression of Hgf is more abundant in fibroblasts than in endothelial cells in thyroid tissue. We thus presumed that hgfa derived from fibroblasts, rather than from endothelial cells of the surrounding vasculature in thyroid tissue, might play a key role in bifurcation during thyroid development. However, additional specific experiments, such as conditional knockout of hgfa in distinct cell types, are needed in the future to clarify whether the effect of hgfa on bifurcation during thyroid development derives from endothelial cells of the vasculature surrounding the thyroid gland or from fibroblasts in the thyroid stroma. The accompanying figures show that the expression of Hgf (hgfa in zebrafish) in the mouse thyroid was mainly in fibroblasts; Cdh5 was used as a specific marker for endothelial cells and Pdgfra as a specific marker for fibroblasts. Hgf was mainly expressed in fibroblasts, followed by myeloid cells, and there was almost no Hgf expression in endothelial cells.

REVIEWERS' COMMENTS

Reviewer #3 (Remarks to the Author): The recent revision by the authors greatly improved the quality of the manuscript. From my perspective, it now meets the standards for publication in NatComm. I would like to congratulate the authors for their meticulous work. When reviewing the recent manuscript version, I only found some minor points that need to be addressed (outlined below).

Minor comments

Main text: Line 97, typo, nkx2.4b.

Methods: please add information to the methods section on how the EdU labeling experiment was done. To understand the values that you obtained, it is critical to know the length of the pulse and a possible chase period.

Figures: Figure 1D: Please check the lower panel of Sanger sequencing results. All three graphs for "siblings" show hets, so labelling the group of panels as heterozygous siblings would be appropriate. The base sequences shown below each sequencing profile are not consistent. The left panel mentions A but the sequence profile shows a het with A/T, the middle panel mentions C but the sequence profile shows a het with C/T, and the right panel mentions A but the sequence profile shows a het with A/T.
Response: Thank you for carefully reviewing our paper. We have revised the typo in our manuscript.
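Returning to the single-cell check described in the authors' response above (Hgf expression across thyroid cell populations, with Cdh5 and Pdgfra as endothelial and fibroblast markers): a minimal sketch of such a marker comparison with Scanpy is given below. The file name and the "cell_type" annotation column are assumptions for illustration only, not the authors' actual objects or pipeline.

```python
import scanpy as sc

# Hypothetical AnnData object holding the mouse thyroid single-cell data; the file name
# and the "cell_type" column in adata.obs are placeholders, not the authors' files.
adata = sc.read_h5ad("mouse_thyroid_scrnaseq.h5ad")

# Hgf expression per cell population, alongside lineage markers:
# Cdh5 marks endothelial cells, Pdgfra marks fibroblasts.
sc.pl.dotplot(
    adata,
    var_names=["Hgf", "Cdh5", "Pdgfra"],
    groupby="cell_type",
    standard_scale="var",   # scale each gene 0-1 so expression patterns can be compared across populations
    save="_hgf_by_celltype.pdf",
)
```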
Methods: please add information to the methods section on how the EdU labeling experiment was done. To understand the values that you obtained, it is critical to know the length of the pulse and a possible chase period.

Response: Thank you for your valuable suggestion. We have added information to the methods section on how the EdU labeling experiment was done (Lines 722-725).

Response: Thank you for your valuable suggestion. We have relabelled the group of panels as heterozygous siblings and revised the base sequences.

Figure 6: there is a mix of labels to annotate the embryonic age. Please use one consistent label; either E for embryonic day or dpc for days post-conception.

Response: Thank you for carefully reviewing our paper. We have revised the labels and used a consistent label to annotate the embryonic age in Figure 6.

Supplemental Figures: Figure S3: Please provide a value for the length of the scale bar in Fig. S3H.

Response: Thank you for carefully reviewing our paper. We have added the value for the length of the scale bar in Fig. S3H.

Response: Thank you for your suggestion. We have revised the typo "thyrocytes proliferation" and added the information that "anterior is to the right" in Fig. S4; we have changed Edu to EdU in the figure panels.

Reviewer #1 (Remarks to the Author): Fang et al. identify the HGF/Met pathway as important for thyroid formation. The authors identify the pathway in a forward genetic screen performed in zebrafish. The authors validate the role of the pathway in mouse, using a tissue-specific knockout. Overall, the experiments conducted by the authors support the message provided in the manuscript. The identification of the pathway adds a new player in thyroid gland morphogenesis, an understudied topic. One figure shows "number of thyroid follicular cells per fish".

Reviewer #2 (Remarks to the Author): By whole-genome genetic screening of congenital hypothyroidism in a large zebrafish population, the authors identified three "families" of spontaneously developed inactivating mutations in the Hgf and Met genes, which by various functional analyses were linked to growth of the thyroid primordium. Results inferring a role of the Hgf/Met pathway in thyroid developmental growth were also confirmed for mouse embryos. Although of biological interest, and notably with the use of advanced and sound methodologies in the fish part, there are several issues of concern in this manuscript that require consideration.

1) The bearing concept of comparing thyroid morphogenesis in zebrafish and mouse is flawed by the fact that in fish the thyroid does not form a solid organ but rather consists of a row of loosely connected follicles, i.e., what is being studied is folliculogenesis of cells that are already functionally differentiated. Although this process takes place close to and is dependent on the pharyngeal vasculature, it does not correspond to the bilateral elongation driven by mainly undifferentiated progenitor cells during thyroid organogenesis in mice (and probably in all mammals including man that display a bilobed thyroid).
2) The link between the phenotype of Hgf/Met-deficient zebrafish and EMT inhibition is interesting. However, no detailed information is provided on whether this effect depends on inhibited dissociation of cells from the first-formed follicle or inhibited reassociation of cells that have previously undergone partial EMT. Moreover, in view of recent findings in the mouse embryonic thyroid that folliculogenesis designating epithelial differentiation takes place much earlier than previously understood (Johansson et al., Front Endocrinol, 12:760541, 2021), the comparison to mice should rather focus on folliculogenesis than on proliferation/expansion of the thyroid primordium. In other words, it would be very interesting to learn if HGF/Met-dependent partial EMT might regulate thyroid folliculogenesis across species.

3) There is limited information on the extrathyroidal phenotype of mutants, and the potential influence on organogenesis as a whole. Although the authors convincingly show that Hgf rescues follicle generation in mutants, the likely source of Hgf in vivo is the surrounding mesenchyme, supported by data provided in Fig. 2. Are overall embryo size and the development of major organ systems or, in particular, other anterior/pharyngeal endoderm derivatives (e.g., parathyroid and ultimobranchial gland) really unaffected in mutants? If the mutant pharyngeal phenotype indeed is restricted to the thyroid, this should be documented.

4) Morphometric calculations of mouse thyroid shape and size (Fig. 6) infer a role of the Hgf/Met pathway prior to E11.5, which represents the end of descensus and the start of bilateral elongation of the midline thyroid primordium. What happens thereafter in mutants? Is the thyroid phenotype further aggravated, with impact on subsequent growth and glandular development, or do you see catch-up of cell proliferation and normal progression of the bilobation process? Notably, Fgf10 regulates thyroid developmental growth from E12.5 and onwards (Liang et al., Development 145:dev146829, 2018). Thus, it should be possible to answer whether Hgf/Met action is limited to an earlier developmental stage, as suggested by the current zebrafish data, and also whether the mutant phenotype merely is the result of delayed thyroid development. Normal S-T4 and only doubled S-TSH levels postnatally are consistent with subclinical hypothyroidism and argue that thyroid tissue volume and function essentially are sufficient, but is the gland smaller than in wild type?

1) That HGF/Met signaling regulates late expansion of the thyroid is a novel finding. To my knowledge, the HGF-Met pathway has not yet been linked to defects in thyroid development and function in mouse models or in the context of congenital hypothyroidism in humans. A particular strength of the study by Fang et al.
is the demonstration that HGF-Met signaling is linked to Snail-mediated regulation of cell-cell adhesion and that tight regulation of adhesion dynamics could be critical during certain phases of thyroid primordium expansion. I appreciate the attempts by the authors to translate their findings in zebrafish to mouse thyroid organogenesis to demonstrate the broader role of this morphogenetic process, but the observations in embryonic mice are poorly presented. Despite the novelty of the findings and the potential relevance of the reported data to better understand thyroid morphogenesis, the manuscript in its current form is not of sufficient quality to recommend it for publication. The use of the English language needs to be greatly improved. In several instances, poor English prevented me from understanding what exactly the authors intended to describe. Unfortunately, in its current form, the MS is written for a very small community of readers interested or experienced in thyroid morphogenesis, and the MS might fail to attract a broader readership. Due to my personal expertise in pharyngeal organogenesis in zebrafish and mouse embryos, I could follow most descriptions and explanations provided by the authors. However, when reading the MS, I had the strong impression that readers who are not familiar with zebrafish or mouse thyroid organogenesis might experience great difficulties in understanding the anatomical details or developmental processes described. The MS would benefit greatly from short introductory notes on how thyroid development proceeds (particularly in zebrafish), with some more details on the anatomical landmarks that characterize the environment in which the thyroid primordium develops.

Lines 246-248: the authors note "These data indicated that the Hgfa-met pathway probably regulated rostral elongation along a pair of hypobranchial arteries in zebrafish during late thyroid development". It would be very informative to visualize the relative positioning of the thyroid and the paired hypobranchial arteries in WT and mutant fish during the process of elongation. A live imaging video and some images in Figure S3B are provided, but only for the period before elongation. With respect to the specific anatomical relationship between the pharyngeal vessels and the thyroid, the question arises if it is the vessels that express hgfa. From a larger conceptual perspective, this is an extremely relevant question because if vessels are the source of hgfa, then HGF-Met signaling could also be compromised in cases of vascular maldevelopment.

Fig. 2A, 3C,D, 4A and 5A-B: the figure legends should include information on the orientation of the tissue sections shown (e.g., horizontal or sagittal sections, anterior is to the right).

Figure 3: Use capital letter O for labelling of panel O.

Figure 6: there is a mix of labels to annotate the embryonic age. Please use one consistent label; either E for embryonic day or dpc for days post-conception.

Figure 1D: Please check the lower panel of Sanger sequencing results. All three graphs for "siblings" show hets, so labelling the group of panels as heterozygous siblings would be appropriate. The base sequences shown below each sequencing profile are not consistent: the left panel mentions A but the sequence profile shows a het with A/T, the middle panel mentions C but the sequence profile shows a het with C/T, and the right panel mentions A but the sequence profile shows a het with A/T.
Figure 3: Use capital letter O for labelling of panel O.

Figure 6: there is a mix of labels to annotate the embryonic age. Please use one consistent label; either E for embryonic day or dpc for days post-conception.

Response: Thank you for carefully reviewing our paper. We have revised the labels and used a consistent label to annotate the embryonic age in Figure 6.

Figure S4: Line 25: typo "thyrocyte(s) proliferation"; please add the information that "anterior is to the right" in Fig. S4; recommend changing Edu to EdU in the figure panels.
9,328.2
2024-04-11T00:00:00.000
[ "Medicine", "Biology" ]
Fiber-based angular filtering for high-resolution Brillouin spectroscopy in the 20-300 GHz frequency range Brillouin spectroscopy emerges as a promising non-invasive tool for nanoscale imaging and sensing. One-dimensional semiconductor superlattice structures are commonly used for selectively enhancing the generation or detection of phonons at a few GHz. While commercially available Brillouin spectrometers provide high-resolution spectra, they rely on complex experimental techniques and are not suitable for semiconductor cavities operating over a wide range of optical wavelengths. We develop a pragmatic experimental approach to conventional Brillouin spectroscopy integrating a widely tunable excitation source. Our setup combines fiber-based angular filtering with spectral filtering based on a single etalon and a double-grating spectrometer. This configuration allows probing confined acoustic phonon modes in the 20-300 GHz frequency range with excellent laser rejection and high spectral resolution. Remarkably, our scheme, based on the excitation and collection of the enhanced Brillouin scattering signals through the optical cavity, allows for better angular filtering at low phonon frequencies. It can be implemented for the study of cavity optomechanics and stimulated Brillouin scattering over broadband optical and acoustic frequency ranges. Conventional Raman spectroscopy techniques are used to study optical phonons in the THz range, which are spectrally far from the laser line. While these techniques are usually compatible with excitation sources over a wide range of optical wavelengths, they provide insufficient stray-light rejection to observe Brillouin modes as low as a few tens of GHz (~1 cm⁻¹). Thus, Brillouin spectroscopy techniques are frequently employed to probe the tailored broadband acoustic spectrum of nanostructures. In the past few decades, Brillouin spectroscopy has taken a giant leap forward in terms of measuring Brillouin frequency shifts ranging from 0.1 GHz to 1 THz with an ultrahigh resolution of 0.003 cm⁻¹ ≈ 0.1 GHz [19][20][21][22][23][24]. The widely used advanced Brillouin spectrometers providing superior performance are based on the virtually imaged phased array (VIPA) equipped with notch filters and on scanning multiple-pass tandem Fabry-Perot interferometers (TFPI) [24]. These spectrometers are very useful in characterizing the viscoelastic properties of matter with high accuracy and spectral resolution. However, the performance of these techniques strongly relies on an optimization of the optics and detectors to work at a fixed wavelength. Optical scans as a function of wavelength imply intricate optical alignments. Optophononic cavities present optical modes in a wide range of wavelengths. This limits the use of commercial Brillouin spectrometers in probing the confined acoustic modes of hybrid optophononic cavities. In this paper, we discuss a custom-built, accessible, and versatile Brillouin spectroscopy scheme to measure longitudinal acoustic phonons in the 20-300 GHz frequency range without optical wavelength restriction, thereby overcoming the limitations of existing Raman and Brillouin spectrometers. The experiments are performed on semiconductor optophononic planar cavities with a typical quality factor of 2000, where the Brillouin scattering signal is enhanced by the mode of the optical cavity [12,25,26].
In order to observe the confined acoustic modes in the Brillouin spectrum with sufficiently high spectral resolution and contrast, we propose a combination of angular and spectral filtering techniques. The angular filtering is implemented with a single-mode fiber to efficiently filter out stray light from the laser and increase the signal-to-background ratio. The additional spectral filtering is implemented through a tandem of an etalon and a double-grating spectrometer. This combination has enabled us to observe the low-frequency acoustic modes of a cavity at 20 GHz with a simple etalon, which would otherwise be concealed in the excitation laser background. We have studied two samples (I and II) embedding acoustic cavities that confine phonons at 300 GHz and 18.3 GHz, respectively, through topological properties [27]. Each sample also contains an optical microcavity with a cavity mode around 900 nm. This optical cavity is designed to increase the photoacoustic response of the structure [25]. For the 300 GHz sample, the acoustic cavity is embedded in the optical cavity, as explained later on. For the 18.3 GHz sample, the same structure that confines the phonons also confines photons of the same wavelength [9,11]. The principal goal in Brillouin spectroscopy is the extraction of the low-intensity Brillouin peaks with maximum suppression of the stray light in the collected signal. Figure 1 shows the schematic representation of our home-built Brillouin spectroscopy setup. A collimated laser beam from a tunable continuous-wave (cw) Ti:Sa laser (M2 SolsTis) operating at a wavelength in resonance with the fundamental optical cavity mode of the studied sample is used as the excitation source. The excitation laser, with <5 MHz linewidth, is focused on the sample to a spot diameter of 10 µm using a plano-convex lens OL (focal length f = 13 mm). The same lens is used to collect the scattered signal from the sample. The excitation laser beam passes through a mirror M mounted on a translation stage that allows us to vary the incident angle with respect to the surface normal. The setup is designed in an approximate backscattering geometry with near-normal excitation. The measurements are performed in the double optical resonance (DOR) condition to enhance the Brillouin scattering. The DOR condition is achieved by exploiting the in-plane photon dispersion of the optical cavity mode shown in Fig. 1. Under this condition, both the excitation laser and the Brillouin scattered signal are coupled to the optical cavity mode. The wave vector k of the optical mode in the cavity can be decomposed as k = k_z + k_//, where k_// is the in-plane component and k_z is the normal component. The resonance is achieved for a given k_z. The in-plane component k_// increases with the angle of incidence. Therefore, keeping k_z constant and increasing k_// implies a blue-shift of the optical cavity resonance when tuning the angle of incidence away from the surface normal. We optimized the collection of the Brillouin signal resulting from the Stokes process. In this process, the energy of the incoming beam is higher than the energy of the scattered Brillouin signal, which means that k_// of the incoming beam is larger than that of the scattered signal. The energy of the scattered signal depends on the frequency of the phonons involved in the process. For a constant incident in-plane wave vector, we can observe two configurations. For phonons in the 300 GHz range, the energy shift is large enough that the outgoing signal is normal to the surface.
Whereas for phonons in the 20 GHz range, the energy shift is significantly smaller, so the outgoing signal has k_// ≠ 0 and is scattered at an angle. We benefit from the DOR to spatially filter the Brillouin signal by selecting the scattered signal at a given k_// with a fiber coupler [25,26]. Both the excitation laser and the scattered signal can be coupled through the cavity mode by tuning the angle of incidence and the collection angle (θ), respectively [25]. The angle θ defines the angular offset with respect to the excitation laser needed to collect the Brillouin scattered signal at the required phonon frequency. The spatial gradient of the sample under study enables us to maximize the coupling of the scattered signal with the optical cavity by changing the position of the excitation laser spot. The trade-off between the incident angle and the sample position is set to obtain the maximal spectral coupling of the excitation and scattered Brillouin signal with the optical cavity mode. In our case, for a certain position on the sample, the angle of incidence is fixed at 13° away from the normal for a 3 mm displacement of mirror M. The Brillouin signal is then collected at input port FC1, consisting of an 11.17 mm focal-length fiber coupler connected to a single-mode fiber with an NA of 0.13. The single-mode fiber allows us to spatially filter the Brillouin signal and reject the reflected excitation laser. The filtering with the fiber is essential to attenuate diffuse laser light that is elastically scattered by the sample and by the multiple optical elements. For phonons at ~20 GHz, we take advantage of the small angular offset θ = 2° to maximize the distance between the reflected laser beam and the Brillouin signal at the input port FC1. This is done by collecting the scattered signal on the same side of the normal as the incoming laser beam (see experimental setup in Fig. 1). In this second configuration, the angular filtering thus selects a circular section of NA = 0.13 × 11.7/13 ≈ 0.12 from the annular Brillouin radiation pattern. For the experimental quality factors and angle of incidence, we estimate a maximum Brillouin collection efficiency of 14% inside the fiber. Note that this efficiency is substantially larger for Brillouin emission at normal incidence due to the circular radiation pattern and its larger overlap with the fiber NA. This signal is then passed through the etalon E and sent to the entrance slit S1 of a double-grating spectrometer. Our setup consists of a tandem between a double spectrometer and a simple etalon E that permits us to reconstruct a Brillouin spectrum with high spectral resolution compared to the use of a single spectrometer. As the name suggests, the double-grating spectrometer is a combination of two monochromator chambers connected through an intermediate slit S2. The slit S2 allows us to select the measured optical wavelength range and is particularly used to reduce the level of the stray light generated inside the first chamber by the remaining excitation laser. The measured spectrum of Sample I shows the acoustic mode confined between the two acoustic DBRs. The two peaks at ~260 GHz and 335 GHz correspond to modes propagating in the acoustic DBRs [28]. The peaks at lower frequencies (<250 GHz) correspond to acoustic modes propagating in the full optical structure; they are multiple harmonics of the mode at 37 GHz. The peak at 37 GHz is related to the Brillouin mode arising from scattering in the GaAs substrate and also to modes propagating in the structure.
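The two ingredients of this angular scheme can be checked with a short numerical sketch: the blue-shift of a planar-cavity resonance with incidence angle, using the standard dispersion relation lam(theta) = lam0 * sqrt(1 - (sin(theta)/n_eff)^2), and the NA selected by the fiber coupler. The effective index n_eff below is an assumed illustrative value, as it is not quoted in the text; the other numbers are taken from the setup described above.

```python
import numpy as np

# (i) Planar-microcavity dispersion: the resonance blue-shifts as the
# incidence angle increases, lam(theta) = lam0 * sqrt(1 - (sin(theta)/n_eff)**2).
lam0 = 924.72e-9   # normal-incidence cavity resonance (m), value from the text
n_eff = 3.0        # assumed effective cavity index (illustrative, not from the text)
theta = np.deg2rad(13.0)  # angle of incidence used in the experiments
lam_theta = lam0 * np.sqrt(1.0 - (np.sin(theta) / n_eff) ** 2)
print(f"resonance at 13 deg: {lam_theta * 1e9:.2f} nm "
      f"(blue-shift {(lam0 - lam_theta) * 1e9:.2f} nm)")

# (ii) Angular filtering: NA of the beam selected by the fiber coupler,
# reproducing the estimate NA = 0.13 * 11.7 / 13 ~ 0.12 quoted above.
na_fiber, f_coupler, f_objective = 0.13, 11.7e-3, 13e-3
print(f"collected NA: {na_fiber * f_coupler / f_objective:.2f}")
```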
We have compared our experimental results with a calculated Brillouin cross-section using a photoelastic model and the transfer matrix method for the structure embedded in GaAs [28]. The calculated spectrum (red solid line) is in good agreement with the measured one. In order to test the technique at lower frequencies, we studied Sample II, which presents a confined acoustic mode at 18.3 GHz. The inset of Fig. 4(a) displays the schematic of the structure, where two DBRs are concatenated to simultaneously confine the optical and acoustical fields at the interface. Sample II is also excited resonantly, at an excitation laser wavelength of 924.72 nm with an excitation power of 6.3 mW. The excitation laser is kept at the same angle of incidence of 13°. In this case, to satisfy the DOR condition, the scattered signal is no longer collected at normal incidence but at an angle θ = 2°, as shown in Fig. 1, corresponding to the Brillouin shift of 18.3 GHz, which amounts to a difference in optical wavelength of 0.06 nm. Remarkably, the distance between the reflected laser and the signal at the input of the fiber is larger than in the previous case (see Fig. 1), enabling better fiber-based angular filtering. Here, the etalon mode was scanned over the full FSR in steps of 0.01° in the DOR condition. The peaks at 56 and 90 GHz also correspond to interface modes between the two DBRs at higher frequencies and are the third and fifth harmonics. The acoustic displacement at the interface is displayed in Fig. 4(b) for the three harmonics observed in our measurement. As before, the peak at 37 GHz is related to the Brillouin mode arising from scattering in the GaAs substrate and agrees well with our calculations. The processed spectrum is obtained following the same reconstruction method as for Sample I. The calculated spectrum obtained for the structure is in complete agreement with the experimental result. The intense peak that appears close to the laser line in the Brillouin spectrum without the etalon (grey dashed line) in Fig. 4(a) is due to laser light diffracted by the edge of the slit S2. Despite this scattered laser line, the peak corresponding to the interface mode at 18.3 GHz is already visible thanks to the efficiency of the angular filtering. However, its superposition with the remaining unwanted laser background leads to an artificial red shift of the peak. When adding the etalon to perform better spectral filtering, this background is efficiently suppressed and the signal contrast is significantly improved for the mode at 18.3 GHz. A similar experimental technique has been discussed in Ref. [29], which presents a tandem of a gas-pressure-controlled Fabry-Perot interferometer and a triple spectrometer to access broadband acoustic frequencies in DBR-based microcavities. That technique yields Raman/Brillouin spectra with a high resolution of ~0.01 cm⁻¹ ≈ 0.3 GHz. In contrast to our scheme, that study reports tuning the angle of incidence to remain in the DOR condition while collecting the scattered signal at normal incidence. There, a small aperture is used to separate the Brillouin signal from the reflected laser, which results in insufficient stray-light rejection for modes at lower acoustic frequencies, leaving strong parasitic laser lines in the spectrum. In our scheme, we gain better laser rejection at 20 GHz by exciting and collecting the scattered signal away from the normal, leading to a spatial separation of the reflected laser and of the signal, which is filtered by a single-mode fiber.
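To make the scale of the required spectral filtering concrete, a Brillouin shift can be converted into an optical-wavelength offset via the first-order relation d_lambda ≈ lambda^2 * d_nu / c. The sketch below uses the wavelengths quoted above and reproduces the sub-0.1 nm separation at 18.3 GHz (about 0.05 nm, the scale of the ~0.06 nm value quoted in the text), which is what makes the etalon-plus-double-spectrometer tandem necessary.

```python
# Convert a Brillouin frequency shift to the corresponding offset in optical
# wavelength, d_lambda ~ lambda**2 * d_nu / c (first order in d_nu/nu).
C = 299_792_458.0  # speed of light (m/s)

def shift_to_nm(wavelength_nm: float, shift_ghz: float) -> float:
    """Optical-wavelength offset (nm) for a given Brillouin shift (GHz)."""
    lam = wavelength_nm * 1e-9
    return (lam ** 2) * (shift_ghz * 1e9) / C * 1e9

print(f"18.3 GHz @ 924.72 nm -> {shift_to_nm(924.72, 18.3):.3f} nm")  # ~0.05 nm
print(f"300 GHz  @ 910.00 nm -> {shift_to_nm(910.00, 300.0):.3f} nm") # ~0.83 nm
```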
Conclusion In conclusion, we have applied a combination of optical filtering techniques that allows us to access the spontaneous Brillouin scattering signal originating from the confined acoustic mode of a semiconductor microcavity. The technique relies on angular filtering with a single-mode fiber, permitted by the angular offset between the incoming laser and the scattered signal. It helps to selectively collect the Brillouin signal and to attenuate stray light from the laser. In addition, this technique works over a large range of optical excitation wavelengths using the same set of broadband optics. The etalon has two functions here: on the one hand, it is used to filter the signal to increase the spectral resolution; on the other, it blocks the residual laser light before the spectrometer to achieve high contrast. Despite the simplicity of the proposed experimental setup, the resolution (~2 GHz) and stray-light rejection are remarkable in comparison with a Raman spectrometer. With an excitation power of 25 mW, we obtain well-defined peaks within 1 second per data point. The signal-to-background ratio measured at 40 GHz is of the order of 100. The signal-to-noise ratio and integration time could be further improved by using annular apertures in a different collection approach. Our signals profit from the enhancement of both the acoustic and optical fields and provide a prospective tool for the study of stimulated Brillouin scattering at 20 GHz with high resolution. The simplified detection of high-frequency acoustic phonons has implications for the study of phonon-assisted emission from a gain medium, such as a quantum well or dot embedded in an optophononic cavity. Such high-resolution spectroscopy could also be valuable to probe any change in the density of acoustic modes by probing the phonon sideband of quantum emitters coupled to an acoustic cavity. The proposed experimental scheme thus offers an accessible and versatile platform for exploring cavity optomechanics and phonon lasing at broadband acoustic frequencies. Methods Both samples were grown by molecular beam epitaxy on a (001) GaAs substrate. Sample I is made of two optical distributed Bragg reflectors (DBRs) enclosing an optical spacer with an optical path length of 2.5λ [28] at a resonance optical wavelength around 910 nm; the optical spacer embeds the acoustic cavity. Sample II consists of two concatenated DBRs that support an interface mode at 920 nm [27,30]. The DBRs are formed by 14 (16) periods of 65.1 nm / 231.1 nm (195.5 nm / 77.0 nm) GaAs/Al0.95Ga0.05As layers for the top (bottom). The samples were grown with a spatial gradient such that they present position-dependent resonance wavelengths.
3,686.2
2020-11-15T00:00:00.000
[ "Physics", "Engineering" ]
Image Quality Assessment Based on Contourlet and ESD Method In recent years, the development of digital image processing has promoted research on image quality assessment (IQA). A novel metric for full-reference image quality assessment is presented. The metric combines the contourlet transform with the energy of structural distortion (ESD) and is named the CT-ESD. The calculation of the ESD is carried out in each subband of the contourlet transform. Then the comparisons between the reference and the distorted images on each subband are integrated by a weighted sum. The advantages of the contourlet transform carry over to the new IQA metric. Experiments performed on the database TID2013 demonstrate that the CT-ESD can achieve high consistency with subjective evaluation. Introduction Digital images are experiencing tremendous growth in both theoretical developments and sophisticated applications. A primary issue in digital image processing is image quality assessment (IQA) [1]. Image quality assessment is the procedure of mapping the changes in images to the corresponding visual preference. It can be divided into subjective evaluation and objective evaluation. Obviously, humans, as the ultimate receivers of images, are the best candidates to assess the quality of images. Nevertheless, the inconvenience and high cost of subjective evaluation limit its use, and objective image quality assessment has therefore attracted increasing attention from more and more researchers. According to the dependence on the reference image, objective image quality assessment falls into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR). Full-reference image quality assessment algorithms have the strongest dependence on the reference image. They take both a reference image and a distorted image as input and produce a scalar that measures the quality of the distorted image. Reduced-reference image quality assessment algorithms have less dependence on reference images: instead of the reference images themselves, certain features of the reference images are available to evaluate the quality. No-reference image quality assessment (also known as blind image quality assessment) algorithms perform image quality estimation with no reference image information but only the distorted images. In this paper we focus on full-reference image quality assessment. Conventional full-reference image quality assessment algorithms calculate the pixel-wise difference between a distorted image and its corresponding reference image, for example, the mean-squared error (MSE) and the peak signal-to-noise ratio (PSNR). Despite their widespread use, these metrics are not preferable choices because they correlate poorly with human visual perception.
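A minimal sketch of these pixel-wise baselines (together with the structural SSIM index discussed in the next section) is given below. The images are synthetic stand-ins, since the metrics only require two arrays of equal shape; the peak value is assumed to be 1.0 for floating-point images.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # stand-in reference image
dist = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)

mse = np.mean((ref - dist) ** 2)        # pixel-wise mean-squared error
psnr = 10.0 * np.log10(1.0 / mse)       # PSNR with peak value 1.0
ssim = structural_similarity(ref, dist, data_range=1.0)

print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```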
More FR image quality assessment algorithms have been developed in recent years. As the ultimate objective of IQA is human visual preference, it is natural to combine the characteristics of the human visual system (HVS) with IQA. An easy way to incorporate the HVS is to weight the frequency-domain error with a contrast sensitivity function (CSF), which is called the weighted signal-to-noise ratio (WSNR) [2]. N. Damera-Venkata et al. proposed the noise quality measure (NQM), which consists of contrast pyramid processing and a signal-to-noise ratio computation [3]. Karen Egiazarian et al. presented the IQA metric named PSNR-HVS [4]: it removes the mean shift and then stretches the contrast with a scanning window before the calculation of a modified PSNR, where the modification consists of taking the HVS into account in the calculation of the MSE. The next year, these researchers proposed the PSNR-HVS-M metric [5] based on PSNR-HVS; the new metric is based on a model of inter-coefficient masking of DCT basis functions and the modified version of the PSNR. Damon M. Chandler and Sheila S. Hemami presented an IQA metric based on near-threshold and suprathreshold properties of human vision called the visual signal-to-noise ratio (VSNR) [6]. HVS-based IQA metrics depend strongly on the accuracy of the models of the HVS, so many researchers employ other models, such as image statistics. Hamid Rahim Sheikh et al. treated the IQA problem as an information fidelity problem and proposed a new metric named the information fidelity criterion (IFC) [7], which measures the statistical information that a distorted image shares with the reference image. Similarly, Hamid Rahim Sheikh and Alan C. Bovik presented visual information fidelity (VIF) [8], derived from a statistical model for natural scenes, a model for image distortions, and an HVS model. In another direction, Zhou Wang and Qiang Li found a novel weighting approach based on a Gaussian scale mixture (GSM) model; it yields consistent performance improvements of both PSNR- and SSIM-based IQA, named information content weighted PSNR (IW-PSNR) and IW-SSIM, respectively [9]. Besides HVS- and statistics-based IQA, there are structure-based IQA metrics. Though 'image structure' has no uniform definition, this class of metrics is often preferred. Among these metrics, the universal image quality index (UQI) [10] and the structural similarity index (SSIM) [11] are of essential importance. The UQI employs cross-correlation and measures of luminance and contrast differences to estimate quality. The SSIM is actually an extended version of the UQI; the difference is that the SSIM adds small constants to the numerator and the denominator of each measure. There are many IQA metrics based on SSIM. Take CW-SSIM [12] as an example: it combines a complex wavelet with the SSIM and is insensitive to translation, scaling, and rotation of images in comparison with SSIM. Besides these SSIM-based metrics, there are other structure-based IQA metrics. The energy of structural distortion (ESD) [13] is an IQA metric of this kind. Cailing Wang et al. extended the ESD to spectral images in both the spectral and spatial aspects [14]. In this paper, we propose a full-reference image quality assessment metric based on the contourlet transform and the energy of structural distortion (ESD). Section 2 presents the background knowledge of the contourlet transform and the ESD. Section 3 presents our proposed IQA metric, named contourlet-transform-based ESD (CT-ESD). Section 4 presents the experimental validation. Then we conclude the paper in Section 5. Theory Basis It is well known that the 2-D wavelet transform lacks efficiency for image representation in spite of its excellent performance for 1-D signal processing. Therefore, developing multiscale geometric analysis is necessary, and the contourlet transform [15] is an attractive alternative. It is constructed originally in the discrete domain while having a precise connection with continuous-domain expansions. The contourlet transform allows a different number of directions at different scales while achieving nearly critical sampling. Its implementation based on iterated filter banks makes it computationally efficient. Moreover, the contourlet transform can provide a sparse representation for natural images with smooth contours. It possesses the qualities of directionality and anisotropy, which are important for image representation. Thus we apply the contourlet transform in the IQA metric design. The contourlet transform is actually a double filter-bank structure. The first filter bank is the Laplacian pyramid (LP).
It captures point discontinuities. Here we focus on the decomposition of the LP. At each level, the input image is filtered by a lowpass analysis filter H and downsampled by the sampling matrix M to generate a lowpass version of the input. At the same time, a bandpass image is obtained by subtracting a prediction from the input at this level; the prediction is the lowpass version mentioned above, sequentially upsampled (with sampling matrix M) and filtered with the synthesis filter G. The second filter bank is the directional filter bank (DFB). This filter bank links point discontinuities into linear structures. It decomposes the bandpass image from the LP into $2^l$ subbands via an $l$-level binary tree. To compute the ESD, the reference and distorted images are divided into blocks of size $H \times L$, treated as vectors $b_i$ and $b'_i$, respectively. Each block is normalized to unit energy, yielding the structure vector $S_i$; it is noticeable that the squared norm of $S_i$ is 1, in other words, the energy of $S_i$ is 1. The inner product is set to be the simplest one (the Euclidean inner product), and the structural distortion energy $Ec_i$ of each block is obtained from the inner products between the distorted block and the reference structure; it should be noted that in the calculation of $Ec_i$, the object in the inner product is $S_i$. The final score of the ESD is the logarithm of the average structural distortion energy over all blocks,

ESD $= \log\!\Big(\frac{1}{K}\sum_{i=1}^{K} Ec_i\Big)$,

in which $K$ is the number of blocks. The lower the ESD score, the higher the image quality. If the distorted image is identical to the reference one, the distortion energy vanishes and the ESD would be negative infinity, so we insist that the calculation of the ESD should add a small constant before the logarithm operation.

Metric Proposed The ultimate objective of IQA is to obtain an evaluation of image quality that approximates the perception by the human visual system (HVS). Multiscale processing is shared by the HVS and the contourlet transform. Moreover, the contourlet transform is sparse and effective for image representation, which is fundamental to IQA. Subsequently, it is natural to employ the contourlet transform in the IQA metric. We propose an IQA metric combining the contourlet transform and the energy of structural distortion (CT-ESD). The procedure is as follows. The distorted and reference images are decomposed by the contourlet transform; the superscript D indicates the distorted image and R the reference image, while the subscript j represents the scale index and k the orientation index. For example, $c^{D}_{j,k}$ denotes the contourlet coefficients of the distorted image at the $j$th level in the $k$th direction (the low-frequency subband has no direction division). The energies of structural information of the distorted and reference images are calculated in each subband in the same way as for the ESD; each subband of the contourlet transform thus generates an energy value matrix $E_{j,k}$. At each scale $j$, the variance of the energy is calculated and summed over orientations,

$EV_j = \sum_{k} \operatorname{Var}\!\big(E_{j,k}\big)$;

the sum ensures $EV_j$ is a scalar. The final metric proposed is the weighted sum

CT-ESD $= \sum_{j} w_j \log\!\big(EV_j + D\big)$,

in which $D$ is a constant to avoid the logarithm of zero and small enough to have no influence on the result, and $w_j$ is the weighting coefficient of $EV_j$ at scale $j$. Moreover, the sum of all the $w_j$ is one ($\sum_j w_j = 1$). For different scales, the weighting coefficient can change according to its contribution to the HVS. The flow chart of the CT-ESD is in Figure 2.

Experiments At the beginning of this section, we briefly introduce the evaluation protocol. In this paper, we choose the Spearman rank-order correlation coefficient (SROCC) and the Kendall rank-order correlation coefficient (KROCC), just as the authors of TID2013 do. They prefer to employ rank-order correlation coefficients to avoid fitting procedures, which may not be unique. The values of the SROCC and the KROCC between the mean opinion score (MOS) and the proposed metric score are expected to be high, which implies the new metric is a good substitute for the HVS. As there are parameters $w_j$, $j = 1, 2, \dots, J$, in our proposed metric, we choose the $w_j$ that lead to higher SROCC and KROCC. Experiments show that a high weight on the medium-frequency subband leads to high rank-order correlation coefficients. This phenomenon is essentially in agreement with the characteristics of the HVS. In this paper we choose a 3-level Laplacian decomposition in the contourlet. It produces coefficients in four cells corresponding to one low-frequency subband and three pyramidal levels, and each pyramidal level contains $2^3$ directional subbands. Here, we set the weights so that the coefficient $w_2$ of the medium-frequency subband is the largest. Comparisons to conventional IQA metrics such as PSNR and MSE are meaningful and necessary. The data are provided in [17] for most of these metrics, and they can also be calculated with the Metrix MUX Visual Quality Assessment Package [18].
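The rank-order evaluation described above is straightforward to reproduce with scipy; in the sketch below, the MOS values and metric scores are synthetic stand-ins for the TID2013 data.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Synthetic stand-ins: in practice, `mos` holds the TID2013 mean opinion
# scores and `scores` the outputs of the metric under test.
rng = np.random.default_rng(1)
mos = rng.random(100)
scores = mos + 0.1 * rng.standard_normal(100)

srocc, _ = spearmanr(mos, scores)   # Spearman rank-order correlation
krocc, _ = kendalltau(mos, scores)  # Kendall rank-order correlation
print(f"SROCC = {srocc:.4f}, KROCC = {krocc:.4f}")
```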
The following two tables show the SROCC and KROCC between the metrics and the MOS, respectively, for each subset. In the tables, each column represents a subset of the database TID2013, except for the last column: the column 'Full' means the whole set of the database TID2013. Each row represents a metric. For example, in Table 1, the value in the row 'CT-ESD' and the column 'Full' is 0.7250, which indicates that the metric CT-ESD achieves an SROCC of 0.7250 with the MOS on the whole set. As a result of the challenging difficulty of the database TID2013, the values of the SROCC and KROCC do not come very close to 1. From the tables, we can conclude that CT-ESD outperforms the other metrics in the table on the whole set of distorted images. Though the CT-ESD fails to outperform the others on each subset, its performance is more even across the different subsets. In particular, the 'Exotic' subset contains distortion types that do not occur frequently but are among the most difficult ones for IQA. In this subset the CT-ESD has the highest KROCC value, 0.5021, of that column in Table 2. In Table 1, the CT-ESD reaches 0.6894, lower only than the VSNR at 0.7064 on this subset. On the 'New' subset and the 'Color' subset, which drag down the rank-order correlation coefficients for most metrics, the CT-ESD shows preferable performance: it achieves 0.6776 and 0.6021 for the SROCC, both leading values in the corresponding columns. It is noteworthy that, compared to the ESD, the CT-ESD is better on the subsets 'Noise', 'Actual', and 'Exotic'. These subsets relate to the most common and the most difficult distortion types, respectively, and the CT-ESD shows a significant improvement on the subset 'Exotic'. Taking these subsets together, the CT-ESD outperforms the ESD on the full database TID2013. For both the SROCC and the KROCC, it beats the other metrics in the 'Full' column with the highest values, 0.7250 and 0.5514, respectively. In Figure 3 we can see the scatter plots between the MOS and the metrics. From Figure 3(a) and (b), it is easy to conclude that the scatter points in (b) are more compact and show a more apparent tendency; Figure 3(b) represents the scatter plot between the MOS and the CT-ESD, so this means the CT-ESD gives a better evaluation of the image quality. Figure 3(c) and (d) show that different metrics may lead to scatter plots of different shapes; from them we can see that the CT-ESD has better performance, as it does better in terms of compactness and tendency. The data and the charts above show that it is meaningful to introduce the contourlet transform into the IQA metric. We attribute the preferable performance of the CT-ESD to the presence of the contourlet transform: the multi-resolution and the sparse representation of the contourlet transform make the IQA metric more effective and more similar to the HVS. These advantages lead to the success of the IQA metric. Discussions In this paper, we proposed a novel full-reference IQA metric that combines the ESD with the contourlet transform, named CT-ESD. It is a structure-based IQA metric that measures the loss of structural information to evaluate image quality, and the presence of the contourlet leads to the improvement over the ESD image quality assessment. Moreover, its preferable performance on the database TID2013 proves its validity and practicability, especially for the 'Exotic' type distortions. Our proposed metric still has potential for improvement. We will focus on the utilization of the multi-directionality of the contourlet in the IQA and on an IQA metric aimed at the 'Color' type distortions. We insist that the contourlet transform, as a 'real' two-dimensional transform, will be a powerful tool in IQA.
Figure 1. The flow diagram of the contourlet transform. First, the LP is employed to decompose the image into multiple scales; then, in each bandpass channel, the DFB divides it into different directional subbands.

Figure 2. The flow chart of the IQA metric combining the contourlet transform and the energy of structural distortion (CT-ESD).
The performance of the proposed CT-ESD metric is validated and compared with representative HVS-based, statistics-based, and structure-based IQA metrics: NQM [3], PSNR-HVS [4], VSNR [6], UQI [10], SSIM [11], IFC [7], VIF [8], IW-PSNR [9], and ESD [13].

Figure 3. The scatter plots between the MOS and some metrics: (a) MOS vs. the ESD; (b) MOS vs. the CT-ESD; (c) MOS vs. the IFC; (d) MOS vs. the SSIM.

Table 1. The SROCC values of the metrics for the database TID2013.

Table 2. The KROCC values of the metrics for the database TID2013.
3,719.8
2017-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
High-rate systematic recursive convolutional encoders: minimal trellis and code search We consider high-rate systematic recursive convolutional encoders to be adopted as constituent encoders in turbo schemes. Douillard and Berrou showed that, despite its complexity, the construction of high-rate turbo codes by means of high-rate constituent encoders is advantageous over the construction based on puncturing rate-1/2 constituent encoders. To reduce the decoding complexity of high-rate codes, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix. A code search is conducted, and examples are provided which indicate that a more finely grained decoding complexity-error performance trade-off is obtained. Introduction The typical turbo code configuration is the parallel concatenation of two systematic recursive constituent convolutional encoders of rate 1/2 connected via an interleaver, resulting in a code of rate 1/3. However, higher-rate turbo codes may be useful in modern wireless, magnetic recording, and fiber optics applications [1]. The usual approach to increase the overall rate is to puncture selected bits of the turbo codeword [2]. An alternative is to use high-rate constituent encoders [1,3,4], which, according to [3], offers several advantages, such as better convergence of the iterative process, higher throughput, reduced latency, and robustness of the decoder. If puncturing is still needed to achieve a required rate, fewer bits have to be discarded when compared to the conventional rate-1/2 constituent encoders, resulting in less degradation of the correcting capability of the constituent code [3]. In [1], a class of systematic recursive convolutional encoders restricted to be of rate k/(k+1) is proposed. The codes are optimized in terms of the pairs $(d_i, N_i)$, where $d_i$ is the minimum weight of codewords generated by input sequences of weight $i$ and $N_i$ are their multiplicities, and are thus suited to be used as constituent encoders of turbo codes [5]. Good systematic recursive encoding matrices with increasing values of encoder memory size for a fixed code rate are listed. Turbo codes constructed with the family of constituent encoders given in [1] are shown to outperform some high-rate turbo codes obtained by puncturing a rate-1/2 constituent encoder. One major drawback of high-rate constituent encoders is the decoding complexity, since it increases exponentially with k and with the constraint length for various decoding algorithms. With the motivation of proposing a class of turbo codes with low-complexity, high-rate constituent encoders, Daneshgaran et al. [4] constructed recursive constituent encoders of rate k/(k+1) by puncturing a rate-1/2 recursive mother encoder. However, the reduction in decoding complexity is obtained at the expense of a reduced spectrum $(d_i, N_i)$ when compared to that of the best recursive encoder of rate k/(k+1) [1]. An alternative to reduce the decoding complexity is to consider trellis representations for the constituent codes other than the conventional trellis usually adopted. For nonrecursive convolutional encoders, there is a trellis structure, the minimal trellis [6], which represents the coded sequences minimally under various complexity measures.
The interest in the minimal trellis representation comes from its good error performance versus decoding complexity trade-off [7][8][9][10] and its potential power consumption and hardware utilization reductions [11]. However, the minimal trellis construction presented in [6] cannot be readily applied to turbo codes. The reason is that the mapping between information bits and coded bits produced by the minimal trellis corresponds to nonrecursive convolutional encoders, while systematic, recursive mappings are required in turbo coding. This article presents a method which can fill this gap. In this article, we introduce the construction of the minimal trellis for a systematic recursive convolutional encoding matrix, the encoding required for the constituent codes of a turbo code. Our goal is to reduce the decoding complexity of a turbo decoder operating with high-rate constituent encoders. We also conduct a code search to show that a more finely grained decoding complexity-error performance trade-off is achieved with our approach. We tabulate several new encoding matrices with a larger variety of complexities than those in [1], as well as code rates other than k/(k+1). The proposed minimal trellis can be constructed for systematic recursive convolutional encoders of any rate. Thus, our approach is more general than that in [1], while allowing a distance spectrum $(d_i, N_i)$ better than that of the punctured codes in [4] to be achieved. The rest of this article is organized as follows. In Section 2, we introduce some basic definitions and notation. Section 3 introduces the minimal trellis construction for a systematic recursive convolutional encoding matrix. In Section 4, we present code search results. Section 5 concludes the article. Preliminaries Consider a convolutional code C(n, k, ν), where ν, k and n are the overall constraint length, the number of binary inputs, and the number of binary outputs, respectively, while the code rate is R = k/n. Every convolutional code can be represented by a semi-infinite trellis which (apart from a short transient at its beginning) is periodic, the shortest period being a trellis module. The conventional trellis module $\Gamma_{\text{conv}}$ consists of a single trellis section with $2^{\nu}$ initial states and $2^{\nu}$ final states; each initial state is connected by $2^k$ directed branches to final states, and each branch is labeled with n bits. The minimal trellis module, $\Gamma_{\min}$, for nonrecursive convolutional codes was developed in [6]. Such a structure has n sections, $2^{\tilde\nu_t}$ states at depth t, $2^{\tilde b_t}$ branches emanating from each state at depth t, and one bit labeling each branch, for 0 ≤ t ≤ n − 1. The trellis complexity of a module $\Gamma$, TC($\Gamma$), defined in [6], captures the complexity of trellis-based decoding algorithms [12]. It is shown in [6] that

$TC(\Gamma_{\text{conv}}) = \frac{n}{k}\, 2^{\nu+k}$ and $TC(\Gamma_{\min}) = \frac{1}{k}\sum_{t=0}^{n-1} 2^{\tilde\nu_t + \tilde b_t}$

symbols per bit. The state and branch complexity profiles of the minimal trellis are denoted by $\tilde\nu = (\tilde\nu_0, \dots, \tilde\nu_{n-1})$ and $\tilde b = (\tilde b_0, \dots, \tilde b_{n-1})$, respectively. It has been shown in [6] that for many nonrecursive convolutional codes the trellis complexity $TC(\Gamma_{\min})$ of the minimal trellis module is considerably smaller than the trellis complexity $TC(\Gamma_{\text{conv}})$ of the conventional trellis module. A generator matrix G(D) of a convolutional code C(n, k, ν) is a full-rank k × n polynomial (in D) matrix that encodes/generates C, i.e., is realizable by a linear sequential circuit (called an encoder for C) [13].
Let G(0) denote the binary matrix obtained when substituting D with 0 in the matrix G(D). If G(0) is full-rank, then G(D) is called an encoding matrix and is of particular interest. Every polynomial generator matrix admits a Smith form decomposition

$G(D) = A(D)\,\Gamma(D)\,B(D)$,

where the non-zero elements of $\Gamma(D)$ are called the invariant factors of G(D), and the k × k matrix A(D) and the n × n matrix B(D) are both polynomial with unit determinants. Let $\nu_i$ be the constraint length for the ith input of a polynomial generator matrix G(D), defined as $\nu_i = \max_{1 \le j \le n} \deg g_{ij}(D)$. Then the overall constraint length (already mentioned) is given by $\nu = \sum_{i=1}^{k} \nu_i$. A polynomial generator matrix can be written as $G(D) = \sum_{t=0}^{m} G_t D^t$, where $G_t$, t = 0, . . . , m, are the k × n (scalar) generator submatrices. The scalar generator matrix $G_{\text{scalar}}$ is the semi-infinite binary matrix formed by the block rows $[\,G_0\; G_1\; \cdots\; G_m\,]$, each successive block row shifted to the right by n columns [6]. The "matrix module" is the matrix $\hat G$ formed by the rows of $G_{\text{scalar}}$ that overlap one period of n columns. The generator matrix G(D) is said to be in the left-right (LR) (or minimal-span, or trellis-oriented) form if no column of $G_{\text{scalar}}$ contains more than one underlined entry (the Leftmost nonzero entry in its row) or more than one overlined entry (the Rightmost nonzero entry in its row). If G(D) is in LR form, then it produces the minimal trellis for the code [6]. Construction of the minimal trellis for systematic recursive encoding matrices The construction of the "minimal" trellis for the systematic recursive convolutional encoding matrix $G_{\text{sys}}(D)$ involves two main steps. First, we find the minimal trellis of an equivalent nonsystematic nonrecursive minimal-basic generator matrix in LR form. Then, the mapping between the information bits and coded bits in this trellis is changed in order to construct a systematic minimal trellis. The complete algorithm is summarized at the end of this section. Minimal trellis for an equivalent encoder Let $G_{\text{sys}}(D)$ be a systematic recursive encoding matrix for a rate R = k/n convolutional code and let q(D) be the least common multiple of all denominators of the entries in $G_{\text{sys}}(D)$. We construct a nonsystematic nonrecursive basic encoding matrix, denoted by $G_b(D)$, equivalent to $G_{\text{sys}}(D)$, as follows ([13], p. 44), ([14], Theorem 4.6): • Find the Smith form decomposition of the polynomial matrix q(D)G_sys(D),

$q(D)\,G_{\text{sys}}(D) = A(D)\,\Gamma(D)\,B(D)$,   (5)

where the non-zero elements of $\Gamma(D)$ are called the invariant factors of $q(D)G_{\text{sys}}(D)$, and the k × k matrix A(D) and the n × n matrix B(D) are both polynomial with unit determinants. Thus, the invariant factor decomposition of $G_{\text{sys}}(D)$ is $G_{\text{sys}}(D) = A(D)\,\Gamma'(D)\,B(D)$, where $\Gamma'(D) = \Gamma(D)/q(D)$. • Form the desired encoding matrix $G_b(D)$ as the k × n submatrix of B(D) in (5) consisting of its first k rows. We can then perform a sequence of row operations on $G_b(D)$ to construct a nonsystematic nonrecursive minimal-basic encoding matrix G(D) in LR form. Example. Consider the systematic recursive encoding matrix $G_{\text{sys}}(D)$ of rate 3/4 in (6), taken from [1, Table IV]. The invariant factor decomposition of $G_{\text{sys}}(D)$ is given by (7), and the basic encoding matrix $G_b(D)$ equivalent to $G_{\text{sys}}(D)$ is readily obtained from (7). Using the greedy algorithm [15] we turn $G_b(D)$ into the LR form, or equivalently, into the minimal-span form [15, Theorem 6.11], resulting in the generator matrix G(D) in (8) with overall constraint length ν = 3. The trellis complexity of the conventional module for $G_{\text{sys}}(D)$ is $TC(\Gamma_{\text{conv}}) = 85.33$ symbols per bit. The minimal trellis module for the convolutional code with G(D) given in (8), constructed with the method presented in [6], is shown in Figure 1. It has state and branch complexity profiles given by $\tilde\nu = (3, 4, 4, 4)$ and $\tilde b = (1, 1, 1, 0)$, respectively. Thus, the trellis complexity of the minimal trellis is $TC(\Gamma_{\min}) = 32$ symbols per bit. Figure 1. The minimal trellis module for the (n, k, ν) = (4, 3, 3) convolutional code with G(D) given in (8). The solid/blue branches represent "0" codeword bits and the dashed/red branches represent "1" codeword bits. In the first three transitions, the upper (resp. lower) branches correspond to information bit "0" (resp. "1"), i.e., the standard convention.
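These complexity figures follow directly from the two formulas given in the Preliminaries; the short sketch below evaluates both of them for this rate-3/4, ν = 3 example using the profiles quoted above.

```python
# Numerical check of the two trellis-complexity measures from the
# Preliminaries, for the rate-3/4, nu = 3 example: TC(conv) and TC(min).
def tc_conventional(n: int, k: int, nu: int) -> float:
    """TC of the conventional module: (n/k) * 2**(nu + k) symbols per bit."""
    return (n / k) * 2 ** (nu + k)

def tc_minimal(k: int, state_profile, branch_profile) -> float:
    """TC of the minimal module: (1/k) * sum_t 2**(nu_t + b_t)."""
    return sum(2 ** (s + b) for s, b in zip(state_profile, branch_profile)) / k

print(tc_conventional(n=4, k=3, nu=3))            # 85.33...
print(tc_minimal(3, (3, 4, 4, 4), (1, 1, 1, 0)))  # 32.0
```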
In the first three trellis sections, the upper (resp. lower) branches correspond to information bit "0" (resp. "1"), i.e., the standard convention [6]. The solid branches represent "0" codeword bits and the dashed branches represent "1" codeword bits. The mapping between information bits and coded bits in Figure 1 is not systematic, so this trellis does not represent the same encoder as $G_{\text{sys}}(D)$. The construction of a minimal trellis for systematic recursive encoders is discussed in the next section. Minimal trellis for systematic recursive encoders Originally, minimal trellises have been constructed for codes, not for matrices (or encoders). However, the convention that the upper branches refer to the information bit 0 and the lower branches refer to the information bit 1 yields a particular encoding which, in general, is not systematic. We note that the association of solid/dashed branches with codeword bits cannot be changed, as any change in this regard would result in a trellis which would no longer represent the convolutional code. However, by enforcing a different convention on the association of the branches to the information bits, only a different encoding for the same code is obtained. Since in the systematic part the information bit and the coded bit must have the same value, all we need to do is to change, in the information sections of the minimal trellis, the standard convention to one where solid branches refer to the information bit 0 and dashed branches refer to the information bit 1. Let us refer to this convention as the systematic convention. Getting back to the minimal trellis in Figure 1, for the nonsystematic nonrecursive convolutional encoding matrix G(D) given in (8), we only need to adopt the systematic convention in the first three sections to turn this minimal trellis into a systematic trellis. It remains to show that the minimal trellis in Figure 1 with the systematic convention is the minimal trellis for the systematic recursive convolutional encoding matrix $G_{\text{sys}}(D)$ in (6). We note that the generator matrices G(D) given in (8) and $G_{\text{sys}}(D)$ in (6) are equivalent in the sense that both generate the same code. Therefore, except for the edge convention, the two trellises are exactly the same. Assuming that the information bits at the input of the encoder associated with $G_{\text{sys}}(D)$ and the information bits associated with the minimal trellis in Figure 1 occupy the same positions, the systematic convention is unique. Consequently, the trellis in Figure 1 with the systematic convention in the first three sections is the minimal trellis for the systematic recursive convolutional encoding matrix $G_{\text{sys}}(D)$ in (6). The complete algorithm for the construction of the minimal trellis for a systematic recursive encoding matrix is summarized as: (1) find the Smith form decomposition of $q(D)G_{\text{sys}}(D)$ and form the equivalent basic encoding matrix $G_b(D)$; (2) perform row operations to turn $G_b(D)$ into a minimal-basic encoding matrix G(D) in LR form; (3) construct the minimal trellis module for G(D); and (4) adopt the systematic convention in the information sections. Figure 2. The minimal trellis module for an (n, k, ν) = (5, 4, 3) systematic recursive encoder. Solid/blue branches represent "0" codeword bits while dashed/red branches represent "1" codeword bits. The same convention (i.e., the systematic convention) applies to the first four trellis sections. New codes Graell i Amat et al. [1] listed good systematic recursive encoding matrices of rate k/(k+1); our code search produces encoding matrices with a larger variety of complexities compared to those listed in [1]. The main idea is to propose templates with the polynomial generator matrix G(D) in trellis-oriented form [6,7] and with fixed $TC(\Gamma_{\min})$.
This can be done by placing the leading (underlined) and trailing (overlined) 1's of each row of the "matrix module" in specific positions, leaving the other positions free to assume any binary value. Example 3. The following "matrix module" is associated with an ensemble of nonsystematic nonrecursive convolutional codes of rate 3/4 and trellis complexity of the minimal trellis module TC( min ) = 21.33 symbols per bit. Remark. In our code search we enforce that the positions of the underlined 1's are in the first k columns of the matrix module in order to assure that the information bits of the corresponding minimal trellis are in the first k sections. By varying the free positions (marked with an "*") in this matrix, several polynomial generator matrices G(D) are produced. For each matrix G(D) in this ensemble, we apply Steps (3) and (4) of the algorithm, i.e., we construct the minimal trellis module for G(D) and adopt the systematic convention to the information sections, and then calculate the pairs (d i , N i ), for i = 2, . . . , 6. The dominant term, d 2 , is called the effective free distance of the turbo code [16]. Codes of rate R = 2/4, 3/4, 3/5, 4/5 are listed in Table 1, which indicates the relationship between TC( min ) and the error performance expressed in terms of the pairs (d i ,N i ). The matrix G(D) shown in the table together with the systematic convention is used to construct the minimal trellis that attains the corresponding (d i , N i ), i = 2, . . . , 6. The existing codes taken from [1] are also indicated in Table 1. Their (minimal) trellis complexity, shown in the table, was obtained by applying the complete algorithm of Section 3.2 to their respective systematic recursive encoding matrices. For example, the matrices G sys (D) listed in [1, Table IV] for R = 3/4 yield TC( min ) = 10.67 (for ν = 2), TC( min ) = 32 (for ν = 3) and TC( min ) = 64 (for ν = 4). New codes with a variety of trellis complexities are found with our code search procedure. The effective free distance and/or multiplicity of the code are gradually improved as more complex codes are sought. Conclusions We present a method to construct the minimal trellis for a recursive systematic convolutional encoding matrix. Such a trellis minimizes the trellis complexity measure introduced by McEliece and Lin [6], which applies to trellisbased decoding algorithms. As a contribution of this work, several new convolutional encoding matrices having an equivalent systematic recursive encoding matrix, optimized for turbo codes, are tabulated. They provide a wide range of performance-complexity trade-offs, to serve several practical applications.
3,872.4
2012-11-21T00:00:00.000
[ "Computer Science" ]
Noise within: Signal-to-noise enhancement via coherent wave amplification in the mammalian cochlea The extraordinary sensitivity of the mammalian inner ear has captivated scientists for decades, largely due to the crucial role played by the outer hair cells (OHCs) and their unique electromotile properties. Typically arranged in three rows along the sensory epithelium, the OHCs work in concert via mechanisms collectively referred to as the “cochlear amplifier” to boost the cochlear response to faint sounds. While simplistic views attribute this enhancement solely to the OHC-based increase in cochlear gain, the inevitable presence of internal noise requires a more rigorous analysis. Achieving a genuine boost in sensitivity through amplification requires that signals be amplified more than internal noise, and this requirement presents the cochlea with an intriguing challenge. Here we analyze the effects of spatially distributed cochlear-like amplification on both signals and internal noise. By combining a straightforward mathematical analysis with a simplified model of cochlear mechanics designed to capture the essential physics, we generalize previous results about the impact of spatially coherent amplification on signal degradation in active gain media. We identify and describe the strategy employed by the cochlea to amplify signals more than internal noise and thereby enhance the sensitivity of hearing. For narrow-band signals, this effective, wave-based strategy consists of spatially amplifying the signal within a localized cochlear region, followed by rapid attenuation. Location-dependent wave amplification and attenuation meet the necessary conditions for amplifying near-characteristic frequency (CF) signals more than internal noise components of the same frequency. Our analysis reveals that the sharp wave cutoff past the CF location greatly reduces noise contamination. The distinctive asymmetric shape of the “cochlear filters” thus underlies a crucial but previously unrecognized mechanism of cochlear noise reduction. I. INTRODUCTION In the 19th century, Bernhard Riemann made the remarkable observation that the sound of a foghorn could be heard from a distance of five miles.He concluded that the human ear must be capable of detecting sounds that generate only subatomic motions of the eardrum [1]. During the succeeding one and a half centuries, Riemann's conjecture has been repeatedly verified [2].The extraordinary sensitivity of the mammalian ear can be attributed to the coordinated, piezoelectric behavior of outer hair cells (OHCs) [3].Arranged in rows along the sensory tissue (the organ of Corti), these cells act as actuators capable of boosting sound-induced vibrations of the sensory tissue by more than two orders of magnitude [4].The prevailing belief in the field posits that OHCs actively amplify sound-induced waves as they propagate along the spiral structure of the cochlea.Collectively, the mechanisms involved are known as the "cochlear amplifier."However, whether cochlear amplification constitutes a viable strategy for enhancing the sensitivity of hearing remains controversial.Because the minimum signal level to which sensory neurons can meaningfully respond is inherently limited by the level of internal noise (see, e.g., Ref. 
[5]), it remains unclear how the cochlear amplifier, while amplifying signals, can avoid amplifying the accompanying internal noise [6].Although the dominant sources of intracochlear mechanical noise remain to be firmly identified-these necessarily include both thermal noise and mechanical noise generated by stochastic gating of hair-cells ion channels (see, e.g., )-intracochlear mechanical noise is both present and measurable, and it depends on the same mechanisms that control signal amplification [8]. While previous work has focused on the effects of noise on the mechanical sensitivity of inner-hair-cell stereocilia (see, e.g., Refs. [5,7]), propagation and amplification of intracochlear mechanical noise remains unexplored. In this study, we investigate the impact of spatially distributed amplification on both signals and internal noise using two distinct but complementary approaches: a mathematical model of spatially distributed amplification and an active model of the cochlea.We begin by examining the simplest scenario, which involves a highly anisotropic, one-dimensional (1D) medium comprising a series of cascaded "noisy" amplifiers in which signals and noise propagate in only a single direction.We then move to the more challenging but biologically relevant case where the medium is nearly isotropic, so that signals and noise propagate and are amplified in both directions.Finally, we investigate signal and noise amplification within a simplified but physically realistic linear model of the cochlea.Importantly, our analysis concerns only noise sources that are located within the cochlea: The ear processes external noise in the same way that it processes signals [9].Furthermore, as we all know from cocktail parties, which sounds are "signals" and which are "noise" depends entirely on what one wants to listen to. A. Propagation of signals and noise in one direction We start by considering the simple scenario of the distributed "one-way" noisy amplifier, depicted in Fig. 1(a).The model consists of a chain of amplifiers that multiply the input signal S 0 by a factor g, representing the amplifier gain.The medium's noise is represented by noise sources that are summed with the propagating signal after each amplification stage.To remove the ambiguity regarding whether noise should be included before or after the amplification stage, the model includes noise sources located both at the input of the first amplifier and at the output of last.This model approximates a strongly anisotropic medium, where signals and noise propagate only in one direction [from left to right in Fig. 1(a)].This scenario accurately represents what occurs in many human-made systems, such as cascaded electronic amplifiers or radio repeaters-indeed, the formulae we derive here are essentially the same used to calculate the noise figure of cascaded electronic amplifiers [10]. In this model we can turn amplification "off"-and thereby model signal propagation in a lossless, noisy medium-by imposing the condition g = 1.Or we can turn it "on" by setting g ≠ 1.When g > 1, the chain amplifies signals as they propagate.When g < 1, the distributed amplifiers become distributed, attenuating "brakes."By comparing signal and noise for the three conditions (g = 1, g > 1, and g < 1), we quantify the impact of amplification and attenuation on the signal-to-noise ratio (SNR) along the chain [i.e., at the nodes Out 1,2…n in Fig. 1(a)]. 
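Before turning to the closed-form expressions below, the chain of Fig. 1(a) can be explored numerically. The following sketch is only illustrative; the gain, noise amplitude, and number of stages are arbitrary choices, not values from the text.

    def one_way_chain(signal_rms, noise_rms, gain, n_stages):
        """Signal and noise rms at the output of a chain of `n_stages` multipliers
        with gain `gain`; one noise source is injected at the input and one after
        each stage, and the (incoherent) noise powers add linearly."""
        s_out = signal_rms * gain ** n_stages
        n_power = sum((noise_rms * gain ** m) ** 2 for m in range(n_stages + 1))
        return s_out, n_power ** 0.5

    s_off, n_off = one_way_chain(1.0, 0.1, gain=1.0, n_stages=10)   # amplifier "off"
    s_on, n_on = one_way_chain(1.0, 0.1, gain=2.0, n_stages=10)     # amplifier "on"
    print("SNR enhancement R =", (s_on / n_on) / (s_off / n_off))   # > 1 for g > 1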
The root-mean-square (rms) amplitude of the signal at a given node n is simply the rms amplitude of the input signal passed through n multipliers, S_rms,n = g^n S_rms,0. Turning on the amplifier thus boosts the signal amplitude by the factor g^n (1). We focus our analysis on the physically relevant case where the noise sources are uncorrelated, meaning that the noise in the medium is spatially incoherent. For simplicity, we assume that the various noise sources are independent versions of the same stochastic process, with rms amplitude γ. In this case, the rms amplitude of the noise N_rms at node n can be calculated by incoherent summation (i.e., linear summation of power) of the various amplified noise terms. Specifically, the noise power at node n can be expressed as a geometric series, where the m-th term represents the contribution of the (n − m)-th source, amplified (or attenuated) m times. The expression for N_rms,n can be simplified based on different scenarios: N_rms,n = γ √[(g^(2(n+1)) − 1)/(g² − 1)] for g ≠ 1, and N_rms,n = √(n + 1) γ for g = 1 (2). Hence, turning on the amplifier boosts the rms noise by the factor √[(g^(2(n+1)) − 1)/((n + 1)(g² − 1))] (3). The SNR at node n is given by R_n = S_rms,n/N_rms,n. The effect of amplification on the system's sensitivity can be quantified by the SNR enhancement factor [11], R = R_on/R_off, where R_on and R_off are the SNR with the amplifier on (g ≠ 1) and off (g = 1), respectively. Figure 1(b) illustrates the enhancement factor as a function of g for two values of n. When R > 1 the signal is amplified more than the internal noise, and the SNR increases at the considered node. Conversely, when R < 1, the signal is amplified less than the noise, and the SNR decreases. It follows from Eqs. (1) and (3) that amplification (g > 1) boosts signals more than internal noise, increasing the SNR at all nodes. In particular, the larger the gain, the larger R, resulting in a greater improvement in SNR at any node. Additionally, the longer the chain of amplifiers, the larger the benefit of distributed amplification on the SNR and the greater the increase in the system's sensitivity. Conversely, when the amplifiers act as attenuators (g < 1), R < 1, meaning that the signal is attenuated more than the internal noise.

As the signal propagates along the line, noise from the growing number of contributing sources accumulates. A relevant measure of the resulting signal degradation is the noise factor F_n = R_n/R_0, which quantifies how the SNR degrades along the transmission line. In our case F_n = g^n √[(g² − 1)/(g^(2(n+1)) − 1)], which approaches 1 (i.e., no significant SNR degradation along the line) when g ≫ 1. Importantly, this result, namely that distributed amplification prevents signal degradation, generalizes to the case when internal noise sources are spatially coherent [12].

B. Signal vs noise amplification in isotropic active media

We now extend the simple chain-of-amplifiers model described above by considering the case of an active medium where waves propagate in both directions, as in the mammalian cochlea [13]. In our simplified treatment, we assume that the medium is isotropic. Thus, we assume that the amplifiers boost signals propagating in either direction by the same amount [Fig. 1(c)]. We simplify the analysis further by ignoring potential scattering effects within the medium and by assuming that the various noise sources all have equal amplitudes. In this case, however, we allow the amplifier gain to vary along the line. When considering signal and noise propagation to node n, the system can be depicted as the combination of two "one-way" amplification models [Fig. 1(d)], representing the contribution from sources located to the right and to the left of the node n. Note that whereas signals come only from the left, noise comes from both directions.

Signal propagation from a source node n′ to a receiver node n is encapsulated by the discrete Green's function G_{n,n′}. In the simplified model, where each node m amplifies the signal by the factor g_m, the Green's function is the product of the gains of the nodes lying between the source and the receiver. Note that the Green's function is symmetric: G_{n′,n} = G_{n,n′}. In this model, the signal is effectively a source at node 0; its amplitude at node n is therefore S_n = G_{n,0} S_0. The noise response at node n can be decomposed into the incoherent summation of the noise powers γ² G²_{n,n′} contributed by sources located to the left (n′ ≤ n) and to the right (n′ > n) of the node [Fig. 1(d)]. In this case, unlike the simpler anisotropic model of Fig. 1(a), amplification is not necessarily beneficial for the SNR. When the goal is to maximize the SNR at node n, the optimal gain distribution along the amplifier chain is one that amplifies on the signal side of the node and attenuates beyond it (g_m > 1 for m < n and g_m < 1 for m ≥ n). In this case, the system approaches the performance of the one-way amplification model at the nth node. Unlike the one-way model, however, it is not possible to increase the SNR at all nodes simultaneously [see Fig. 1(e)].
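The bidirectional model just described can also be sketched numerically. The gain profile below (3 basal to the target node, 0.1 from it onward) follows the example quoted in the caption of Fig. 1(e); the chain length, noise amplitude, and Green's function indexing are our illustrative assumptions.

    import numpy as np

    def green(n, n_prime, gains):
        """Discrete Green's function: product of the node gains between the source
        n_prime and the receiver n (indexing convention assumed for illustration)."""
        lo, hi = sorted((n, n_prime))
        return float(np.prod(gains[lo + 1:hi + 1])) if lo != hi else 1.0

    def snr_at(node, gains, s0=1.0, gamma=0.1):
        signal = s0 * green(node, 0, gains)                        # signal enters at node 0
        noise_power = sum(gamma ** 2 * green(node, m, gains) ** 2  # incoherent sum over all
                          for m in range(len(gains)))              # internal noise sources
        return signal / noise_power ** 0.5

    N, target = 10, 5
    g_off = np.ones(N)
    g_on = np.where(np.arange(N) < target, 3.0, 0.1)   # amplify basal, attenuate apical
    for node in (2, target, 8):
        print(node, snr_at(node, g_on) / snr_at(node, g_off))   # enhancement per node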
III. SIGNAL-VS-NOISE AMPLIFICATION IN THE MAMMALIAN COCHLEA

A. Preliminaries

Figures 2(a) and 2(b) illustrate the general function of the mammalian ear. Briefly, sound-induced vibration of the stapes (the third of the three middle-ear ossicles in the chain that connects the eardrum to the cochlea) displaces the fluid in the inner ear, launching hydromechanical waves that propagate slowly from the base (i.e., the entrance) toward the apex (i.e., the "end") of the cochlea. Cochlear wave propagation is frequency dependent, so that waves peak on the basilar membrane (BM) at locations that depend on frequency. In this way, the cochlea maps frequency into position, with higher frequencies mapping closer to the stapes. As they travel apically beyond their peak location, cochlear waves are dramatically attenuated. Cochlear wave propagation is also nonlinear (intensity dependent) and varies with cochlear health [e.g., in vivo vs postmortem, see Fig. 2(b)]. In particular, the location of maximal vibration depends both on sound level and on physiological status. However, at sound levels near the threshold of hearing, where issues concerning SNR are most pressing, cochlear mechanical responses are approximately linear. For this reason, we employ linear models for our analysis. At any location we define the characteristic frequency (CF) as the frequency that evokes the largest in vivo BM response at low sound levels; conversely, we define the characteristic place as the location where a wave of given frequency peaks on the BM at low sound levels.
In vivo, the cochlear amplifier boosts waves as they propagate towards their characteristic places, producing stronger and more spatially localized responses than in a dead cochlea [Fig.2(b)].Equivalently, because of the well-established symmetry between spatial and frequency tuning [14], the cochlear amplifier narrows the bandwidth of BM frequency responses measured at a given location (colloquially, these frequency responses are known as "cochlear filters").By narrowing the bandwidth of the cochlear filters, amplification enhances cochlear sensitivity through well-known principles [15].Indeed, narrowing the bandwidth of a receiver means reducing its response to background broadband noise relative to the response to a signal within the receiver passband.However, because it is theoretically possible to narrow the bandwidth of the cochlear filters without resorting to amplification (e.g., Ref. [16]), we make a dedicated effort to isolate the effects of signal amplification from the effects of amplifier-induced bandwidth reduction. B. Cochlear amplification In our analysis of cochlear mechanics, we consider a general linear model that describes the frequency-domain relationship between the velocity of the cochlear partition V CP and the pressure difference P 0 across it.The cochlear partition comprises the organ of Corti and the overlying tectorial membrane, and V CP denotes the velocity of its center of mass.The pressure-velocity relation is characterized by a phenomenological admittance, Y , defined as V CP = Y P 0 .(For simplicity, the implicit frequency dependence is not shown.)By applying mass conservation and Newton's second law, we have that (see Appendix A and Ref. [17]) In this equation, P ‾ is the pressure difference between the "upper" and "lower" fluid chambers [see Fig. 2(a)] averaged over their cross-sectional area A .The term Z = iωM represents the "longitudinal" impedance due to the effective acoustic mass M of the fluids, and the complex function α = P 0 /P ‾ relates the driving pressure to the scalae-averaged pressure [18]; it depends on wavelength and on model geometry.For simplicity, we assume 1D wave propagation, which allows us to set α = 1 and P ‾ = P 0 .The equations for 2D and 3D models are more complex and can be found in Appendix A. However, and as we will illustrate through numerical simulations [19], the qualitative implications derived from the 1D model remain applicable in more realistic 2D and 3D geometries. For simplicity, our analytic treatment focuses on pressure, whose spatial amplification is similar to that of BM velocity [20].The numerical simulations we show in Figs.2(d) and 2(e) verify that the main results apply to BM velocity in a more complete model [21].Importantly, the signal enhancement mechanism we elucidate here relies on active amplification that boosts the energy of sound-induced traveling waves more than that of internal noise; in these types of models, pressure amplification serves as a proxy of power amplification [22]. 
When we assume "reflectionless" boundary conditions at the apical and basal ends of the cochlea, the 1D Green's function reduces to a WKB-like expression controlled by the complex wave number k(x) (see Appendix). The pressure response when the cochlea is driven from the stapes is simply proportional to the Green's function evaluated with the source at the base [23]. When the spatial gradients of the cross-sectional area A and the wave number k are gentle enough, the gain per unit length g is primarily determined by Im k, the imaginary part of k. Specifically, the log-gain per unit length can be approximated as d log|G|/dx ≈ Im k. When Im k > 0, the gain per unit length is greater than 1, and the wave undergoes power amplification. On the other hand, when Im k < 0, the gain per unit length is less than 1, indicating attenuation. When the cochlear amplifier is inactive, Im k is everywhere negative (Im k < 0). But when the amplifier is maximally active, Im k is positive basal to the characteristic place and negative apical to it. In other words, the wave peaks near the point x̂ where Im k = 0, with Im k > 0 for x < x̂ and Im k < 0 for x > x̂ [24]. Importantly, waves cut off dramatically just apical to their characteristic place [see Fig. 2(b)], so that g ≪ 1 for x > x̂. In summary, whereas traveling waves are amplified (g > 1) before they reach their characteristic place x̂, they are rapidly attenuated (g ≪ 1) as they pass beyond it. According to our analysis of the bidirectional amplifier [Eq. (9) and Fig. 1(c)], this arrangement fulfills the conditions necessary for boosting the SNR at the characteristic place.
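The relation d log|G|/dx ≈ Im k can be illustrated with a toy wave-number profile in which Im k is positive basal to the characteristic place and strongly negative apical to it. The profile below is purely illustrative and is not the cochlear model used in the paper.

    import numpy as np

    x = np.linspace(0.0, 1.0, 1001)          # normalized distance from the stapes
    x_hat = 0.6                              # characteristic place of the probe tone

    # Toy Im k profile: modest gain basal to x_hat, sharp cutoff apical to it.
    im_k_on = np.where(x < x_hat, 4.0, -60.0)
    im_k_off = np.full_like(x, -2.0)         # passive cochlea: attenuation everywhere

    def log_gain(im_k):
        """Cumulative log-gain of the forward-traveling wave, log|G(x, 0)|."""
        return np.concatenate(([0.0], np.cumsum(im_k[:-1] * np.diff(x))))

    gain_on, gain_off = log_gain(im_k_on), log_gain(im_k_off)
    i_hat = np.searchsorted(x, x_hat)
    print("log-gain at the characteristic place, amplifier on :", gain_on[i_hat])
    print("log-gain at the characteristic place, amplifier off:", gain_off[i_hat])
    print("log-gain at the apical end, amplifier on           :", gain_on[-1])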
C. Amplification of narrow-band signals and noise

For the purposes of analyzing the effects of spatial amplification on SNR enhancement, we focus on a narrow frequency band centered around the signal frequency. Within an arbitrarily narrow frequency band, the internal noise can be approximated using spatially incoherent sinusoidal sources with randomly distributed amplitudes and phases. In particular, we assume that the noise sources are sinusoids with phases uniformly distributed on [0, 2π] and magnitudes given by a non-negative random variable with mean μ and variance σ². Using this simplified noise model allows us to examine the impact of signal amplification on SNR without the confounding effects of bandwidth reduction induced by amplification. The rms noise pressure at a given location x can be approximated by an incoherent sum proportional to γ [∫ |G(x, x′)|² dx′]^(1/2), where γ² = μ² + σ². This expression represents the statistical average of the noise pressure implied by the amplitude distribution of incoherent sinusoidal sources. The integral of |G(x, x′)|² captures the propagation of noise power from basal and apical noise sources to the location x. Assuming that the wave number at the cochlear entrance k_0 is independent of cochlear amplification, the SNR is the ratio of the signal response to this noise estimate; the two integrals of |G(x, x′)|², taken over the regions basal and apical to x, represent the propagated contributions of noise sources located basal and apical to x, respectively. The values of these integrals, calculated using a previously developed 2D model (see figure caption and Appendix B for details), are shown in Fig. 2(c). The figure shows that at the characteristic place, the contribution of apical noise sources is negligible compared to that of basal noise sources.

Figure 2(d) depicts the differential effects of amplification on signal and internal noise in the 2D cochlear model for frequencies of 10 and 30 kHz. As expected from the analysis of the bidirectional amplifier, turning on the cochlear amplifier boosts the signal more than the internal noise near the characteristic place. This is evident in the plot, where the in vivo signal amplitude is larger than that of noise near the region of maximal BM response. (Note that signal and noise levels are normalized so that postmortem they are the same at the characteristic place.) However, as one moves basally away from the characteristic place towards the cochlear entrance, amplification becomes more pronounced for the internal noise compared to the signal. The differential effect of amplification on signal and internal noise highlights the selective enhancement of the signal relative to the noise at the characteristic place, where the cochlea achieves optimal sensitivity for sound detection. Figure 2(e) shows the enhancement factor as a function of distance along the cochlea when both signals and noise are broadband. In these simulations, signal and noise have white spectra spanning the band from 4 to 70 kHz.

IV. DISCUSSION

While the inner ear possesses astounding mechanical sensitivity, the origin of this sensitivity within the context of amplification has been largely overlooked. Indeed, the textbook view in the field is that the cochlear amplifier increases the sensitivity of hearing by boosting the mechanical vibrations that displace the stereocilia of the sensory neurons. This simplistic account ignores the fact that the sensitivity of a system depends on the internal noise [6,26]. The handful of previous attempts at relating cochlear amplification with (true) cochlear sensitivity (e.g., Refs. [27,28]) ignore the contributions of wave propagation, relying instead on nonequilibrium oscillator models whose relevance to cochlear mechanics remains uncertain.

We have shown here that established mechanisms of cochlear wave amplification produce significant signal enhancement. The mechanisms are analogous to human-made wave-based systems such as lasers and active transmission lines [11,29]. Indeed, the cochlear amplifier has been likened to the gain medium of a laser amplifier [30]. By amplifying different frequencies in different regions, the cochlea effectively employs narrow-band "laserlike" amplification to boost sensitivity to both narrow- and broad-band signals [Fig. 2(d)]. The waveguide structure of the cochlea allows it to act as an inhomogeneous transmission line in which the cutoff frequency changes with location [31]. In this way, waves within the operating frequency range are greatly attenuated before reaching the apical end (see also Refs. [32,33]). Consequently, the cochlea eliminates noise "build-up" due to scattering from the apical termination, an effect which can greatly degrade the performance of active transmission lines [29].

Our results also highlight the functional importance of the asymmetric shape of the cochlear filters (i.e., of the BM frequency response measured at each location). The cochlear filters have a steep high-frequency flank arising from the wave cutoff apical to the CF place. As a result, near-CF waves coming from more basal locations are amplified, while those arising at more apical locations, where there are noise sources but no signal, are squelched. Thus, the steep wave cutoff underlies a peculiar form of spatial filtering of near-CF components, optimized to reject noise [34]. It is worth noting that the ear-horn-like geometry of the cochlea contributes significantly to this "optimized spatial filtering." The tapered geometry facilitates the propagation of waves from the base to the apex, allowing for efficient signal propagation and amplification [23].
The strategy elucidated here for enhancing signal to noise within the cochlea is compelling because it is simple, robust, and consistent with established facts of active cochlear mechanics: first and foremost, that traveling waves are initially amplified and then dramatically attenuated as they propagate. But to what extent does this mechanism boost the sensitivity of hearing in actual practice? Although a precise answer to this question is currently out of reach, since it requires details that are largely unknown and are likely to remain unknown for a long time (e.g., the power of the dominant intracochlear noise sources), considerable insight can be gained by reviewing the empirical evidence in light of our findings. Specifically, Nuttall and colleagues [8] measured BM-velocity noise in the base of sensitive guinea-pig cochleae, carefully minimizing external interference to ensure that the recordings were dominated by internal cochlear noise sources. At frequencies near CF, they found a BM mechanical noise floor approximately 15 dB below the BM vibration amplitude produced by tones at intensities corresponding to neural threshold. More recent recordings [35], in the apex of the mouse cochlea, yield similar results for the tectorial membrane [36]. In a nutshell, the experimental data suggest that the cochlear mechanical SNR, measured for narrow-band frequencies near CF in response to threshold-level tones, is on the order of 15 dB. Strikingly, in our model amplification enhances the SNR of the BM responses by a similar amount [Fig. 2(e)]. In other words, our results suggest that without amplification cochlear mechanical responses to faint but detectable sounds would fall perilously close to the internal noise floor. Although there is no scarcity of factors that impact the neural encoding of sound, including hair-cell noise [37,38] and the stochastic nature of auditory-nerve firing [39], our analysis suggests that spatially distributed cochlear amplification plays a central role in enhancing the sensitivity of hearing.

Numerical and semianalytical calculations

We cross-checked the quality of our calculations by comparing the 2D WKB approximation of the Green's function against numerical calculations performed in a tapered 2D finite-difference model [45,46], some of which are shown in Fig. 3(a). Because calculating the WKB approximation for α requires iterative methods that introduce various inaccuracies, we calculated α numerically, driving the finite-difference model from the stapes [Fig. 3(a)]. While the agreement between the WKB approximation and the numerical solution is generally excellent, the WKB approximation can introduce significant errors (due to the nonuniqueness of the WKB solution in the cutoff region [47]), rendering the calculations noisy, especially at high frequencies. For this reason, in the main text we present results obtained using the 2D finite-difference model; the differences between 2D and 3D models are relatively minor, although it is worth mentioning that in 3D the enhancement factors are slightly larger thanks to the more dramatic tapering of the cross-sectional area in 3D than in 2D models. Figure 3(b) shows the WKB solution for the Green's function of a 3D model with the same wave number k and height H as the 2D model in Fig. 3(a).

APPENDIX B: MODELING DETAILS

We performed all calculations using an "overturned model" of the mouse cochlea [41], whose parameters are the same as those used in Ref. [48]. In this model, unlike in classic models where the organ of Corti does not deform, the transverse (up-down) velocity of the center of mass is V_CP = (V_BM + V_top)/2, where V_BM and V_top indicate the velocity of the bottom (BM) and the top side (the reticular lamina and tectorial membrane) of the organ of Corti; their differential velocity is V_int = V_top − V_BM. Postmortem, V_BM and V_top are similar, so that to a first approximation V_int ≈ 0 in a passive cochlea, while V_int ≠ 0 in vivo. The center-of-mass velocity can be rewritten in the compact form V_CP = V_BM + V_int/2, where V_int is attributed to the piezoelectric action of the OHCs and is effectively the (velocity) source of wave amplification in the model. Because the BM stiffness is about one order of magnitude larger than that of the structures surrounding the OHCs, OHC forces produce large displacements of the top side of the organ of Corti while having secondary effects on local BM motion [49,50]. We therefore assume that internal OHC forces have negligible effects on BM motion, so that the mechanical admittance of the BM, Y_BM = V_BM/P_0, is constant, independent of whether the cochlear amplifier is turned "on" or "off." For simplicity, we also assume that Y_BM represents the admittance of a damped harmonic oscillator. By exploiting the relationships between V_CP, V_BM, and V_int, we can express the admittance of the cochlear partition as Y_CP = Y_BM [1 + (1/2) V_int/V_BM]. Following previous results, we assume that in vivo at low sound levels (1/2) V_int/V_BM ≈ iβτ, where β = f/CF is the normalized frequency and τ is a (real) constant. Following Ref. [23], we assume that Y_CP is scaling symmetric (i.e., a function only of the normalized frequency β) throughout the cochlea.

FIG. 1. (a) Effect of spatially distributed "one-way" amplification on signal and internal noise. The model consists of a chain of linear amplifiers (multipliers) with gain g; the effect of internal noise is simulated by adding noise before and after each amplification stage. (b) SNR enhancement (R) at the Nth node of the amplifier chain (shown for N = 10 and N = 20) as a function of the amplifier gain, g. (c) Bidirectional noisy amplification model. In this model, internal noise propagates and is amplified identically in both directions. (d) Equivalent one-way amplification model used to study the noise and signal response at the nth node. (e) Example of the enhancement factor at different nodes in a chain of N = 10 bidirectional amplifiers. In this example, the amplifier gain is chosen to improve the SNR at node 5 (see text) by setting g_m = 3 for m < 5 and g_m = 0.1 for m ≥ 5.

FIG. 2. (a) Simplified anatomical view of the mammalian cochlea. (b) BM magnitude responses in vivo (amplifier on) and postmortem (amplifier off) to stimulus tones of 10 kHz and 30 kHz calculated in a 2D finite-difference model of the mouse cochlea. (c) Apical and basal noise propagation functions for narrow-band noise centered around 10 kHz. At each location, these functions quantify the expected noise power due to distributed basal and apical noise sources of equal strength, respectively. The gray vertical line marks the characteristic place. (d) BM response magnitude to a sound signal and to narrow-band internal noise at 10 and 30 kHz for both postmortem and in vivo models. The curves are normalized so that the signal and noise magnitudes at the characteristic places (vertical gray lines) are the same postmortem. The difference between in vivo signal and noise responses demonstrates that turning on the amplifier boosts the SNR at the characteristic place. (e) Enhancement factor (i.e., the ratio between the SNR with the amplifier on and the amplifier off) along the cochlea calculated for narrow-band near-CF signals and noise and for broadband signals and noise (assumed white over the band from 4 to 70 kHz). The figure shows that the near-CF positive SNR enhancement caused by turning on the amplifier produces a global, broadband increase in SNR.

FIG. 3. (a) Example of the Green's function for a 2D model with a reflective basal boundary (R_st ≈ 0.14), calculated numerically in a finite-difference model (solid line) or with the WKB approximation [Eqs. (A11) and (A12), dashed lines]. The source locations for the various curves are indicated with vertical arrows; the source frequency is 10 kHz. (b) Approximate Green's function for a simplified 3D model (see text).
6,741
2024-01-23T00:00:00.000
[ "Physics", "Engineering" ]
Experimental demonstration of robust self-testing for bipartite entangled states Wen-Hao Zhang,1, 2 Geng Chena,1, 2 Peng Yin,1, 2 Xing-Xiang Peng,1, 2 Xiao-Min Hu,1, 2 Zhi-Bo Hou,1, 2 Zhi-Yuan Zhou,1, 2 Shang Yu,1, 2 Xiang-Jun Ye,1, 2 Zong-Quan Zou,1, 2 Xiao-Ye Xu,1, 2 Jian-Shun Tang,1, 2 Jin-Shi Xu,1, 2 Yong-Jian Han,1, 2 Bi-Heng Liu,1, 2 Chuan-Feng Lib,1, 2 and Guang-Can Guo1, 2 CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China I. INTRODUCTION In contrast to theoretical schemes of quantum information processing (QIP), where the imperfections of the involved devices are generally not taken into account, practically we often do not have sufficient knowledge of the internal physical structure, or the used devices cannot be trusted. The researches on this topic open a new realm of quantum science, namely, "device-independent" science [1-9], in which no assumptions are made about the states under observation, the experimental measurement devices, or even the dimensionality of the Hilbert spaces where such elements are defined. In this case, the only way to study the system is to perform local measurements and analyze the statistical results. It seems to be an impossible task if we still want to identify the state and measurements under consideration. However, assuming quantum mechanics to be the underlying theory and within the no-signalling constraints, a purely classical user can still infer the degree of sharing entanglement due to the violation of the Bell inequalities [10][11][12], simply by querying the devices with classical inputs and observing the correlations in the classical outputs. Such a device-independent certification of quantum systems is titled "self-testing", which was first proposed by Mayers and Yao to certify the presence of a quantum state and the structure of a set of experimental measurement operators [13]. In the past decade, although different quantum features such as the dimension of the underlying Hilbert space [14,15], the binary observable [16] or the overlap between measurements [17] can also be tested device-independently, most previous researches focus on the problem of certifying the quantum entangled state that is shared between the devices [18][19][20][21][22]. In particular, intensive studies have been devoted to the maximally entangled "singlet" state, which is the cornerstone for QIP. Meanwhile, a large amount of progress has been made overall on the self-testing of other forms of entangled states. For example, Yang and Navascué propose a complete self-testing certification of all partially entangled pure two-qubit states [23,24]. In addition, the maximally entangled pair of qutrits [25], the partially entangled pair of qutrits that violates maximally the CGLMP3 inequality [26,27], a small class of higher-dimensional partially entangled pairs of qudits [28], and multi-partite entangled states [6,29] and graph states [30] are also shown to be self-testable. Another interesting application is the possibility of self-testing a quantum computation, which consists of self-testing a quantum state and a sequence of operations applied to this state [31]. Self-testing aims to device-independent certifications of entangled states from measured Bell correlation. 
A typical self-testing protocol generally contains two steps, namely, self-testing criterion and self-testing bound. A self-testing criterion underlies the entire protocol, with which one 3 can uniquely infer the presence of a particular ensemble of entangled states, when observing the maximal violation of certain Bell inequality. These states should be different from each other by only a local unitary transformation, which does not change the Bell correlations. In a self-testing language, all the states in this ensemble have the same Schmidt coefficients and are identical up to local isometries. The state certification becomes much more complex if a non-maximal violation occurs in a black-box scenario. This non-maximal violation can result from a strongly nonlocal state with a non-optimal measurement, or a weakly nonlocal state with a (nearly) optimal state. As a result, one cannot infer the Schmidt coefficients of the tested state from the given criterion. It is of practical significance if one can still give a commitment about the lowest possible fidelity to the target state violating the inequality maximally, namely, self-testing bound. Combining the criterion and this lower bound, self-testing can be used to test the quality of entanglement in a device-independent way. Previous self-testing protocols for entangled states are limited to several special maximally entangled states of low-dimension. A further generalization to non-maximally entangled states [32] and multidimensional quantum entanglement [33][34][35] enables self-testing to be a more powerful tool in practical quantum information processes. Recently, A. Coladangelo et al. [36] have provided a general method to self-test all pure bipartite entangled states by constructing explicit correlations, which can be achieved exclusively by measurements on a unique quantum state (up to local isometries). In other words, this generalized method allows complete certifications not only for singlet state, but also for any pure bipartite entangled state of arbitrary dimensions. The criterion presented above is still a proof of ideal self-testing, which only considers ideal situations in which the correlations are exact. Practically, however, the robustness to statistical noises and experimental imperfections is essential for self-testing proposals. Despite of considerable progresses made in self-testing theory, the relevant experimental work is very rare due to the weak robustness. Recently, self-testing has been used to estimate the quality of a large-scale integrated entanglement source [37]. Nevertheless, it is necessary to verify the reliability of a certain self-testing criterion and relevant lower bound before utilizing them. In this work, under the fair sampling assumption, we implement a proof-of-concept experiment of Coladangelo's complete self-testing protocol with various entangled states up to four dimensions. We show that even in the appearance of practical imperfections, we can give a trustable description of the tested states, i.e., to quantify how far the tested state from the pure target state can be, in terms of fidelity, when the violation is not maximal. II. RESULTS Theoretical Framwork. The self-testing of arbitrary entangled two-qubit systems was resolved by Yang and Navascué [23]. 
In their work, they considered a scenario in which Alice and Bob share a pure two-qubit entangled state |ϕ⟩, and one can record the conditional probabilities P(a, b|x, y) in a Bell test. For a general two-qubit entangled state it has been proved that it can maximally violate a particular family of Bell inequalities [32], parametrized as β(α) = α⟨A_0⟩ + ⟨A_0 B_0⟩ + ⟨A_0 B_1⟩ + ⟨A_1 B_0⟩ − ⟨A_1 B_1⟩ (2), where 0 ≤ α ≤ 2 and the maximum quantum violation of it is b(α) = √(8 + 2α²). If such Bell correlations are duplicated by Alice and Bob on an unknown state |ϕ⟩, it is possible to construct a local isometry satisfying the relations given in [23], in which Π_A, M_A and Π_B, M_B stand for projective operators on the Alice and Bob sides. From these relations, it can be concluded that these Bell correlations can self-test |ϕ_target(θ)⟩. Intuitively, self-testing means proving the existence of local isometries which extract the target state |ϕ_target(θ)⟩ from the physical state |ϕ⟩. Hereby, Yang and Navascués gave the criterion to self-test all pure two-qubit states: when one observes a Bell correlation causing b(α_0) − β(α_0) = 0, the corresponding quantum state is identical, up to local isometries, to a certain entangled state |ϕ_target(θ)⟩ = cos θ|00⟩ + sin θ|11⟩, and θ is determined by the equation tan(2θ) = √(4 − α_0²)/(√2 α_0) (5). Hence, it is proved that any pure two-qubit state can be self-tested [23], since its violation of the Bell inequality of Eq. (2) is unique, up to local isometry. The authors also show that it is possible to generalize this method to bipartite high-dimensional maximally entangled states. However, it is still unclear whether an arbitrary pure bipartite entangled state is self-testable. Recently, A. Coladangelo et al. successfully addressed this long-standing open question by constructing explicit correlations built on the framework outlined by Yang and Navascués [36]. The concrete procedure of this self-testing process is illustrated in Fig. 1. The uncharacterized devices are assigned to Alice and Bob, and they share a pure state |ψ⟩ which is identical (up to local isometries) to |ψ_target⟩ = Σ_i c_i|ii⟩ (6). Initially they receive the inputs x and y deciding their choice of measurement settings and return the outcomes a and b, respectively. The strategy for self-testing high-dimensional states is to decompose |ψ⟩ into two series of 2 × 2 blocks, and thus each block can be self-tested through the two-qubit criterion described above [23]. Referring to the four-dimensional states we test in the experiment, this decomposition can be implemented as shown in Fig. 1(b). Grouping the P(a, b|x, y) elements for which x, y ∈ {0, 1}, with a, b ∈ {0, 1} and {2, 3}, one can self-test c_0|00⟩ + c_1|11⟩ and c_2|22⟩ + c_3|33⟩, respectively. Similarly, using measurement settings x ∈ {0, 2} and y ∈ {2, 3}, one can self-test c_1|11⟩ + c_2|22⟩ and c_0|00⟩ + c_3|33⟩. Such a decomposition procedure can determine the relative ratio of the two modes in each block. The weight of each block in the joint state can be further estimated from the total photon counts on this block, and eventually, the form of the tested state can be inferred. Experimental implementation and results. In order to implement this generalized self-testing criterion, a versatile entangled photon-pair source is constructed which can generate bipartite states up to four dimensions, as shown in Fig. 2. Initially entangled photon pairs are generated by pumping a PPKTP crystal in a Sagnac interferometer (SI) [38]. Afterward, both photons are encoded into polarization and path modes, and therefore, the joint two-photon state can be flexibly prepared into product, two-qubit, two-qutrit and two-qudit entangled states (see Methods for details).
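The numerical relationships entering the criterion above, Eqs. (2) and (5), are easy to tabulate. The sketch below only evaluates the formulas; it is our illustration and does not describe the experiment.

    import numpy as np

    def b_max(alpha):
        """Maximal quantum violation of the alpha-parametrized Bell inequality."""
        return np.sqrt(8.0 + 2.0 * alpha ** 2)

    def theta_from_alpha(alpha):
        """Angle of the self-tested state cos(theta)|00> + sin(theta)|11>."""
        return 0.5 * np.arctan(np.sqrt(4.0 - alpha ** 2) / (np.sqrt(2.0) * alpha))

    def alpha_from_theta(theta):
        """Inverse relation: alpha = 2 / sqrt(2 tan^2(2 theta) + 1)."""
        return 2.0 / np.sqrt(2.0 * np.tan(2.0 * theta) ** 2 + 1.0)

    for alpha in (0.5, 1.0, 1.5):
        theta = theta_from_alpha(alpha)
        print(f"alpha={alpha:3.1f}  b(alpha)={b_max(alpha):.4f}  "
              f"theta={np.degrees(theta):5.2f} deg  "
              f"alpha_check={alpha_from_theta(theta):.4f}")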
Accordingly, the detecting apparatus is also properly designed in order to perform projective measurements on these two modes. Profiting from this setup, we can implement 6 both self-testing and state tomography on the tested unknown states. As discussed above, the two-qubit self-testing criterion is the foundation for the entire theoret- In the experiment, we initially generate four two-qubit entangled states ρ j (j=0,1,2,3), each of which is close to a pure state |ϕ target (θ j ) . The value of θ j is selected to maximize the fidelity ϕ target (θ j )|ρ j |ϕ target (θ j ) . For each ρ j , three randomly generated local operators are applied, leading to a ensemble three unknown states ρ j(k) (k = 0, 1, 2) that we want to self-test. For a certain ρ j(k) , we perform Bell measurements and measure its violation β(α j ) with α j = 2/ 2tan 2 (2θ j ) + 1 (the inverse function of Eq. (5)). As b(α j ) − β(α j ) approaches 0, a self-testing conclusion can then be reached that the tested state have the same Schmidt coefficients with |ϕ target (θ j ) . In principle, the three states in each ensemble all violate the inequality β(α j ) maximally. In practical experiment, due to the imperfections in state preparing and unitary transformations, each realistic state is slightly mixed and the measured ultimate violation approaches b(α j ) with a tiny difference. When these imperfections are negligibly small, we can either assume the tested state to be an ideally pure state and infer the Schmidt coefficients of it from the given self-testing criterion, or give a lowest possible fidelity F S to the target state according to the self-testing bound (blue squares in Fig. 3). For four target states investigated here, we give the self-testing bounds through a semidefinite program (SDP) following the method of Ref. [22,23], and the results are shown in Fig. 3. The self-testing bound suggests a nearly-unity fidelity when the observed violation approaches the maximum. The tightness of the calculated bound becomes weaker when the tested states tend to be a product state. This is expected because, as α increases, the range of tolerable error gets smaller since the local bound and quantum bound coincide at α = 2 [22]. These self-testing conclusions can be verified by measuring the the actual fidelity F T of ρ j(k) to its corresponding target state |ϕ target (θ j ) , which can be calculated as F T = ϕ target (θ j )|ρ j(k) |ϕ target (θ j ) . For each ρ j(k) , the density matrix is reconstructed through a state tomography process. These actual fidelities are also labeled in Fig. 3 (red circles in Fig. 3), all of which are well above F S on self-testing bound. This result clearly suggests that the given self-testing criterion is valid for device-independent certification of two-qubit pure states, and the lower fidelity bound is also reliable for entanglement quality verification in a black-box scenario. For high-dimensional scenarios, we decompose the joint state into several 2 × 2 blocks, each of which can be self-tested into a pure target state with high fidelity, similar to the procedures implemented for two-qubit scenarios. The probabilities needed to self-test these blocks are mea-sured according to the strategy shown in Fig. 1(b). When d = 4, in the self-testing process, Alice has 12 possible projective measurements, and Bob has 16, which results in a total of 192 values of P(a, b|x, y) (see Supplementary Note 1 for details). 
For each block, 16 values of P(a, b|x, y) are recorded and the parameter α is ascertained to minimize b(α) − β(α) , and thus, the pure target state for each block is determined. Totally, 64 measurements are required to self-test the four decomposed blocks to be the closest pure target states. The summation of the counts in each group of 16 projective measurements can be used as the weight of the corresponding block. Therefore, the four two-qubit target states comprise the joint target state |ψ target with respective weights. Considering the fact that only a non-maximum violation is attained in self-testing each block, this inferred joint states should also deviate from the actual tested states. In principle, a high-dimensional self-testing bound can also be obtained from an SDP method. However, due to the appearance of significantly higher order moments in the expression of fidelity, the computing complexity is far beyond the limits of a single computer (see Supplementary Note 3 for details). Nevertheless, we can still verify the reliability of the self-testing results by measuring the density matrix of the tested state and calculating its fidelity F T to inferred target state |ψ target . We investigate totally 10 high-dimensional bipartite entangled states (see Supplementary Note 2 for details), and the self-testing results of their pure target states are shown in Fig. 4. The tested states are all approximately identical (up to local isometries) to |ψ target with a high fidelity F T . Device-independent certifications require no-signalling constraints [39] on the devices, which can be tested through the influence of the measurement of Alice (Bob) side on Bob (Alice) side [12]. Concretely, no-signalling constraints require the following relations to be satisfied: In experiment, all the 192 probabilities need to be recorded to verify above equations. In Supplementary Note 4, we give the results when the tested state is |ψ 0 . III. DISCUSSION Normally, a self-testing criterion is feasible for an ideal case where the involved Bell inequality can be maximally violated by a pure target state. However, due to unavoidable errors in experiment, the realistic states cannot attain the maximum violation, and thus, it becomes difficult to give an 8 exact description about the states. Fortunately, it is still possible to lower-bound the fidelity to the pure target state as the function of Bell violation. Especially, we show that when the utilized Bell inequalities are almost maximally violated, the generated states from our versatile entanglement source can be self-tested into respective target states with pretty high fidelities. We shall note that, as an initial experimental work for self-testing, our experiment constitutes to the proof of principle studies of self-testing. How to close the loopholes of Bell measurement in self-testing, such as detection, locality and freewill loopholes, are interesting open questions for future works. There are several ways that a purely classical user can certify the quantum state of a system, among which the standard quantum tomography is the most widely used method. However, it depends on characterization of the degrees of freedom under study and the corresponding measurements, thereby, it becomes invalid if the devices cannot be trusted. Furthermore, it requires performing and storing data from an exponentially large number of projection measurements. Recently, Chapman etal. 
proposed a self-guided method which exhibits better robustness and precision for quantum state certification [40]. Unsurprisingly, all of these benefits rely on the ability to fully characterize the measurement apparatus. The presence of self-testing suggests that the state certification can also be implemented in a device-independent way. Two main obstacles preventing self-testing from practical applications are the weak robustness and an extremely small quantity of self-testable states, and thus, the relevant experimental work is very rare. Thanks to the consecu- Concretely, when the four HWPs are set to be 0 • or 45 • , the outcome is a path mode entangled state cos θ|aa + sin θ|bb . When HWPs rotate single H or V polarized photons to a superposition state α|H + β|V (α, β = 0), the result is a two-qutrit state |ψ (d = 3) = c 0 |00 + c 1 |11 + c 2 |22 . When both H and V polarized photons are rotated to be superposition states, we can finally obtain a two-qudit state |ψ (d = 4) = c 0 |00 + c 1 |11 + c 2 |22 + c 3 |33 . All of the coefficients c i are decided by the rotating angle of the four HWPs and the value of θ. to self-test a four-dimensional state in Eq. (6). This joint state is decomposed into four 2 × 2 blocks, and each block can be self-tested according to the measurement strategy illustrated, while the whole state can be self-tested by combining these results.
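The tomography-based verification described in the Results, computing the fidelity F_T of a reconstructed density matrix to the pure target state, amounts to a single expectation value. The sketch below uses a made-up noisy two-qubit state purely for illustration; the noise level and the value of theta are not taken from the experiment.

    import numpy as np

    def target_state(theta):
        """|phi_target(theta)> = cos(theta)|00> + sin(theta)|11> (two qubits)."""
        psi = np.zeros(4)
        psi[0], psi[3] = np.cos(theta), np.sin(theta)
        return psi

    def fidelity(rho, psi):
        """F = <psi| rho |psi> for a pure target state."""
        return float(np.real(psi.conj() @ rho @ psi))

    theta = np.pi / 6
    psi = target_state(theta)
    rho_pure = np.outer(psi, psi.conj())
    rho_noisy = 0.95 * rho_pure + 0.05 * np.eye(4) / 4   # white-noise admixture (illustrative)
    print("F_T =", fidelity(rho_noisy, psi))             # 0.95 + 0.05/4 = 0.9625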
4,197.2
2019-01-11T00:00:00.000
[ "Physics" ]
Facile Synthesis and Optical Properties of Small Selenium Nanocrystals and Nanorods Selenium is an important element for human’s health, small size is very helpful for Se nanoparticles to be absorbed by human's body. Here, we present a facile approach to fabrication of small selenium nanoparticles (Nano-Se) as well as nanorods by dissolving sodium selenite (Na2SeO3) in glycerin and using glucose as the reduction agent. The as-prepared selenium nanoparticles have been characterized by X-ray diffraction (XRD), UV-Vis absorption spectroscopy and high resolution transmission electron microscope (HRTEM). The morphology of small Se nanoparticles and nanorods have been demonstrated in the TEM images. A small amount of 3-mercaptoproprionic acid (MPA) and glycerin play a key role on controlling the particle size and stabilize the dispersion of Nano-Se in the glycerin solution. In this way, we obtained very small and uniform Se nanoparticles; whose size ranges from 2 to 6 nm. This dimension is much smaller than the best value (>20 nm) ever reported in the literatures. Strong quantum confinement effect has been observed upon the size-dependent optical spectrum of these Se nanoparticles. Background Nanomaterials have become the focus of many research areas due to their unique physical and chemical properties. Various nanoparticles, such as titanium oxide, silver, gold, and cadmium selenide nanoparticles, are already being used in catalysis, stain-resistant clothing, sunscreens, cosmetics, and electronics [1][2][3]. Pure selenium, as well as selenium containing nano-materials, has excellent photoelectrical characteristics, semiconductor properties, and high biological activity [3]. Selenium nanomaterials with 1D structure are one of the key materials by virtue of their broad applications in optoelectronics devices such as rectifiers, photocopiers, photographic exposure meters, xerography, and solar cells due to its high photoconductivity [4][5][6]. As an important inorganic material, selenium also has attracted a great deal of attention for owing good semiconducting behavior with band gap value of 1.6 eV [7,8]. What is more important, nano selenium play an important role on the biology and medicine, in virtue of their excellent biological activities and low toxicity [9][10][11][12][13][14], which makes it species being capable of selectively killing cancer cells constitutes an urgent priority [15,16]. Selenium is an essential trace element, which is present in most foods, for human health. Selenium is present in foods mainly as the amino acids selenomethionine and selenocysteine. Selenium compounds are anti-oxidants that delete free radicals in vitro and improve the activity of the seleno-enzyme, glutathione peroxidase, which can prevent free radicals from damaging cells and tissues in vivo [17][18][19]. Recently, selenium nanoparticles have been used as additives in growth of corn and cereals as well as in vitamins replacing organic selenium compound to replenish the essential trace selenium in human's body. Over the past few years, Se nanoparticles, nanorods, nanowires and nanotubes [20][21][22][23][24] have been generated by many strategies [5,24,25]. For example, the hydrothermal method reported by Rao's group [26], the carbothermal chemical vapor deposition route suggested by Zhang's group [27], all required relatively rigorous reaction conditions. The chemical methods based on solution-phase procedures seem to provide an excellent route to fabricate Nano-Se. 
However, the size of these Se nanoparticles prepared by the above methods is very big (>20 nm), some of them are larger than 100 nm, which could somehow reduce the absorbance efficiency of selenium in human's body. Obviously, developing effective and environment friendly routes to fabricate large quantities of small Se nanoparticles with small particle size (<10 nm) is still facing challenges, but it is essential for health care application. Herein, we present a controllable and rapid approach to fabrication of small Se nanoparticles with a size less than 6 nm by using glucose as the reduction agent and glycerin as the stabilization agent. In comparison with the previous studies, this method is green and environment friendly since glycerin and glucose are compatible with the cells in the human bodies. Smaller size would improve the absorbance efficiency of selenium nanoparticles in a human's body and therefore would be widely used in supplying the trace selenium element in foods, vitamins and other medicines. Methods Na 2 SeO 3 powder, glycerin glucose powder, ethanol, 3mercaptoproprionic acid (MPA) (99%, Alfa Aesar) were all used without additional purification. Firstly, a stock precursor solution of Na 2 SeO 3 was prepared by dissolving 0.023 g Na 2 SeO 3 powder in a mixture of 20 mL distilled water and 2 mL ethanol, then 18mL glycerin was added into the above solution. The reduction agent was prepared by dissolving 1.0076 g glucose powder in a mixture of 20 mL distilled water and 1 mL MPA. The precursor solution of Na 2 SeO 3 was heated to 60°C, then the reduction agent of glucose was injected into the precursor solution. Afterwards, the mixture solution was gradually heated to 120°C for 3 min, the dispersion solution became dark red from limpidity, indicating the formation of Se nanoparticles through the following reduction reaction: In this way, Se nanoparticles have been fabricated, the residue solvent was composed of Na 2 O, gluconic acid, MPA and water. Excess glucose was applied so as to ensure the reduction reaction was fully completed. At different temperature steps, small amount (7 mL) of dispersion solution with Se nanoparticles was suctioned into a small glass bottle for optical and TEM measurement. Small selenium nanoparticles dispersing in the glycerin solution were thus obtained. The dispersion solution was aged for 45 days and then washed several times with distilled water. The selenium nanoparticles grew gradually into nanorods during the aging process. The as-prepared products were characterized by using various methods. The sample for the X-ray diffraction (XRD) was prepared by centrifugation of the dispersion solution with selenium nanoparticles at 12,000 rps/s for 30 min, and then powders were heated at 400°C for 1 h to fully crystallize the nanocrystals. The microstructure features of as-prepared Se nanoparticles were measured by a JEOL 2100F high resolution transmission electronic microscopy (HRTEM). UV-vis optical spectra of the dispersion solution with Se nanoparticles or nanorods have been collected by a Phenix 1900PC UV-Vis-NIR Spectroscopy. Structure Identification of Se Nanoparticles For the XRD measurement of Se nanparticles, some of the dispersion solution was purged by water and alcohol for three times following the centrifugation process. Se nanoparticles lost its activity and became dark when they are separated from the dispersion solution and exposed to air. 
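As a rough check of the quantities used in the recipe above, the molar amounts can be worked out from standard molar masses; the calculation below is ours and simply confirms that glucose is in large excess over selenite.

    # Molar masses in g/mol (standard values)
    M_NA2SEO3 = 172.95   # Na2SeO3
    M_GLUCOSE = 180.16   # C6H12O6

    n_selenite = 0.023 / M_NA2SEO3     # mol of Na2SeO3 (0.023 g)
    n_glucose = 1.0076 / M_GLUCOSE     # mol of glucose (1.0076 g)

    print(f"Na2SeO3: {n_selenite * 1e3:.3f} mmol")
    print(f"glucose: {n_glucose * 1e3:.3f} mmol")
    print(f"glucose / selenite molar ratio: {n_glucose / n_selenite:.1f}")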
To obtain information about the structure and size of the Se nanoparticles, two types of samples were prepared for the XRD and TEM measurements. The first was separated from the freshly synthesized Se colloidal suspension, which had been heated at 80 °C for 3 min; the second consisted of nanopowders calcined at 400 °C for 1 h from the centrifuged colloidal dispersions. The freshly prepared Se nanoparticles are amorphous (a-Se), whereas the Se nanoparticles annealed at 400 °C are well crystallized. The XRD diffraction peaks of the annealed sample can all be indexed to selenium [24,28]. Careful analysis of the XRD pattern revealed that the Se nanoparticles crystallize in the pure trigonal phase. The lattice constants determined from this pattern are a = 0.437 nm and c = 0.496 nm, consistent with the values reported in the literature (a = 0.436 nm, c = 0.495 nm) [29]. Optical Properties of Se Nanoparticles The whole experimental process was accompanied by a color change. When the MPA and glucose solution were first added to the Na2SeO3 precursor solution, the mixture became pellucid; later, as the temperature increased from 60 to 120 °C, the color of the dispersion changed from pale yellow to bright orange, followed by blood orange, and finally to deep red. This color change of the Se nanoparticle dispersion is manifested more clearly in the UV-visible absorption spectra in Fig. 2, which shows the optical spectra of the specimens prepared at 60, 80, 100, and 120 °C. To prevent the loss of activity of the fresh Se nanoparticles, the optical measurements were carried out on freshly prepared dispersions. The remaining solvent components (gluconic acid, MPA, glycerin) are all transparent, colorless solutions that show an absorption peak at around 240 nm but no absorption peaks in the visible wavelength region. Therefore, the absorption peaks in Fig. 2 all originate from the Se nanoparticles. The original absorption peak of the Se nanocrystals is located at 292 nm (a); it shifts to 371 nm (b) when the reaction temperature rises to 80 °C and further red-shifts to 504 nm (c) and 618 nm (d) when the Nano-Se suspension is heated to 100 and 120 °C, respectively. These absorption peaks of Nano-Se are thus coupled to the reaction temperature: the higher the temperature, the larger the particle size. The red shift of the optical spectra of the Se nanoparticles with increasing particle size is governed by the quantum size effect (Fig. 2). Selenium is a typical direct-gap semiconductor with a band gap energy of 1.6 eV (775 nm). When the particle size is smaller than the exciton Bohr radius, the band gap is enlarged by the quantum confinement effect. Therefore, the optical absorption spectrum shows a large blue shift of the band gap energy for Se nanocrystals compared with the bulk counterpart: the absorption peak shifts from 775 nm (bulk Se) to 292 nm for the Se nanoparticles fabricated at 60 °C. When the reaction temperature rises to 80 and 100 °C, the absorption peak red-shifts to 371 nm and further to 504 nm, respectively. Finally, the absorption peak moves to 618 nm when the suspension of Se nanoparticles is heat treated at 120 °C for 30 min and aged for another 45 days.
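The peak positions listed above can be translated into photon energies with E ≈ hc/λ ≈ 1239.84 eV·nm / λ, which makes the confinement-induced blue shift relative to bulk Se (775 nm, 1.6 eV) explicit. The sketch below is only a conversion of the wavelengths quoted in the text; it treats the peak position as a proxy for the optical gap, which is a simplification.

```python
# Convert the absorption-peak wavelengths quoted above into photon energies.
HC = 1239.84  # eV*nm

peaks_nm = {"60 C": 292, "80 C": 371, "100 C": 504, "120 C": 618, "bulk Se": 775}
e_bulk = HC / 775  # ~1.6 eV

for label, lam in peaks_nm.items():
    e = HC / lam
    print(f"{label:>7}: {lam} nm -> {e:.2f} eV (blue shift vs bulk: {e - e_bulk:+.2f} eV)")
```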
Relative to the bulk absorption edge at 775 nm (1.6 eV), the absorption peaks of the Se nanoparticles are blue-shifted by as much as 483 nm (from 775 nm to 292 nm for the smallest particles), i.e., by well over 2 eV in photon energy. The band gap energy of the Se nanocrystals decreases with increasing particle size, which in turn increases with the reaction temperature: the larger the particle size, the smaller the band gap energy. The shift of the absorption peaks with temperature therefore originates from the well-known quantum confinement effect, which also causes the color change of the Se nanoparticle suspension. Microstructure of Se Nanoparticles The microstructure and morphology of the as-prepared Se nanoparticles are shown in Fig. 3, which presents TEM images of Se nanoparticles prepared with MPA as the stabilizing agent at a pH value of 11. The particle size ranges from 2 to 10 nm, with an average of 4.8 nm. The image shows many small Se nanoparticles with only slight aggregation. The insets are three HRTEM images of individual Se nanoparticles, whose lattice fringes are clearly visible. Image (a) shows a very small Se nanoparticle less than 3 nm in size, image (b) a Se nanoparticle about 5 nm in size, and image (c) a somewhat larger particle of around 10 nm. The lattice fringes are clearly resolved in these nanocrystals, and most of the fringes are assigned to {101} lattice planes of the hexagonal structure. The spacing of these one-dimensional fringes has been determined to be 2.978 Å from the fast Fourier transform of the HRTEM images in reciprocal space, which matches the spacing of the {101} lattice planes. It is difficult to determine the orientation of these nanoparticles because only one-dimensional lattice fringes appear. The HRTEM images of individual nanoparticles further confirm the hexagonal structure of the as-prepared Se nanoparticles, consistent with the XRD results. The smallest Se nanoparticle observed in our HRTEM images is about 2 nm in diameter, as seen in Fig. 3d. The HRTEM images show that these nanoparticles are well crystallized with few defects; dislocations, stacking faults, and twins are not observed, indicating that these water-soluble Se nanoparticles are almost defect free. Until now, the fabrication of Se nanoparticles smaller than 10 nm has proven very difficult. The size of Se nanoparticles was reported to be larger than 20 nm [30], and some are even larger than 50 nm [31][32][33]. It appears very hard to control the rapid growth of Se nanoparticles with reaction time in traditional chemical processes. In our case, the size of the Se nanoparticles is well controlled. These Se nanoparticles exhibit a homogeneous size distribution ranging from 2 to 6 nm, with only occasional particles above 6 nm, as shown in Fig. 4. The HRTEM images in Fig. 4 were in fact taken directly from the suspension of Se nanoparticles after the specimen had been aged for 3 weeks, indicating that the Se nanoparticles are stable in the glycerin-containing solution. However, when these Se nanoparticles were washed several times with water, the dispersion turned black because the particles grew to sizes larger than 50 nm. Some of the particles even grew into nanorods with lengths of several hundred nanometers.
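The {101} fringe spacing quoted above can be cross-checked against the lattice constants obtained from XRD using the standard hexagonal d-spacing relation 1/d² = (4/3)(h² + hk + k²)/a² + l²/c². The sketch below uses the lattice constants quoted in the text; it is a consistency check only, not a refinement.

```python
import math

def d_hex(h, k, l, a, c):
    """Interplanar spacing for a hexagonal (trigonal) lattice, in the same units as a and c."""
    inv_d2 = 4.0 / 3.0 * (h * h + h * k + k * k) / a**2 + l * l / c**2
    return 1.0 / math.sqrt(inv_d2)

# Lattice constants in nm: values determined above and the literature values [29].
for a, c, label in [(0.437, 0.496, "this work"), (0.436, 0.495, "literature [29]")]:
    d101 = d_hex(1, 0, 1, a, c) * 10.0  # nm -> Angstrom
    print(f"{label:>15}: d(101) = {d101:.3f} A  (HRTEM fringe spacing: 2.978 A)")
```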
Once the glycerin had been removed from the surface of the Se nanoparticles by the cleaning process and the particles were aged in air for more than 3 months, the Se nanoparticles lost their activity and then grew rapidly into nanorods along the [022] or [110] direction (Fig. 5). The morphology of these nanorods, which developed from the aging of small Se nanoparticles, is shown in Fig. 5. Such Se nanorods are rarely reported in the literature [28][29][30]. HRTEM images and Fourier transforms of these Se nanorods are shown in Figs. 6 and 7, which display the hexagonal and monoclinic structures, respectively. There are two nanorods in Fig. 6, both with hexagonal structure: rod A is viewed along the [0111] zone axis and rod B along the [1213] zone axis, and their growth directions are (110) and (001), respectively. In contrast, the Se nanorod in Fig. 7 has a monoclinic structure and grows along the (022) direction. The aged Se nanoparticles therefore all transformed into nanorods, which consist of two crystal structures, one hexagonal and the other monoclinic. When the Se nanoparticles were dispersed in the glycerin solution, the small particles were quite stable and did not grow into large particles or nanorods with aging time. Glycerin plays a key role in suppressing the growth of the Se nanoparticles and keeps them highly active in solution. After the particles were cleaned and the glycerin removed, the Se nanoparticles lost their activity and grew rapidly into nanorods along a particular direction. Meanwhile, glycerin is a biocompatible organic compound, so Se nanoparticles stabilized by such a biologically friendly agent may find application in health products that provide a Se source for the human body. Fig. 6 HRTEM image of Se nanorods with hexagonal structure after the particle specimen was cleaned and aged for 3 months. Fig. 7 HRTEM image of a Se nanorod with monoclinic structure after the particle specimen was cleaned and aged for 3 months.
3,508.4
2017-06-12T00:00:00.000
[ "Materials Science", "Physics" ]
Review on globularization of titanium alloy with lamellar colony The globularization of titanium alloys with lamellar colonies during hot working is an important way to obtain a fine and homogeneous microstructure with excellent mechanical properties. Because of its great technological importance, globularization has attracted wide attention and much research. This paper presents a systematic study of the state of the art of globularization of titanium alloys, covering globularization mechanisms, prediction models, and the effects of hot-working parameters and microstructure parameters. First, the shortcomings of the well-known globularization mechanisms (dynamic recrystallization, boundary splitting, shearing mechanism, and termination migration) are summarized. The prediction models are then compared and analyzed in tabular form. In addition, the effects of hot-working parameters (strain, strain rate, temperature) and microstructure parameters (alpha/beta interface, geometrically necessary dislocations, and the high-temperature parent beta phase) are systematically summarized and analyzed. This study also explores the difficulties and challenges faced in the precise control of globularization. Finally, an outlook on the development of globularization of titanium alloys is provided, covering the microstructure evolution of three-dimensional lamellar alpha, the relationship between the lamellar colony and mechanical properties, and the effect of severe plastic deformation on globularization. Introduction Titanium alloys have become indispensable structural materials for advanced aircraft owing to their high strength, good thermal stability, strong corrosion resistance, and good weldability [1]. The globularization of titanium alloys with lamellar colonies during hot working produces the fine and homogeneous microstructure used extensively in the aerospace field, which ensures both optimal creep and optimal fatigue properties [2,3]. Inhomogeneous globularization [4] often leads to the formation of micro-defects (mixed grains, microstructure heredity, and macrozones [5,6]) that seriously affect the mechanical properties of aviation components. Therefore, precise control of the globularization of titanium alloys with lamellar colonies is of great significance for optimizing hot-working parameters, predicting and regulating microstructure, and improving the properties of titanium alloy products. The shortcomings of the well-known globularization mechanisms make it difficult to control globularization during hot working. The main ones are as follows: (a) the applicable conditions of each globularization mechanism are inconsistent (dynamic recrystallization at high temperature and low strain rate [7]; the shearing mechanism [8] at low temperature and high strain rate); (b) the microstructure parameters considered by each mechanism are different (the dihedral angle of the boundary in boundary splitting [9,10]; the solute concentration gradient in termination migration [11,12]); (c) multiple mechanisms coexist owing to the diversity of lamellar colonies and meso-scale heterogeneous deformation [13][14][15][16]. Therefore, a unified globularization mechanism coupling multiple factors needs to be developed. The limited applicability of the prediction models means that they cannot effectively guide the control of globularization of titanium alloys with lamellar colonies; the advantages of the individual prediction models differ.
The relationships between hot-working parameters and globularization fraction are predicted by empirical models [17,18] and neural network models [19]. Processing maps [20,21] can be used to determine the process-parameter window for a steady globularization state without micro-defects. Internal variable models [3,22] can precisely predict the evolution of globularization fraction, dislocation density, and flow softening. Phase field models [23,24] can predict the morphological evolution of a single alpha lamella during static globularization, without considering hot deformation. Meso-scale heterogeneous deformation is well captured by microstructure-based finite element models [13,16,25,26]. The hot deformation of titanium alloys in the (alpha + beta) phase field is a complex thermo-mechanical process with coupled effects of multiple fields and multiple factors [27]. This thermo-mechanical process is very sensitive to hot-working parameters and microstructure parameters, which makes precise control of globularization difficult. The detailed characteristics are as follows: (a) complex and diverse morphologies (initial Widmanstätten alpha [4], secondary alpha [28], and fine acicular martensitic alpha [29]) lead to differences in globularization kinetics; (b) meso-scale heterogeneous deformation occurs (strain localization at the lamellar scale, strain partitioning at the colony scale, and macro-deformation bands at the polycrystal scale [15]); (c) hot-working and microstructure parameters interact in complex ways (flow softening due to loss of interfacial coherency [3], evolution of geometrically necessary dislocations (GND) at the alpha/beta interface [29], and the effect of lattice rotation [14,30] on meso-scale heterogeneous deformation). In general, an in-depth understanding of the effects of hot-working parameters and microstructure parameters allows better control of the globularization of titanium alloys with lamellar colonies. To deepen this understanding and enable precise control, this paper summarizes research on globularization, mainly covering globularization mechanisms, prediction models, and the effects of hot-working parameters and microstructure parameters. The puzzles and challenges faced in controlling globularization (the 'sluggish' problem [31] and macrozones), together with corresponding solutions, are also put forward. 2 State of art on globularization mechanism 2.1 Globularization mechanisms of lamellar colony 2.1.1 Dynamic recrystallization Some researchers have considered dynamic recrystallization to be the major mechanism triggering globularization in the (alpha + beta) phase region. The experiments of He et al. [7] found that the lamellar colony was split into necklace-like strings of grains with similar orientation, which is the major signature of continuous dynamic recrystallization, as shown in Figure 1a. Lamellar colonies squashed into fractured short-bar alpha by large deformation are a feature of geometric dynamic recrystallization [35]. Nonetheless, dynamic recrystallization and dynamic globularization cannot be treated as equivalent, for the following reasons. Firstly, the kinetics of dynamic recrystallization and globularization are similar but not identical: the critical strain of dynamic globularization is larger than the strain corresponding to the peak stress [36], as shown in Figure 2.
Secondly, dynamic recrystallization is an important flow-softening mechanism, whereas the flow softening of titanium alloys during globularization is mainly caused by kinking, rotation, and loss of the Hall-Petch effect [3,37] of the lamellar colony. In addition, the sub-grain boundaries and the high-energy unstable interfaces formed during dynamic recrystallization are necessary preconditions for globularization. (Figure 1: (a) dynamic recrystallization [7]; (b) boundary splitting [32]; (c) termination migration [33]; (d) shearing mechanism [34]; (e) unified globularization mechanism coupling multiple factors.) Boundary splitting Boundary splitting [9,10] is generally accepted as a globularization mechanism of titanium alloys, as shown in Figure 1b. Sub-boundaries form in the initial lamellar alpha, or under the effect of shearing, after a certain amount of cold/hot working. The dihedral angle at the sub-boundary is 90° and therefore unstable; to reduce the interfacial tension, this dihedral angle decreases gradually as the beta phase wedges into the alpha/beta interface. Meanwhile, the alpha/beta interface reverses and the lamellar structure transforms into a globular structure, thus achieving globularization. Nevertheless, boundary splitting does not consider the effects of dislocation density [39] and solute concentration gradient [40] on globularization. Globularization is a thermal diffusion process governed by Fick's second law of diffusion: different solute concentrations affect the migration kinetics of the interface and hence the wedging of the beta phase, while the dislocation density accumulated at the alpha/beta interface offers a channel and a driving force for solute diffusion. Shearing mechanism According to the shearing mechanism shown in Figure 1d [8], under shear strain the lamellar alpha colonies with favorable orientations accommodate the shear deformation, whereas rotation occurs in alpha with unfavorable orientations. Dislocations then accumulate along the shear line. Dynamic recovery annihilates dislocations of opposite signs; because of the remaining dislocations of the same sign, interfaces form along the shear lines and migrate to minimize the surface energy, thereby promoting the formation of globular alpha. The limited range of applicable conditions is the apparent shortcoming of the shearing mechanism. Under conditions of low temperature and high strain rate, the slow rate of atomic diffusion or the inadequate diffusion time makes it difficult for lamellar alpha with a large aspect ratio to globularize through interface migration, whereas lamellar alpha with a low aspect ratio can globularize owing to strong shear fracture. Shearing therefore provides micro-defects and the conditions for interface migration during dynamic globularization. Termination migration Termination migration [11,12] is an important static globularization mechanism, shown in Figure 1c. Here the lamellar colony is considered as a whole, whereas a single alpha lamella is considered in all the models described previously. A concentration gradient exists in the alpha lamella and drives mutual atomic diffusion. Over time, some alpha lamellae coarsen and others shorten, thus achieving globularization; this termination migration occurs most evidently at the terminations of the lamellar alpha. Although termination migration can complete the final globularization, it cannot be fully separated from the preceding dynamic globularization.
Because the lamellar colony of titanium alloys is very stable, it cannot be spheroidized by cyclic heat treatment as steel can. Only alpha lamellae with high-energy, unstable interfaces generated by strong hot deformation can be spheroidized by termination migration at high temperature. Shortcomings of globularization mechanism The research status and results described above indicate that the shortcomings of the globularization mechanisms are apparent. Firstly, the globularization mechanisms are qualitative descriptions [32] and lack quantitative characterization of microstructure parameters such as dislocation density, crystal orientation, and substructure. Secondly, the four globularization mechanisms mentioned above are applicable only to specific conditions (dynamic recrystallization at high temperature and low strain rate; termination migration during heat treatment). Therefore, in view of the coexistence of multiple mechanisms, a new globularization mechanism needs to be developed. Unified globularization mechanism In light of the coexistence and advantages of multiple mechanisms, a new unified globularization mechanism coupling multiple factors is proposed based on the work of Roy [41], as shown in Figure 1e. Its detailed features are as follows. Firstly, the meso-scale deformation inhomogeneity between alpha and beta causes strain localization at the phase interface, leading to the occurrence of micro-defects (unstable phase interfaces and high-energy sub-grain boundaries) and to dislocation density accumulating at the phase interface. Secondly, as the degree of deformation increases, differences in solute concentration gradient drive solute atoms through the dislocation channels and across the interface, triggering the formation of thermal grooves. Thirdly, the diffusion rate and path of the solute elements are affected by the curvature effect, the solute drag effect [42], and the solute concentration gradient. The alpha-stabilizing solute Al is concentrated in the interface near the thermal groove, while the concentration of the beta-stabilizing solute Mo is lower there. The resulting concentration difference drives diffusion in opposite directions, leading to gradual dissolution of the alpha phase near the thermal groove and deeper wedging of the beta phase. 3 Status of prediction models on globularization 3.1 Limited applications of prediction models The development of modeling and simulation tools is very important and can provide guidance for the optimization of microstructure and mechanical properties. Computer-based simulation tools can support the development of next-generation components and processes in the aerospace industry [43]. Based on studies of the globularization mechanism, many researchers have established prediction models of globularization. These models can be summarized as follows: empirical models [17,18], neural network models [19], internal variable models [3,22], processing maps [20,21], phase field models [23,24], and microstructure-based finite element models [13,16,25,26]. A comparison and analysis of these prediction models is given in tabular form in Table 1; for each model, the table indicates whether it predicts the globularized grain size and volume fraction, the morphology evolution, and the meso- and macro-mechanical response, and whether it considers hot-working parameters. These models each possess prominent advantages.
Based on the dynamic recrystallization mechanism, empirical models and neural network models can describe the quantitative correlation between hot-working parameters (temperature, strain, and strain rate) and microstructure evolution parameters (dynamic/static globularization volume fraction, initiation strain, completion strain, annealing completion time). To some extent, internal variable models and phase field models can reflect the microstructure evolution and the complex physical mechanisms of globularization during hot working. However, the shortcomings of these models limit their application. Because it does not consider the effects of hot-working parameters, the phase field model, which is based on termination migration, cannot effectively predict the rate of globularization, its initiation position, or the globularized volume fraction. More generally, these models lack the capacity to couple microstructure parameters with hot-working parameters during globularization, so it is impossible to precisely control the globularization of titanium alloys with lamellar colonies and guide production practice with these prediction models alone. Multi-scale CACPFEM modeling of globularization Currently, studies on multi-scale modeling of globularization during hot working are scarce. A multi-scale model combining crystal plasticity finite elements with a cellular automaton [44][45][46][47] can simulate microstructure evolution and meso-scale deformation under the effects of hot-working parameters and microstructure parameters, which provides a good starting point for modeling globularization. Therefore, a framework for a multi-scale cellular automaton and crystal plasticity finite element model (CACPFEM) of globularization is proposed, as shown in Figure 3. The microstructure-based crystal plasticity finite element model (CPFEM) includes the evolution of GND [48][49][50][51][52][53] at the alpha/beta interface, the reconstruction of the high-temperature beta phase [54][55][56][57][58], and the three-dimensional reconstruction of the alpha lamellae [59,60]. Local variables such as dislocation density, stored deformation energy, and grain orientation can be computed and then input into the cellular automaton (CA) model, which accounts for the curvature effect of the phase interface and the solute concentration gradient to simulate the morphological evolution of the alpha lamellae. This multi-scale coupled model can therefore provide not only the meso-scale deformation (stress and strain fields) but also the micro-scale morphological distribution. Effect of hot working and microstructure The globularization of the lamellar colony is affected by the coupled effects of hot-working parameters and microstructure parameters. However, studies of the influence of microstructure parameters on lamellar globularization are few and largely limited to lath thickness [62], Burgers orientation [63], and morphological effects on anisotropic deformation [16]. In this section, the effects of hot-working parameters (strain, strain rate, temperature) and microstructure parameters (alpha/beta interface [64,65], GND, high-temperature beta phase [2,66]) on globularization are systematically summarized. The puzzles and challenges faced in controlling globularization (the 'sluggish' problem [31] and macrozones) are also pointed out, and some innovative solutions are proposed.
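To make the empirical kinetics models referenced above more concrete, the following sketch assumes an Avrami (JMAK-type) dependence of the dynamically globularized fraction on strain, which is a common functional form in this literature but is stated here as an assumption; the critical strain and rate constants are illustrative placeholders, not values fitted to any particular alloy.

```python
import numpy as np

def globularization_fraction(strain, eps_c=0.3, k=1.5, n=2.0):
    """Avrami/JMAK-type dynamic globularization kinetics:
    X = 1 - exp(-k * (eps - eps_c)^n) for eps > eps_c, X = 0 otherwise.
    eps_c (critical/initiation strain), k and n are illustrative parameters only."""
    x = np.clip(strain - eps_c, 0.0, None)   # no globularization below the critical strain
    return 1.0 - np.exp(-k * x**n)

# Globularized fraction versus imposed strain for the placeholder parameters above.
for eps in np.linspace(0.0, 2.0, 9):
    print(f"strain = {eps:4.2f}  ->  globularized fraction = {globularization_fraction(eps):.2f}")
```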
Effect of hot working on globularization 4.1.1 Effect of strain Without strain, the globularization of the lamellar colony cannot be accomplished by heat treatment alone. From a microscopic perspective, strain promotes the formation of micro-defects and the accumulation of distortion energy, destabilizes the alpha/beta interface, and increases the probability of lamellar cutting. Moreover, the slip systems in the close-packed hexagonal alpha phase are limited and the critical resolved shear stress (CRSS) of each slip system is relatively large [67,68]. From the perspective of macroscopic deformation, strain also affects the kinetics of globularization [69,70]: the globularization fraction increases with increasing strain [71], and the initiation and completion strains of globularization increase with the thickness of the lamellar alpha. The impact of strain on globularization is also related to the composition of the alloy. In (alpha + beta) working, the initiation and completion strains of globularization of near-alpha titanium alloys are larger than those of near-beta titanium alloys; the work of Wu [72] and Wang [31] verifies this conclusion. Abundant beta-stabilizing solutes result not only in relatively fine lamellar alpha but also in a high solute diffusion rate. Effect of strain rate Strain rate has a significant effect on the globularization of titanium alloys with lamellar colonies, especially on the rate of globularization. A high strain rate leads to meso-scale inhomogeneous plastic deformation (flow instability, strain localization, and adiabatic shear bands [73]). Although a high strain rate promotes fracture and produces more micro-defects, it inhibits interfacial migration, so its globularization efficiency is not high. Interfacial migration is more complete at low strain rates, which accelerates the wedging of the beta phase and causes fracture of single alpha lamellae owing to the faster diffusion rate and sufficient diffusion time. Therefore, high temperature and low strain rate are suitable hot-working parameters for globularization during hot working. The globularization mechanism also differs with strain rate: some studies have shown that a high strain rate readily causes shearing of the alpha lamellae, whereas dynamic recrystallization is the usual globularization mechanism at low strain rates. Effect of temperature The effect of temperature on globularization is complex. The detailed characteristics are described as follows: (a) temperature affects not only the deformation inhomogeneity but also the interfacial diffusion rate; (b) temperature also has a great influence on the complex and diverse morphologies of the lamellar colony (thickness, volume fraction, and the kinetics of globularization).
In detail, fine lamellae such as martensite become thicker with increasing temperature in the two-phase region [74], whereas the coarse lamellae of furnace-cooled microstructures change in the opposite way; (c) temperature may also affect the globularization mechanism: at high temperature the mechanism is dynamic recrystallization, while at low temperature it is usually boundary splitting; (d) the effect of temperature is manifested not only during hot deformation but also during subsequent heat treatment. The lamellar alpha transforms into short-bar or band-like shapes after deformation in the two-phase region, and alpha grains with these shapes are further split or globularized by annealing. Effect of microstructure on globularization 4.2.1 Effect of alpha/beta interface The Burgers orientation relationship (OR) [63,64] of titanium alloys with lamellar colonies, shown in Figure 4a, provides the basis for the formation of coherent or semi-coherent interphase boundaries, which have been verified by high-resolution transmission electron microscopy (TEM) [64]. At present, research is focused on the influence of the loss of coherency, shown in Figure 4b, on flow softening. The loss of interfacial coherency is accompanied by the loss of Hall-Petch strengthening, which manifests itself as strong flow softening [3]. The incoherent alpha/beta interface hinders dislocation slip, and the interface energy can provide the driving force for the formation of micro-defects during globularization. However, studies of the effect of the alpha/beta interface on solute diffusion during globularization are scarce. The diffusion of solute atoms, especially Mo and V, is affected by the solute concentration gradient [33,40], the solute drag effect [42] on interface migration, and the loss of coherency of the alpha/beta interface. Although the literature [75] suggests that the loss of coherency increases the rate of diffusion along the interphase boundaries and thereby accelerates the globularization of the two phases, specific experimental work and quantitative analysis are still needed. In addition, the curvature of the phase interface can cause differences in the driving force for solute diffusion, thus affecting the kinetics and morphology of beta wedging. Effect of geometrically necessary dislocations The study of Hiroaki [29] found that the GND density was reduced after globularization but remained high in the lamellar alpha. The evolution of GND at the interface affects the formation of micro-defects and the thermal diffusion of solutes, and the effect of GND [14,30] on the meso-scale heterogeneous deformation of the lamellar colony is very important. It is therefore necessary to study the effect of GND on the globularization of the lamellar colony. At present, experimental research on GND at the alpha/beta interface is still limited to small strains and local grain regions. According to the research of He [76], the GND density at the alpha/beta interface is high; however, the deformation distortion of the beta phase is severe, which makes the calculation of the GND density of the beta phase difficult, as shown in Figure 5. Consequently, the stress/strain values calculated by cross-correlation within a grain are relative to a reference point [77][78][79]; the absolute values cannot be obtained, and comparisons between different grains have no practical significance.
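For orientation, the small-strain EBSD-based estimates discussed above are usually derived from local misorientation maps. The sketch below uses the standard lower-bound relation ρ_GND ≈ θ/(b·x), where θ is the local misorientation, x the EBSD step size, and b the Burgers vector; the Burgers vector value for a-type slip in alpha-Ti and the misorientation values are assumptions for illustration, not measured data from the cited studies.

```python
import numpy as np

def gnd_density(misorientation_deg, step_um, burgers_nm=0.295):
    """Lower-bound GND density (1/m^2) from local misorientation via rho ~ theta / (b * x).
    burgers_nm = 0.295 nm is the a-type Burgers vector of alpha-Ti (assumed value)."""
    theta = np.deg2rad(misorientation_deg)
    return theta / (burgers_nm * 1e-9 * step_um * 1e-6)

# Synthetic kernel-average-misorientation values (degrees) near an alpha/beta interface,
# evaluated for a 0.2 um EBSD step size.
kam_deg = np.array([0.2, 0.5, 1.0, 1.5])
print(gnd_density(kam_deg, step_um=0.2))   # roughly 6e13 ... 4e14 m^-2
```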
Quantitative determination of the full-field GND at large strain is currently not available. In addition to experimental methods, simulation and modeling are important research tools and shed new light on the problem of measuring the full-field GND at large strain. An effective strategy is to characterize the evolution of GND at small strain quantitatively using the experimental measurements described above and then use a finite element simulation model to study the evolution of GND at large strain, thereby studying the effect of GND on globularization. Owing to the nonlocal character of GND [50,80,81], its numerical implementation in an FEM framework raises a number of fundamental problems; the specific research work can be found in the literature [51,52,82]. Effect of high-temperature parent beta phase As is well known, the beta phase plays an important role in the deformation compatibility between alpha lamellae at high temperature. Moreover, the beta phase is the soft phase and experiences severe plastic deformation. The beta phase also plays an important role in promoting wedging and the thermal diffusion of solute atoms during globularization. However, studies on the effects of the beta phase on globularization during hot working are limited. In Germain's study [5], local characterization of the beta phase orientation within the lamellar colonies was hard to obtain by existing measurement techniques, because of the lattice distortion of the beta phase and the low content of residual beta caused by the phase transformation. These factors make the characterization of the beta phase orientation by electron backscatter diffraction (EBSD) difficult. How to reconstruct the beta phase orientation is therefore the key to studying the effect of the beta phase on the globularization of the lamellar colony in depth. The difficulty in reconstructing the high-temperature parent beta phase lies in determining the correct parent beta orientation among the possible candidates [54][55][56][57] and in the strategy of gathering the variants using a misorientation criterion [54]. According to the precipitation characteristics of the alpha phase and the Burgers orientation relationship, 12 possible alpha variants can form randomly from a parent beta grain; the strategy used to gather the variants belonging to the same parent is therefore very important in the reconstruction of the high-temperature beta phase, as shown in Figure 6. The specific details can be found in the literature [54][55][56][57]. Moreover, the lattice distortion of the beta phase can be weakened by light etching and by eliminating residual stress during the experimental procedure, which enhances the EBSD indexing rate of the beta phase and the secondary alpha phase. The effect of the beta phase on the globularization of the alpha lamellae can thus be studied by means of the high-temperature beta orientation reconstruction described above. Puzzles and challenges faced by globularization control 4.3.1 Unified control strategy of hot deformation and heat treatment The 'sluggish' problem of globularization of the lamellar colony refers to the low globularization efficiency during hot working [31]. Regrettably, most of the techniques used to improve the globularization rate (changing loading paths [83] and beta pre-deformation [84]) have not yet been well accepted in industry because of their high cost and low efficiency.
The evolution of the alpha lamellae, including the precipitation morphology during phase transformation, globularization during hot deformation, and termination migration during heat treatment, and the interactions among these, are very complicated and sensitive. At present, studies on the combined effects of hot deformation and heat treatment processing parameters are few, and the joint control strategy of hot deformation and heat treatment is used only to a limited extent in the globularization of cogged and forged billets and in the manufacture of dual-property disks. Precise control of the multi-step process parameters of the whole route, with coupled effects of multiple fields and multiple factors, is the core of a unified control strategy of hot deformation and heat treatment [1,27]. Such a strategy was employed by Stefansson et al. [40], who concluded that the pre-imposed strain of hot deformation and the annealing temperature have a great effect on the globularization efficiency. Moreover, the globularization efficiency was improved by interrupted multi-pass compression with inter-pass heat treatment in the work of Fan et al. [85]. Consequently, precise control of the whole process with appropriate process parameters is a promising strategy for microstructure control. Quantitative relation of the globularization and macrozone Inhomogeneous globularization of an initially coarse lamellar colony can easily lead to microstructure heredity. The relevant literature [5,6] indicates that many globular alpha grains still retain similar orientations, resulting in macroscopic defects with sizes up to millimeters, called macrozones, shown in Figure 7a. Macrozones have been observed in semi-finished products such as billets as well as in forged disks. They can act as sites of multiple crack initiation and reduce the fatigue performance. At present, research on the effects of microstructure parameters (the initial morphology, distribution, and orientation of the lamellar colony) on macrozones is insufficient, especially regarding the evolution of the local micro-texture during globularization; the quantitative relation between globularization and macrozones is therefore unclear. In the research of Germain [5], the compression tests performed could not track the thermomechanical history of the macrozones in a fully accurate manner. Traditional deformation experiments involving uniaxial compression, tension, or simple torsion are not suitable for in-situ measurement of micro-texture heterogeneity at high temperature and large strain. During hot working, the interrupted in-situ measurement method shown in Figure 7b-c, put forward by Quey [86][87][88], Dancette [89], and Martin [90], can track the evolution history of microstructure parameters such as morphology, distribution, and orientation. Studies of the initial morphology, distribution, and orientation of the lamellar colony and of the micro-texture evolution during hot working can thus be carried out. With the development of analytical testing equipment and computer algorithms, three-dimensional characterization of microstructure has also improved remarkably. For the three-dimensional reconstruction of titanium alloy lamellar colonies, three main methods can be used: serial grinding and polishing [92], shown in Figure 8a; three-dimensional X-ray diffraction [93,94]; and the combination of focused ion beam (FIB) milling with EBSD [95,96].
Up to now, studies using FIB-EBSD on the morphology and orientation of the lamellar colony after globularization have not been reported. Factors such as the morphological features of single alpha lamellae, the distribution relationships of alpha variants, the three-dimensional interwoven distribution, and the deformation compatibility of alpha colonies are of remarkable significance for deepening our understanding of the deformation mechanism of the lamellar colony, the laws of globularization, and the mechanical and service performance. The relationship between lamellar colony and mechanical properties Quantitative research on the microstructure and properties of titanium alloys during hot working provides an important foundation for optimizing the design of titanium alloy microstructures. It is of great significance not only for effectively adjusting and controlling the microstructure and properties of components but also for optimizing hot-working parameters. For titanium alloys with lamellar colonies, the effects of average quantitative indexes (grain size, thickness, and fraction of alpha) on mechanical properties can be predicted with an artificial neural network model [97], shown in Figure 8b, but studies of the effects of microstructure morphology, distribution, or micro-texture on mechanical properties are scarce. Therefore, a computer-based simulation model [98] needs to be developed to study the quantitative relationship between microstructure parameters and mechanical properties. The effect of severe plastic deformation on globularization Severe plastic deformation (SPD) [99,100] means that the material undergoes very large strains, which results in exceptional grain refinement while the geometry of the billet remains unchanged. In equal-channel angular pressing (ECAP) [101], shown in Figure 8c, the free flow of material is constrained by the special die shape. The hydrostatic pressure can produce higher strains and a high density of micro-defects, and after several extrusion passes the grain size is refined by the large plastic deformation. Shear deformation promotes microstructure fracture and the globularization of titanium alloys. The unchanged shape and size make the technique very suitable for the cogging and forging of billets and the pre-forming of forged disks. ECAP can effectively increase the globularization efficiency of the lamellar colony, so SPD may be a promising solution to the 'sluggish' problem of globularization, although research on this topic is still scarce. ECAP techniques may eventually be adopted by the engineering field for the precise control of globularization. (Figure 8: (a) serial grinding and polishing [92]; (b) artificial neural network model of mechanical properties from microstructure information [97]; (c) equal-channel angular pressing [101].) Conclusions This paper summarized research on the globularization of the lamellar colony, mainly covering the globularization mechanisms, the prediction models, and the effects of hot-working parameters and microstructure parameters. The main results are as follows. -In light of the coexistence and advantages of multiple mechanisms, a unified globularization mechanism coupling multiple factors is summarized. -As for the prediction models, a framework for multi-scale CACPFEM modeling of globularization is introduced; the coupled effects between micro-scale microstructure evolution and meso-scale deformation anisotropy can be well captured by this multi-scale CACPFEM.
-A unified control strategy of hot deformation and heat treatment is proposed for the 'sluggish' problem of globularization during hot working. Regarding the formation of macrozones, an interrupted in-situ measurement method can be used to characterize the micro-texture evolution during globularization. Consequently, the results described above are of great significance for an in-depth understanding of the globularization mechanism, for the prediction and precise control of globularization, and for optimizing the hot-working parameters and improving the properties of titanium alloy products.
6,788.6
2020-01-01T00:00:00.000
[ "Materials Science" ]
Effects of baclofen on insular gain anticipation in alcohol-dependent patients — a randomized, placebo-controlled, pharmaco-fMRI pilot trial Rationale One hallmark of addiction is altered neuronal reward processing. In healthy individuals (HC), reduced activity in fronto-striatal regions including the insula has been observed when a reward anticipation task was performed repeatedly. This effect could indicate a desensitization of the neural reward system due to repetition. Here, we investigated this hypothesis in a cohort of patients with alcohol use disorder (AUD) who were treated with baclofen or placebo. Baclofen has been shown to have positive clinical effects in AUD patients, possibly by indirectly affecting structures within the neuronal reward system. Objectives Twenty-eight recently detoxified patients (13 receiving baclofen (BAC), 15 receiving placebo (PLA)) were investigated in a longitudinal, double-blind, randomized pharmaco-fMRI design with an individually adjusted daily dosage of 30-270 mg. Methods Brain responses were captured by functional magnetic resonance imaging (fMRI) during reward anticipation in a slot machine paradigm before (t1) and after 2 weeks of individual high-dose medication (t2). Results Abstinence rates were significantly higher in the BAC than in the PLA group during the 12-week high-dose medication phase. At t1, all patients showed significant bilateral striatal activation. At t2, the BAC group showed a significant decrease in insular activation compared with the PLA group. Conclusions By affecting insular information processing, baclofen might enable a more flexible neuronal adaptation during recurrent reward anticipation, which could resemble the desensitization previously observed in HC. This result strengthens modulation of the reward system as a potential mechanism of action of baclofen. Trial registration Identifier of the main trial (the BACLAD study) at ClinicalTrials.gov: NCT01266655. Supplementary Information The online version contains supplementary material available at 10.1007/s00213-022-06291-6. Introduction A hallmark of the neurobiological basis of addictive disorders is an altered recruitment of the brain's reward system (Adinoff 2004) during the processing of drug-associated as well as non-drug-associated cues in patients with alcohol use disorder (AUD) (Bruguier et al. 2008; Lorenz et al. 2014). Specifically, an increased salience attribution to alcohol-associated cues (Grüsser et al. 2004; Heinz 2002; Berridge 1993, 2000) and a concomitantly decreased salience attribution towards non-alcohol-associated cues (e.g., monetary rewards) have been observed in AUD (Beck et al. 2009; Volkow et al. 2016; Wrase et al. 2007). Moreover, a meta-analysis has described reduced activation of striatal circuits during the presentation of non-drug-associated rewards in patients with addictive disorders compared with healthy individuals (HC) (Luijten et al. 2017). During reward anticipation, an increased dopaminergic firing rate (Di Chiara 1997) is demonstrated in the ventral tegmental area (VTA) (di Volo et al. 2018) prior to reward delivery or outcome (Schultz et al. 1997). Via mesolimbic and mesocortical pathways, the VTA is closely connected with other essential regions of the reward network, including the prefrontal cortex (PFC), ventral striatum (VS), putamen, and insula (Bruguier et al. 2008).
The insula is highly relevant for addictive behavior, as insular lesions have been shown to immediately abolish tobacco dependence (Bechara 2001). The insula is known to be specifically involved in the processing of uncertainty and proprioceptive self-awareness as well as in avoidance behavior, risky decision-making, gambling, and purchasing scenarios (Bruguier et al. 2008; Lorenz et al. 2014; Tsakiris et al. 2007). Structurally, the insula is indirectly connected to striatal areas via projections to medial prefrontal areas (Haber and Knutson 2010). There is evidence for two functional insular sub-units primarily involved in (1) the subjective processing of emotionally relevant content (Singer et al. 2009) and (2) the processing of uncertainty (Huettel et al. 2005; Preuschoff et al. 2008). Both processes are engaged by the slot machine paradigm used here: when performing the task, a potential win (e.g., two congruent cylinders; C1 = C2) represents an emotionally salient and arousing event (Craig 2009), which is often accompanied by a perception of uncertainty during gain anticipation because the outcome of the third and final cylinder is uncertain (C1 = C2 = C3 or ≠ C3) (Lorenz et al. 2014). Past research showed that in HC, playing a slot machine led to high fronto-striatal activation (Knutson et al. 2001; Lorenz et al. 2015a, b; Luijten et al. 2017), whereas playing the slot machine repeatedly was associated with decreased fronto-striatal brain activation (Lorenz et al. 2015a, b). The authors concluded that specifically a reduction in salience and uncertainty at t2 might contribute to a desensitization of the reward system, accompanied by the observed reduction in fronto-striatal activation. In other words, this desensitization of the reward system seems to represent an adaptive mechanism, which might be impaired in AUD. In this study, we focused on the described comparison between gain anticipation trials (two congruent cylinders; C1 = C2) and no-gain anticipation trials (two incongruent cylinders; C1 ≠ C2) (Lorenz et al. 2014) when the task was performed a second time (t2). Regarding pharmacological treatment options for AUD, baclofen, a gamma-aminobutyric acid-B (GABA-B) receptor agonist, has received attention as a potential addition (Agabio et al. 2018) to the currently approved pharmacological interventions (i.e., naltrexone, acamprosate, and disulfiram) (Shen 2018), since the latter have shown only limited effectiveness (Jonas et al. 2014). So far, the effects of baclofen in the treatment of AUD remain controversial. To date, three recent meta-analyses have assessed the efficacy of baclofen, of which two found evidence for significant efficacy in AUD, with a higher percentage of abstinent patients at study end and a longer time to lapse compared with placebo (Pierce et al. 2018; Rose and Jones 2018). In contrast, Bschor and colleagues (2018) found no superiority of baclofen over placebo. However, an expert consortium suggests individual titration in patients with the treatment goal of achieving abstinence and/or reduced drinking (Agabio et al. 2018), taking into account the individual severity and history of the disease (Shen 2018). It is assumed that AUD leads to a downregulation of the GABAergic system and simultaneously to an alteration of fronto-striatal information processing (Beck et al. 2012; Volkow 2002, 2011). Thus baclofen, as a GABAergic agonist, might be able to interfere indirectly with fronto-striatal circuits.
Regarding the neuronal mode of action, a preclinical study by Fadda and colleagues showed that baclofen suppressed the alcohol-induced dopamine release in the rodent nucleus accumbens (Nacc), suggesting an indirect mechanism via inhibitory GABA-B receptors in the VTA and the associated suppression of dopaminergic signaling towards the Nacc or the VS (Fadda et al. 2003). In humans, baclofen affects neural reward processing (Beck et al. 2018; Boehm et al. 2002), although the exact mechanism of action remains to be elucidated (Müller et al. 2015). Studies investigating nicotine dependence observed that baclofen decreased resting-state blood flow in the insula and VS (Franklin et al. 2011, 2012), while in cocaine-dependent patients baclofen diminished activation in response to subliminal cocaine cues in the bilateral VS, ventral pallidum, amygdala, midbrain, and orbitofrontal cortex (OFC), regions known to be involved in motivated behavior (Volkow et al. 2016; Young et al. 2014). These findings support the hypothesis that baclofen acts in similar ways with respect to different drugs of abuse. Additionally, a recent pharmaco-fMRI study in patients with AUD found increased abstinence rates together with high-dose baclofen-induced reductions of alcohol cue-associated brain responses in the OFC, amygdala, and VTA, areas known to be involved in the processing of salient stimuli (Beck et al. 2018). In line with this, another pharmaco-fMRI study showed a decreased alcohol-associated cue-elicited BOLD signal, mostly in frontal areas, i.e., the precentral gyri and ACC, in patients with AUD treated with a daily baclofen dosage of 75 mg compared with placebo-treated patients (Logge et al. 2019). In our current study, we investigated whether baclofen modulates neural activity during rewarding cues in general. The applied slot machine paradigm (Lorenz et al. 2014) enables the assessment of alterations in the fronto-striatal network during the anticipation of monetary (non-alcohol-associated) rewards. In particular, based on the earlier findings in HC (Lorenz et al. 2015a, b), we hypothesized that baclofen decreases ("desensitizes") neuronal activation in the fronto-striatal network, including the insula, in patients with AUD when performing the slot machine task a second time, representing a more flexible adaptation towards reduced emotional arousal or uncertainty. To test whether the observed effects were related to the administration of baclofen, we investigated associations between brain response and baclofen blood serum level in exploratory analyses. Study description The study was a preregistered, randomized, double-blind, placebo-controlled pharmacological trial with 56 AUD patients (ClinicalTrials.gov: NCT01266655; published BACLAD study; Müller et al. 2015), with individual titration up to high-dose baclofen or placebo (range 30 mg/day up to 270 mg/day) for 12 weeks and an embedded functional magnetic resonance imaging (fMRI) part with two scanning sessions: one at baseline, before the start of titration (t1), and a second one after 2 weeks of individual high-dose intake (t2). Placebo capsules contained mannitol (99.5%) and silicon dioxide (0.5%). Patients were titrated from week 1 to 4 until they reached their individual dosage, followed by a 12-week stable individual high-dose phase, and were then tapered in the same way as they were titrated. We here report data of the fMRI sample (n = 28) who participated at t1 and t2.
All patients were recruited during in- and outpatient detoxification treatment and were randomly assigned to one of the two study groups after completion of detoxification. This study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee of Charité-Universitätsmedizin Berlin. The BACLAD study (Müller et al. 2015) was approved by the ethics committee of the state of Berlin and the Federal Institute for Drugs and Medical Devices (BfArM). All patients gave written fully informed consent for participation. Participants We included patients with AUD between 18 and 65 years of age with a weekly minimum of two heavy drinking days (defined as drinks per day: men ≥ 5/day, women ≥ 4/day; 1 standard drink equals 12 g of absolute alcohol). We excluded other psychiatric axis I disorders (SCID interview; Fydrich 1997), gambling disorders, and substance dependences other than nicotine dependence, as well as AUD patients with abstinence durations longer than 21 days or with irremovable ferromagnetic material (for detailed inclusion and exclusion criteria see Beck et al. (2018) and Müller et al. (2015)). No study participant had used baclofen before. Finally, of the 56 patients who participated in the BACLAD trial (Müller et al. 2015), a total of 28 (13 BAC, mean age = 47.54, SD = 0.83; 15 PLA, mean age = 47.0, SD = 0.26; p = .861; see Table 1) were included in our analysis. Blood serum levels of the study medication were assessed in the BAC group at t2. One blood sample was not analyzed owing to missing data at this time point; when the missing value was replaced with the median, the result remained significant. A detailed overview of the study allocation and the reasons for exclusion is given in Fig. 1. After the patients terminated medically supervised detoxification, the groups did not differ significantly in abstinent days prior to study inclusion. All patients were right-handed as confirmed by the Edinburgh Handedness Inventory (Oldfield 1971). Premorbid intellectual capacity was measured by the German vocabulary test "Wortschatztest" (WST) (Metzler and Schmidt 1992). Nicotine dependence was permitted and assessed with the Fagerström test for nicotine dependence (FTND) (Heatherton et al. 1991). For alcohol-related measures, the lifetime total alcohol consumption in kilograms was evaluated with the lifetime drinking history (LDH) interview (Skinner and Sheu 1982). The amount of daily absolute alcohol consumption in grams was recorded for the last 30 days before the baseline scan via the timeline followback (TLFB) (Sobell and Sobell 1992). The severity of illness was assessed with the alcohol dependence scale (ADS) (Skinner and Horn 1984) for the last 12 months. Craving was measured using the obsessive compulsive drinking scale (OCDS-G) (Mann and Ackermann 2000) for the last 7 days before t1 and the last 14 days before t2. Clinical and personality measures included the state-trait anxiety inventory (STAI-state) (Laux et al. 1981), the Beck depression inventory (BDI-II) (Beck et al. 1961), and the Barratt impulsiveness scale (BIS-11) (Patton et al. 1995), each at t1 and t2. Detailed sample characteristics are provided in Table 1. All behavioral, clinical, and sociodemographic data describing the two groups (BAC, PLA) were analyzed using a non-parametric linear model with group as between-subject factor and permutation-based alpha-error (p-value) estimation (using the R package lmPerm, https://cran.r-project.org/web/packages/lmPerm/index.html), with a maximum of
1 million iterations and Ca = 0.001, i.e., the package's early stopping routine, which terminates when the estimated standard error of the estimated p is less than Ca*p). Permutation-based p-value computations are useful because they do not assume a certain distribution of the residuals and hence do not require a test for normal distribution of the residuals, as would be the case for t-tests on each covariate for sample characterization. The mean difference of each covariate (BAC minus PLA) was computed as well as the confidence interval of that difference using bootstrapping with 10,000 sampling repetitions (stratified per group) (using the R package boot and bias-corrected 95% CI computation). Bootstrapping is useful to compute CIs without assuming any distribution of the covariates. Table 1 Differences between groups in demographic and clinical data. M, mean; SD, standard deviation; CI, confidence interval of the effect statistic (95% lower, upper); df, degrees of freedom; p-values were permuted, except: a, exact chi-square test; b, number of abstinent patients; *significant differences; t1, baseline scanning session; t2, second scanning session; number of missing values due to technical error or refusal by the subject to answer, replaced by the median of the respective group, in detail: A) 1 missing case, B) 2 missing cases, C) 3 missing cases; AUD, alcohol use disorder; WST, IQ measured by the German vocabulary test ("Wortschatztest"); handedness measured by the Edinburgh Handedness Inventory; FTND, Fagerström test for nicotine dependence; PY, pack years of nicotine use; LDH, lifetime drinking history questionnaire; ADS, severity of dependence measured by the alcohol dependence scale; OCDS-G, craving measured by the obsessive compulsive drinking scale; BDI-II, depression measured by Beck's depression inventory II; STAI, state anxiety measured by Spielberger's state-trait anxiety inventory; BIS-11, impulsivity measured by the Barratt impulsiveness scale. Abstinence rates were analyzed by using the exact chi-square test, using R software (www.r-project.org). Slot machine paradigm To investigate reward anticipation-related brain responses in the fronto-striatal reward system, we used a virtual slot machine paradigm with three moving cylinders (C1, C2, C3) and two different sets of fruits (cherries and lemons, or melons and bananas) at two time points (t1, t2) (Lorenz et al. 2014, 2015a). Stimuli were presented in a pseudo-randomized manner. The slot machine was programmed using Presentation software (Version 14.9, Neurobehavioral Systems Inc., Albany, CA, USA). The paradigm is a so-called passive reward task since participants have no influence or control regarding the reward outcome, as they would have in performance-driven reward tasks (e.g., monetary incentive delay tasks, MID; Knutson et al. 2000; Richards et al. 2013). Prior to the MR scanning session, patients were familiarized with the task for 5 trials in the scanner. During the experimental phase, patients were asked to play the slot machine 60 times with a wager of 0.10 € per trial; each participant had a total wager amount of 6.00 € before starting the task. The aim in each trial was to get three equal fruit cylinders in a horizontal line (C1 = C2 = C3; win trial), which was rewarded with 0.50 €. In all other cases (early loss, C1 ≠ C2, and late loss, C1 = C2 ≠ C3), patients lost 0.10 € in addition to the wager of 0.10 €. Each trial started with gray color bars indicating the inactive state of the cylinders, which turned blue to notify participants of the start signal.
Participants started the slot machine by pressing the start button with their right index finger. The three cylinders were then accelerated using an exponential profile from the left to the right cylinder. A total of 1.66 s after pressing the start button, the maximum speed was reached and the color bar turned from gray to green, the signal to stop the machine during the next 4 s via the same button press. After pressing the button, the cylinders stopped from left to right. The left (first) cylinder stopped after 0.48 to 0.61 s, the middle cylinder stopped after an additional 0.73 to 1.18 s, and the right (third) cylinder stopped after another 2.63 to 3.24 s. The stop of the right cylinder terminated the trial, and the current win/loss and the total amount of money (minus the 0.10 € wager) were presented above the slot machine. After a variable inter-trial interval (ITI) between 4.0 and 7.43 s, the color bar turned from gray to blue again, indicating a new trial. In this publication, we analyzed the phase of gain anticipation, i.e., the time period between the stop of the second and third cylinder: two equal fruit cylinders (C1 = C2) indicate the possibility of a further win (gain anticipation; GA), while two unequal cylinders (C1 ≠ C2) indicate an upcoming loss (no gain anticipation; noGA). To guarantee an equal number of trials for each of the three conditions (win: C1 = C2 = C3, 20 trials; early loss: C1 ≠ C2 ≠ C3 or C1 ≠ C2 = C3, 20 trials; late loss: C1 = C2 ≠ C3, 20 trials), the outcome probabilities were equalized to p = 0.33. Due to the self-paced nature of the experiment, the total execution time ranged between 12 and 14 min. In this time period, 360 to 420 functional brain images were acquired (see Fig. 2). Image processing Image preprocessing and statistical analyses were conducted using the Statistical Parametric Mapping software package (SPM12, Wellcome Department of Imaging Neuroscience, London, UK, http://www.fil.ion.ucl.ac.uk/spm/software/spm12/). Before preprocessing, DICOM data were converted into NIfTI format via MRIConvert (University of Oregon). Next, imaging data were manually inspected for image artifacts, the origin was set to the anterior commissure, and the images were oriented to roughly match the orientation of the brain template used. Additionally, spatial matching between functional and anatomical scans was checked via MRIcron (www.mricro.com). After these preparatory steps, EPIs were corrected for acquisition time delay as well as head motion, and the MPRAGE was co-registered to the mean EPI, segmented, and transformed into the stereotactic standard space as defined by the ICBM (International Consortium for Brain Mapping) brain atlas implemented in SPM12 (http://www.loni.usc.edu/research/atlases) with the unified segmentation approach developed by Ashburner and Friston (2005). Using linear and nonlinear parameter estimates from this step, EPIs were warped and resampled with a voxel size of 3 × 3 × 3 mm³. Finally, EPIs were spatially smoothed with an isotropic 3D Gaussian kernel of 7 mm full width at half maximum. Modeling of individual brain responses Data were analyzed voxel-wise within the framework of the general linear model (GLM) in a mass-univariate manner. On the single-subject level, we modelled periods of anticipatory brain activity by means of box-car functions with an onset at the stop of the second cylinder and a width equal to the time gap between the second and third cylinder stop.
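These box-car functions are subsequently convolved with a hemodynamic response function, as described next. The following Python sketch illustrates the general idea; it is only an illustration and not the SPM12 code used in the study. The double-gamma HRF parameters approximate SPM's canonical shape, and the onsets, durations, and repetition time (TR = 2 s, consistent with 360-420 volumes acquired in 12-14 min) are assumed values.

import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt=0.1, duration=32.0):
    # Double-gamma HRF roughly mimicking SPM's canonical shape:
    # positive response peaking near 5-6 s, undershoot near 15-16 s, ratio 1/6.
    t = np.arange(0.0, duration, dt)
    h = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0
    return h / h.sum()

def anticipation_regressor(onsets, durations, total_time, dt=0.1, tr=2.0):
    # Box-car: 1 during the gap between the second and third cylinder stop,
    # 0 elsewhere; convolved with the HRF and sampled at the scanner TR.
    t = np.arange(0.0, total_time, dt)
    boxcar = np.zeros_like(t)
    for onset, dur in zip(onsets, durations):
        boxcar[(t >= onset) & (t < onset + dur)] = 1.0
    bold = np.convolve(boxcar, canonical_hrf(dt), mode="full")[: t.size]
    frame_times = np.arange(0.0, total_time, tr)
    return np.interp(frame_times, t, bold)

# Three hypothetical gain-anticipation windows (~3 s each); values are made up.
regressor = anticipation_regressor(onsets=[10.0, 35.0, 60.0],
                                   durations=[3.1, 2.8, 3.3],
                                   total_time=90.0)
print(regressor.shape)  # one value per functional volume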
To model the BOLD responses, these functions were convolved with the canonical hemodynamic response function (hrf) as implemented in SPM12 and served as regressors of interest in the single-subject models. Additionally, regressors of no interest were constructed in the same way for the win, late loss, and early loss. On the single-subject level, the model contained separate regressors for gain anticipation (C1 = C2) and no gain anticipation (C1 ≠ C2) as well as the following regressors of no interest: gain (C1 = C2 = C3), loss (C1 = C2 ≠ C3), early loss (e.g., C1 ≠ C2 = C3), button presses (after the color bar changed to blue as well as green), visual flow (rotation of the wheels), and the six rigid-body movement parameters. In addition, start and stop button presses were modelled by means of stick functions and convolved with the hrf. To account for the different amounts of visual flow, we used the number of cylinders in motion, convolved with the hrf, as a related proxy (Lorenz et al. 2015a, b). Nuisance variance in the voxel time series caused by susceptibility × motion interactions was modelled by means of the six motion parameter estimates derived from motion correction. Finally, to model the mean signal within the session, a constant was added to the GLM. After bandpass filtering and a restricted maximum likelihood fit of the model to the data, two contrasts were calculated (Wiers et al. 2015). Contrast 1: gain anticipation at baseline (t1) ([GA > noGA] before treatment); this includes gain possible (C1 = C2) and no gain possible (C1 ≠ C2). Contrast 2: gain anticipation after 2 weeks of individual high-dose pharmacological treatment minus gain anticipation at baseline [GA > noGA at t2 − GA > noGA at t1]. Statistical group analysis On a group level, t-tests were used to assess gain anticipation before treatment in both groups (effect of task) and between groups (effect of treatment). In detail, for analyzing the effect of task before treatment, we used a one-sample t-test on contrast 1 for the whole sample (n = 28). Only effects passing a statistical threshold of p < 0.05 family-wise error (FWE) whole-brain corrected at the cluster level were considered for report and discussion and were specified in MNI coordinates in the x, y, and z dimensions. We additionally assessed potential group differences using a two-sample t-test. For analyzing the effect of treatment, we conducted a two-sample t-test using contrast 2 to calculate differences in gain anticipation between groups before and after treatment: [(GA > noGA)(t2−t1) PLA] > [(GA > noGA)(t2−t1) BAC] (Wiers et al. 2015). Only effects passing a statistical threshold of p < 0.001 family-wise error (FWE) corrected at the cluster level were considered for report and discussion. Due to higher anxiety scores (STAI-state) and higher amounts of alcohol consumed over a lifetime (LDH) prior to treatment in patients receiving baclofen, the robustness of the fMRI results was tested by introducing those variables as covariates of no interest into the model (Miller and Chapman 2001). Since our results remained robust after adjusting for both potential confounding variables, we followed authors who recommended not to co-vary baseline group differences (de Boer et al. 2015; Miller and Chapman 2001).
De Boer and colleagues consider the adjustment of significant baseline differences "to be inappropriate and erroneous because it might ignore the fact that the prognostic strength of a variable is important even when there is an interest in adjusting to confounding effects" (de Boer et al. 2015). Nevertheless, we calculated the results with the covariates STAI-state and LDH (see the "Imaging results" section) but only discuss the non-covariate data in detail. Imaging results were anatomically identified using the SPM-implemented brain atlas "Neuromorphometrics". In order to achieve a more precise result, the brain structure of the red nucleus was identified via the Talairach Client and displayed in Fig. 3 using Surf Ice software (www.nitrc.org/projects/surfice). Violin plots were displayed using Matlab 2014b (www.mathworks.com). Correlation between insula response and baclofen blood serum level We further extracted the first Eigenvariate within a sphere of 4 mm radius centered around the peak voxel (MNI x = 39, y = 17, z = 8) of the most pronounced treatment effect (two-sample t-test between BAC and PLA based on contrast 2, [(GA > noGA)(t2−t1) PLA] > [(GA > noGA)(t2−t1) BAC]) in the right anterior insula (see the "Results" section below). This Eigenvariate is the vector of beta coefficients of the t2−t1 contrast at the peak voxel and thus reflects the post-pre difference in brain response, a measure of the desensitization effect as previously described (Lorenz et al. 2015a, b) during repeated slot machine gaming in the fronto-striatal network. To test whether this desensitization is influenced by the baclofen treatment, we explored its correlation with baclofen blood serum levels in the baclofen group using R software (www.r-project.org), with the serum level as an additional biological marker. Serum levels of baclofen were assessed 2 weeks after reaching the individual high dose using liquid chromatography/mass spectrometry. We had a one-sided hypothesis (the higher the serum level, the higher the desensitization) and hence computed a one-sided p-value by bootstrapping the correlation coefficient r (using R = 10,000 repetitions) and checking in how many cases n the bootstrapped r was ≤ 0, hence p(r ≤ 0) = n(r ≤ 0)/R; a minimal sketch of this procedure is given below. Fig. 3 The effect of treatment and its correlation with baclofen blood serum level. A) Effect of task at t1: brain responses of contrast 1 [gain possible (GA) > no gain possible (noGA)] passing a statistical threshold of pFWE(cluster) < .05; all patients (n = 28) showed significant fronto-striatal activation in bilateral putamen and thalamus, bilateral precentral gyrus, and right anterior insula. B) Effect of treatment at t2: treatment-specific differences in gain-anticipation-associated brain responses after 2 weeks of individual high-dose baclofen or placebo treatment, [(GA > noGA)(t2−t1) PLA] > [(GA > noGA)(t2−t1) BAC], at pFWE(cluster) < .001; the most pronounced treatment effect was observed in the right anterior insula (peak voxel: MNI x, y, z = 39, 17, 8). C) Violin plots of the first Eigenvariate derived from the right anterior insular peak voxel show the individual desensitization scores (post-pre differences of insular Eigenvariate values); the correlation between desensitization and the individual baclofen blood serum level is displayed in the scatterplot (with 95% confidence interval) below. There were no significant baseline group differences between the BAC and PLA groups.
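As a minimal sketch of the bootstrapped one-sided p-value described in the exploratory correlation analysis above, the following Python code mirrors the stated procedure (resample subjects with replacement, recompute r, and count how often r ≤ 0). It is a stand-in for the R analysis; the serum and desensitization values are invented for demonstration only.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_one_sided_p(x, y, n_boot=10_000, rng=rng):
    # One-sided p-value for a positive correlation: p(r <= 0) = n(r <= 0) / R.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    r_boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resample subjects with replacement
        r_boot[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    r_obs = np.corrcoef(x, y)[0, 1]
    return r_obs, np.mean(r_boot <= 0.0)

# Invented example: serum level vs. insula desensitization (t1 minus t2 Eigenvariate).
serum = rng.uniform(50, 600, size=12)
desensitization = 0.004 * serum + rng.normal(0.0, 0.5, size=12)
r, p = bootstrap_one_sided_p(serum, desensitization)
print(f"r = {r:.2f}, one-sided bootstrap p = {p:.4f}")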
Effect of treatment and correlation of insula response and baclofen blood serum level At t2, the BAC group showed a significantly decreased activation in the right anterior insula compared to the PLA group (T(28) = 5.30, MNI x = 39, y = 17, z = 8, pFWE(cluster) = 0.013), which can be interpreted as an indication of desensitization. This post-pre difference of insular Eigenvariate values correlated significantly with the baclofen blood serum level in the BAC group. In other words, the lower the functional insula activation, the higher the baclofen level in the blood serum. Additionally, the mean dosage of baclofen (mg/d) and baclofen blood serum levels were significantly correlated (r = 0.557, p bootstrapped (one-sided) = 0.011). When including the covariates STAI-state (t2−t1) and LDH from t1, the results remained significant (T(28) = 5.15, MNI x = 33, y = 17, z = 5, pFWE(cluster) = 0.005). The effect of treatment and its correlation with baclofen blood serum level is provided in Fig. 3. Discussion The main finding of our study is a decrease in insula activation following baclofen versus placebo medication in a slot machine paradigm among patients with alcohol use disorder. Prior to study inclusion, all patients showed high ADS and OCDS scores, high chronic alcohol consumption (LDH), and a high number of detoxifications, representing a severely affected sample of AUD patients. The BAC group showed even higher lifetime alcohol consumption (LDH) than the PLA group. In this context, it is important to note that Pierce and colleagues (2018) showed that those "ailing," heavily drinking patients may specifically benefit from high-dose baclofen (> 60 mg/d), which held true for our present study. Table 2 Effect of task: brain regions. R, right hemisphere; L, left hemisphere; k, cluster size; x, y, z, MNI (Montreal Neurological Institute) space; t, t-value; p, p-value at a statistical threshold of p < .05 family-wise error (FWE) whole-brain corrected at the cluster level, reported via the SPM-implemented brain atlas "Neuromorphometrics"; A) reported via the Talairach Client. On a clinical level, we observed significantly higher abstinence rates in the BAC compared to the PLA group, i.e., a higher percentage of abstinent patients as well as longer abstinence durations during the 12-week high-dose phase, which has also been observed in the larger sample from our pharmacological trial (Müller et al. 2015). These results support the findings of the first studies by Addolorato and colleagues and also recent findings by de Beaurepaire and colleagues, who supported baclofen as a promising treatment approach, especially for patients with moderate to severe alcohol use disorders (Addolorato et al. 2002; Colombo et al. 2004; de Beaurepaire et al. 2019). A recently published study also shows the superiority of baclofen compared to placebo administration while observing a sex × dose interaction effect, with men having a greater benefit from high-dose and women from low-dose administration (Garbutt et al. 2021). As mentioned initially, there are mixed results, and some of the studies showed no superior outcomes for baclofen (e.g., Colombo et al. 2004; Garbutt et al. 2010). However, it should be taken into account that all studies varied regarding study designs, focuses, doses of baclofen, and severity of the included patients, which makes comparability difficult.
On the neurobiological level, the slot machine paradigm evoked robust brain responses during gain anticipation at baseline (t1) in bilateral striatal areas (especially in the putamen and NAcc), thalamus, and insula, areas known to be involved in reward processing (Haber and Knutson 2010; Liu et al. 2011; Lorenz et al. 2015a, b). As an effect of treatment, we observed a significant decrease of insular activation at t2 (re-test condition) in the BAC but not in the PLA group during gain anticipation. Diminished insula function has previously been shown to drastically reduce addictive behavior (Bechara 2001). Here, this reduced activation was observed during the performance of a slot machine task. A reduction in fronto-striatal activation during recurrent reward anticipation in a slot machine task has been previously observed in HC (Lorenz et al. 2015a, b), a typical re-test characteristic of HC that the authors interpreted as a neurobiological correlate of reduced uncertainty and/or arousal at repetition. This finding could be interpreted in the framework of the incentive salience theory (Robinson and Berridge 2000): repetition of the slot machine might decrease its relevance, i.e., its salience. However, it might also reflect a possible lack of motivation at the re-test condition (Shao et al. 2013) and missing novelty once the course of the task is known. In this context, it has been shown that even one training session before the scanning procedure reduced the striatal BOLD signal within the reward circuit, e.g., in the ventral striatum and caudate nucleus, in HC (Shao et al. 2013). Interestingly, in the current study, only patients treated with baclofen showed a significant desensitization effect (Lorenz et al. 2015a, b). This strengthens the interpretation that baclofen as a pharmacological aid might help to "normalize" those brain processes, enabling AUD patients to flexibly adapt their reward anticipation and thus improve their adjustment during changing reinforcement contingencies (e.g., desensitization processes). Two other pharmaco-fMRI studies conducting perfusion fMRI during either acute baclofen administration (20 mg) or a daily dosage of baclofen (80 mg) in smokers also observed a blood flow reduction in the insula (and VS) in the treatment group, confirming baclofen's effects on the reward circuitry (Franklin et al. 2011, 2012). This is further supported by our own previous study (Beck et al. 2018) and the neuroimaging findings of Logge and colleagues, who observed reduced alcohol-related cue reactivity using fMRI when applying 75 mg baclofen per day (Logge et al. 2019). Of note, this reduction in fronto-striatal brain activity was not observed in the low-dose-treated group (30 mg baclofen). Thus, these and our present results indicate that higher dosages of baclofen might have a significant beneficial effect on reward processing in AUD patients. In accordance with our findings, Naqvi and colleagues (2010) suggested that "the insula is a critical neuronal substrate" in nicotine dependence and found in this regard that smokers with insular brain lesions were able to quit smoking more easily than smokers with no structural damage in this region. In the exploratory analyses, we observed a correlation between the degree of desensitization and the baclofen blood serum level. A stronger decrease in insula activation from t1 to t2 was associated with higher levels of baclofen in the blood serum.
This finding further corroborates the assumption that baclofen is involved in the modification of brain reward processes, especially in insular regions, putatively by affecting dopaminergic neurotransmission. Thus, one mechanism of action of baclofen in the treatment of AUD might be a GABAergic modulation of the dopaminergic neurotransmission within the mesolimbic reward system (Beck et al. 2018; Fadda et al. 2003). However, future (preclinical and human) studies are warranted to investigate in depth the specific modes of action of baclofen in AUD. Although our results extend the knowledge about the effects of baclofen in the treatment of AUD, some limitations of the present study need to be considered. First, the small sample size limits the statistical power of our analyses, and the robustness of our results needs to be proven in replication studies. Although we observed statistically relevant differences between baclofen and placebo treatment and their respective neurobiological correlates in a longitudinal design, this observation requires replication in a larger sample. Since the sample included here is a subsample of Müller et al. (a preregistered clinical trial), we were not able to further increase the sample size (Müller et al. 2015). Secondly, our study design with individual high-dose baclofen differed from some other RCTs (except Beraha et al. 2016; Müller et al. 2015; Reynaud et al. 2017), which used fixed categories of dosages and mostly lower dosages (below 60 mg/day) (for review, see de Beaurepaire (2018) and Pierce et al. (2018)), thus reducing comparability. Furthermore, it would be recommended that future studies examine the effect of participants' assumptions regarding the study medication taken (BAC vs. PLA) in order to address expectancy effects. Thirdly, there were unspecific group differences regarding lifetime consumption of alcohol and anxiety at baseline (t1). However, there was no effect on the results when covarying for those variables. Regarding anxiety, some studies showed that baclofen has anxiolytic effects, which could have been a significant mediator in our sample. Although we did not see anxiety-specific effects (neither in the assessment via questionnaire nor as a covariate in the imaging analysis), future studies focusing on different (baseline) levels of anxiety are desirable to elucidate the current inconsistency in findings. Fourthly, although the slot machine task used here is an ecologically valid passive reward task known to evoke robust activations of the so-called reward system (Dreher et al. 2008; Lorenz et al. 2014, 2015a), it is strongly recommended that future research replicate our findings with performance-driven reward tasks such as monetary incentive delay tasks. Lastly, due to the individual titration of the study medication, not all BAC patients reached the maximum dosage of 270 mg/d. Nevertheless, in flexible-dosage studies like ours, it is common that the placebo group is more likely to reach the maximum dosage (e.g., Reynaud et al. 2017: 88%; Müller et al. 2015: …9% of patients) compared to the intervention group (Reynaud et al. 2017: 67%; Müller et al. 2015: 35.7% of patients), most likely due to the lack of effect of the placebo medication and a consequent tendency to increase the placebo dosage.
Taken together, the present neuroimaging results contribute to the understanding of the underlying neurobiological mechanisms of (individually titrated) baclofen treatment, in particular within the domain of AUD-related altered reward anticipation processing. Our results indicated that baclofen might desensitize right insula activation during recurrent reward processing in AUD patients after 2 weeks of individual high-dose baclofen treatment as well as increase the abstinence rate. Baclofen might thus enable a more flexible adaptation of neuronal activation in terms of repetition effects, similar to HC, in accordance with the key role attributed to insula function in addiction (Droutman et al. 2015). Furthermore, the observed correlations between baclofen levels in the blood and desensitization of the right insula support dose-related effects of baclofen on the (dopaminergic) reward circuitry (Boehm et al. 2002). Data Availability The participants of this study did not give written consent for their data to be shared publicly, so, due to the sensitive nature of the research, supporting data are not available. Declarations Ethics approval and consent to participate This substudy is subordinate to the main (BACLAD) study, a preregistered, randomized, double-blind, placebo-controlled pharmacological trial (clinicaltrials.gov: NCT01266655; published in Müller et al. 2015), which was approved by the Ethics Committee of the Charité-Universitätsmedizin Berlin and the Federal Institute for Drugs and Medical Devices (BfArM) and was conducted in accordance with the latest version of the Declaration of Helsinki. All patients gave written, fully informed consent for participation and were included in this fMRI substudy after randomization for the main trial. Conflict of interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,709.4
2022-12-20T00:00:00.000
[ "Psychology", "Biology", "Medicine" ]
Protecting the Coastline from the Effects of Climate Change: Adaptive Design for the Coastal Areas of Gangneung, Korea The purpose of this research is to present design strategies to enable coastal areas to adapt to climate change and maintain the coastlines by addressing the environmental and urban issues. Gangneung is a tourist attraction situated on South Korea’s east coast, and there is an urgent need for integrated research on strategies to prevent the loss of sandy beaches and the damage caused by storm surges and high swells. This research has two objectives: The first is to offer an overview and describe the characteristics of exemplary projects carried out to manage storm damage while maintaining the coastlines. The second is to propose a design model that can be applied to coastal areas susceptible to climate change by analyzing the design strategies and the current conditions of the Gangneung coastal area. In the case of Gangneung, the damage caused by storm surges and high swells is more severe than the inundation caused by sea level rise because of the steep slope and deep water. Therefore, the adaptive design strategies focus mainly on accommodation and retreat strategies that take these characteristics into account: moving the coastal roads behind the pine forest and raising the coastal buildings to connect the coast to the forest and to prevent coastal erosion. This research has the potential to be used as an exemplary design adaptation for coastal erosion as well as a basis for regulating land use policy in areas susceptible to flooding, by establishing guidelines for publicly funded developments and preparing long-term relocation plans for the existing coastal developments to create a sustainable and resilient future for the coastal areas. Introduction The need for design strategies in coastal areas to adapt to climate change has become a worldwide priority [1] [2]. Damage caused by flooding and rising sea levels has been directly seen in coastal cities and regions, threatening the sustainability of cities, major infrastructure, and coastal wetlands [3] [4]. South Korea has actively implemented development plans for coastal areas, particularly as the Korean population is accustomed to living near the coast, in keeping with the traditional preference for settlements backed by mountains and facing the sea [5]. The majority of the country's major infrastructure, including ports, airports, roads, and power plants, is located in coastal areas, and coastal cities with high populations have been created on reclaimed lands and landfills. This trend in coastal development continues today, and large-scale plans in coastal areas are still being implemented despite the threats from climate change and rising sea levels. The rate of sea level rise in South Korea is higher than the global average; hence, nationwide integrated research on ways to adapt to this change in coastal areas is urgently required [6] [7]. While previous research has focused on the categorization of the methods and planning strategies of adaptation to rising sea levels for coastal cities [8], the purpose of this research is to present design strategies to enable the coastal areas to adapt to climate change and maintain the coastlines by addressing the current environmental and urban issues.
The research site is Gangneung, a tourist attraction situated on South Korea's east coast, and there is an urgent need for integrated research on strategies to prevent the loss of sandy beaches and damage from storm surges and high swells. This research has two objectives: The first is to offer an overview and describe the characteristics of exemplary projects carried out to manage storm damage while maintaining the shorelines. The second is to propose design models that can be applied to the coastal areas of Gangneung that are susceptible to climate change by analyzing the adaptation strategies and the site conditions. Review of Previous Research and Projects Previous research categorized the adaptation strategies responding to rising sea levels with axonometric diagrams, including the representative systems and site advantages, as shown in Table 1 [8]. Among them, the attack strategies are not applicable to the east coast owing to the steep slope and deep water. Therefore, hard/soft protection, accommodation, and retreat strategies are predominantly used as adaptation strategies for the coastal areas of Gangneung. While Table 1 provides an overview of the applicable adaptation strategies [8], Table 2 shows the characteristics of exemplary coastal projects designed to cope with storm damage while protecting the coastal areas and maintaining the shorelines. For the town of Cleveleys in Lancashire, United Kingdom, a new sea wall was designed to address the visual unattractiveness of the existing sea wall and to prevent the division between the town and its coast caused by the previous wall. The newly created sea wall was designed to provide a high-quality coastal space that would also protect the town from storms and floods, and furthermore, it became a new tourist attraction contributing to the local economy [9]. The project includes street furnishings, a pavilion, a public art space, a lighting system, and an exhibition space, and has provided an improved public space offering people both accessibility and views of the seaside through promenades and shelters. Wahyun Village of Geoje, Korea, suffered significant damage from Typhoon Maemi, to the extent that the majority of houses on the coast were destroyed. Owing to government support, widespread public encouragement, and local residents' participation, the relocation project to move the old village toward uplands away from the seaside was completed in 4 years [10]. The houses on the coast were newly constructed, and a 500-m-long and 35-m-wide sandy beach was restored. Many coastal houses in New York and New Jersey were damaged by Hurricane Sandy in 2012. The adaptive action taken to prevent flood damage and preserve the coastal communities involved lifting the buildings onto pier structures. This remedy enables the residents to obtain an elevation certificate in order to mitigate future insurance claims for flood damage [11]. Lifting the buildings above the hazard level can be considered a temporary solution to coastal vulnerability when the aim is to keep the coastal communities in place before eventually relocating them. These exemplary projects were implemented in response to storm damage, and they provide relevant ideas for adaptive coastal design. The Cleveleys sea wall utilized the landform as a protective buffer, minimizing the division between the town and the coast by improving the scenery and creating interrelated public spaces.
Wahyun Village was relocated from its highly vulnerable location on the coast to the low-risk uplands. Projects that involved lifting the houses upgraded the low-elevation houses along the coast to withstand storm surges and temporary inundation. Through these case studies, this research focused on three adaptive design strategies for the coastal area (soft protection, retreat, and accommodation) and investigated the possibility of defensive coastal designs considering the current storm damage as well as future climate change. Research Site Analysis The east coast of Korea runs from the inlet of the Tumen River to Busan Quays, which is 1723 km long, and the section belonging to Gangwon Province is 318.3 km long, approximately 18.4% of the total length [12]. The total population of Gangwon Province was approximately 1,150,000 as of 2010, and approximately 46% of the population, over 530,000 people, was living in the Yeongdong region [13]. While multiple scenarios of rising sea levels are under investigation, this research took its core data from the inundation map created by the Korea Environment Institute (KEI) in 2012, which considers storm surges based on the prediction that sea levels will rise by 1.36 m by 2100. The land cover map used in this research was produced in 2007 and modified in 2009. The medium-classification land cover map developed by the Ministry of Environment and a digital map based on the ITRF 2000 ASTER DEM map were also used. Coastal Land Uses There are a number of sandy beaches and cities along the ports throughout the coast of Gangwon Province, as shown in Figure 1. Many port facilities were developed and expanded for fisheries and sea transportation, and coastal structures such as breakwaters and sea walls were constructed. The roads were built along the entire shoreline to accommodate local travel and to offer views of the coastal scenery. The Coastal Flood and Erosion Over the 43 years from 1964 to 2006, wave heights on the east coast have reached 7 m, and the sea level of the east coast has risen by 22 cm. However, sea walls are only about 4 to 5 m high in the majority of ports on the east coast, as they were built based on the wave heights used when the design standards for ports and coastal structures were set in 1988. Thus, damage from flooding and erosion is increasing, since overtopping occurs even during storms or heavy rain, let alone during powerful typhoons [14]. Therefore, the climate change adaptation strategy in Gangwon Province was developed focusing on areas susceptible to erosion rather than on inundation from sea level rise, because higher and more powerful waves are anticipated with climate change. In contrast to the southwest coast, where reclamation projects have been implemented relying on the rias shoreline and the widespread tidelands, the shoreline of the east coast is well defined and the water is deep, while the movement of sand caused by waves dominates beach development. Because the slope between the land and the ocean is steep, the seaside on the east coast mainly consists of sand and rock cliffs, and tidal mudflats have not developed. As indicated by these characteristics of the east coast, coastal erosion due to high waves is a major problem in the coastal areas of Gangwon Province, and 18 out of 34 sandy beaches on the east coast are severely eroded, as shown in Figure 3 [14].
Although the amount of sand influx should remain constant in the coastal areas through the natural inflow of soil and sand from the adjacent mountainous areas, this is not the case because residential and commercial areas as well as shoreline roads are located throughout the east coast. This means that erosion is increasing throughout the coastal region, and artificial coastal projects and coastal structure designs are still being implemented. However, coastal structures are also continuously eroded by waves, which can accelerate the coastal erosion by blocking the natural flow of soil and sand. Hence, Gangwon Province needs to develop design solutions to respond to the erosion problem and protect its coastline. Adaptive Design Strategies To analyze the erosion in the coastal areas of Gangneung, the largest coastal city in the Yeongdong region of Gangwon Province, two coastal areas with accelerating erosion problems were selected as research sites. The proposed design focuses on moving the shoreline roads as well as the residential and commercial areas adjacent to the coast inland, in order to prevent coastal erosion by facilitating the natural influx of soil and sand. The systematic adaptation plans for each area were created to reveal the topographical relationships and material movements of each area. The mean sea level of the section diagram was set according to the mean sea level in 2009, and the expected sea level was predicted by taking into account a sea level rise of 1.36 m along with storm surges. Youngjin Village Area Youngjin is a small village in the northern part of Gangneung; approximately 60 households are engaged in the fishery business utilizing the nearby Youngjin port. To protect the port from storm surges, lengthy sea walls, sand groins, and lighters' wharves were recently built in this port. Since the construction of these protective structures, tidal currents and sand flows have changed the sedimentation patterns, blocking the port entrance. The beaches at Youngjin and Yeongok are located to the north and south of the port, respectively (Figure 4). The white sand of Youngjin beach is 600 m in length and 1300 m² in area. Although its size is small, the beach is consistently popular with tourists because it has ecological and leisure programs as well as a campsite [15]. Yeongok, a small-sized beach, is 700 m in length, 50 m in width, and approximately 35,000 m² in area. On the north side of the beach, there is a wide pine forest behind the beach. Parking areas and shoreline roads are adjacent to these beaches, and commercial facilities are located along the shoreline roads. As this aggravates the impact of waves, the coast has been eroded, revealing the substructure of the shoreline roads. These roads block the influx of soil and sand from the pine forest located behind the beach. The main design strategy for this area is to reconnect the pine forest and the beach to allow the circulation of sand, as shown in Figure 5. The pine forest is not only an ecological relief area that provides shade for beach tourists during summer, but it also facilitates the natural influx of soil and sand. When wave heights increase owing to rising sea levels, the forest will act as a natural buffer. To connect the pine forest and the beach and alleviate coastal erosion, the proposal shown in Figure 5 indicates that the shoreline roads and the adjacent commercial areas are relocated to the hinterland of the pine forest.
Geumjin Village Area Geumjin Beach is 900 m in length, 50 m in width, and 45,000 m² in area, and can accommodate 22,500 people. A shoreline road on the sea wall was developed along the beach, and on the other side of the shoreline road is a small fishing town with a mountain behind it (Figure 6). The beach is located along the coastal terrace, and it has a steep slope towards the ocean. Recently, swell-like waves have occurred frequently on the east coast, exacerbating the erosion of the beach along the sea wall. Geumjin and Okgye are the major ports located on the north and south sides of the beach. Geumjin port is used by residents whose livelihood is fishing, and Okgye port is a trading port that greatly contributes to the local industries. The proposed design involves raising the coastal buildings to circulate water and sand. When the shoreline roads are relocated behind the raised buildings, they can be directly accessed from the building level because of the steep topography of the east coast. At the front of the coastal buildings, boardwalk promenades can facilitate pedestrian access for residents and visitors while allowing the influx of water and sand and preserving the coastal scenery, as shown in Figure 7. Conclusions and Discussion This research analyzed the physical and environmental characteristics of Gangneung's coastal areas, situated on the east coast of Gangwon Province in Korea. Gangneung is susceptible to the effects of climate change, and the erosion damage from storm surges and high swells is expected to be a more serious problem than inundation because of the topographical and oceanographic characteristics of the east coast. Therefore, the adaptive design strategies are mainly focused on accommodation and retreat strategies in order to maintain the coastlines. The main aims of this research are to review the adaptation strategies in response to climate change through case studies and apply the findings to new research cases, and to analyze the research areas that are susceptible to climate change and propose adaptive design strategies through diagrams and three-dimensional representations. Youngjin and Geumjin Villages in the coastal areas of Gangneung were selected as research sites to experiment with adaptive design strategies in response to climate change and rising sea levels. The design strategies for adaptation are as follows: 1) for both areas, the shoreline roads adjacent to the coast are to be moved behind the pine forest to connect the forest to the beach; and 2) the coastal buildings are to be raised in order to prevent coastal erosion by facilitating the natural influx of soil and sand. This research has considerable potential to be used as an exemplary design adaptation to climate change for coastal areas. The research attempts to provide a basis for land use policy in areas susceptible to erosion and flooding, to establish guidelines for publicly funded developments, and to prepare long-term relocation plans for existing coastal developments in order to create a sustainable and resilient future for coastal cities.
3,798.2
2015-04-09T00:00:00.000
[ "Environmental Science", "Engineering" ]
The Source, Significance, and Magnetospheric Impact of Periodic Density Structures Within Stream Interaction Regions We present several examples of magnetospheric ultralow frequency pulsations associated with stream interaction regions and demonstrate that the observed magnetospheric pulsations were also present in the solar wind number density. The distance of the solar wind monitor ranged from just upstream of Earth's bow shock to 261 R_E, with a propagation time delay of up to 90 min. The number density oscillations far upstream of Earth are offset from similar oscillations observed within the magnetosphere by the advection timescale, suggesting that the periodic dynamic pressure enhancements were time-stationary structures, passively advecting with the ambient solar wind. The density structures are larger than Earth's magnetosphere and slowly altered the dynamic pressure enveloping Earth, leading to a quasi-static and globally coherent "forced breathing" of Earth's dayside magnetospheric cavity. The impact of these periodic solar wind density structures was observed in both magnetospheric magnetic field and energetic particle data. We further show that the structures were initially smaller-amplitude, spatially larger structures in the upstream slow solar wind and that the higher-speed wind compressed and amplified these preexisting structures, leading to a series of quasiperiodic density structures with periods typically near 20 min. Similar periodic density structures have been observed previously at L1 and in remote images, but never in the context of solar wind shocks and discontinuities. The existence of periodic density structures within stream interaction regions may play an important role in magnetospheric particle acceleration, loss, and transport, particularly for outer zone electrons that are highly responsive to ultralow frequency wave activity. Introduction Global magnetospheric ultralow frequency (ULF) pulsations can arise from a variety of sources internal to the magnetosphere, such as locally stimulated field line resonances and wave-particle interactions (Hasegawa, 1969; Hughes et al., 1978; Southwood et al., 1969). The solar wind is also known to be a source of oscillations, such as through the generation of Kelvin-Helmholtz waves on the magnetopause that couple to field line resonances (FLRs) (Agapitov et al., 2009; Chen & Hasegawa, 1974; Fujita et al., 1996; Southwood, 1974; Walker, 1981) or via ion-foreshock-generated Pc3 waves (Gary, 1981; Odera, 1986). Sudden increases in the solar wind dynamic pressure, such as those associated with solar wind shocks, are also associated with magnetospheric ULF waves. Generally, any sudden increase in solar wind dynamic pressure can act as a source of broadband compressional power that can couple to magnetospheric eigenmodes. Fast mode oscillations are trapped between gradients in the Alfvén velocity, which can occur at the equatorial ionosphere, the plasmapause, and the magnetopause. Cavity mode oscillations can then couple to FLRs where the FLR eigenfrequency matches the cavity mode frequency (Kivelson & Southwood, 1985, 1986; Walker, 1998). The expected signature of a cavity mode oscillation is a damped magnetic field oscillation, as the cavity oscillations lose energy through FLR coupling, ionospheric dissipation, and leakage through the boundaries.
Although the cavity mode has been studied extensively through theory (e.g., Claudepierre et al., 2009; Lee & Lysak, 1989), observations of cavity mode signatures are rare (Goldstein et al., 1999; Hartinger et al., 2012). The dayside magnetosphere must balance the solar wind dynamic pressure, such that B_m² ∼ ρ_sw V_sw², and variations in the solar wind dynamic pressure can therefore also produce magnetospheric ULF wave activity (Kessel, 2008; Takahashi & Ukhorskiy, 2007). As the solar wind dynamic pressure varies, the location of the dayside magnetopause, and hence the strength of the dayside magnetospheric field, varies in direct response. While the general control of the dayside magnetospheric field strength and location by the solar wind dynamic pressure is expected on pressure balance arguments, work over the last decade has shown that variations in the upstream solar wind number density are sometimes highly periodic, typically in the Pc5 range (T = 3-10 min) and longer (Kepko & Spence, 2003; Kepko et al., 2002; Korotova & Sibeck, 1995; Sarafopoulos, 1995; Viall, Kepko, & Spence, 2009). In turn, these periodic dynamic pressure variations drive magnetospheric oscillations at the same period, in what can be termed a quasi-static forced breathing of the magnetosphere. It is important to note that these periodic density structures in the solar wind are not propagating waves. Rather, they are coherent mesoscale structures created very early in the formation of the solar wind, which then advect outward in quasi pressure balance (DeForest et al., 2018; Di Matteo et al., 2019; Kepko et al., 2016; Viall & Vourlidas, 2015; Viall, Spence, & Kasper, 2009; Viall, Spence, Vourlidas, & Howard, 2010). A large-scale statistical study comparing discrete frequencies observed in the solar wind number density with those observed in the magnetosphere by GOES found that ∼50% of the time, discrete magnetospheric oscillations observed by GOES between f = 0.5 and 5.0 mHz were driven directly by solar wind periodic density structures, and it demonstrated a persistent set of observed frequencies, near f = 1.0, 1.5, 1.9, 2.8, 3.3, and 4.4 mHz. In a separate study, Viall et al. (2008) showed that the observations were better organized by the radial length scale of the periodic density structures, rather than apparent frequency (which depends on the velocity of the solar wind). This growing body of work has established that periodic density structures in the solar wind are a significant source of discrete, magnetospheric pulsations, particularly at f < ∼4 mHz (Hartinger et al., 2014), and that these solar wind structures occur more frequently at certain scale sizes (and apparent frequencies) than others. More recently, Viall and Vourlidas (2015) and Viall et al. (2010) used data from the Solar Terrestrial Relations Observatory/Sun-Earth Connection Coronal and Heliospheric Investigation suite to observe periodic density structures leaving the Sun. They demonstrated that the periodic density structures were formed below at least 2.5 solar radii, the inner edge of the COR2 instrument field of view. These structures had periods of ∼90 min (f ∼ 0.2 mHz), were observed to accelerate through the sonic and Alfvénic transition points up to slow wind speeds, and occurred near coronal streamers. These mesoscale structures that were injected into the solar wind at the Sun clearly survive to 1 AU, where they impact Earth several days later (Kepko & Spence, 2003; Kepko et al., 2016).
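Because these structures advect with the wind rather than propagate as waves, the frequency seen by a fixed spacecraft follows from the structure's radial size and the wind speed, f = V_sw/L. The short Python sketch below illustrates this conversion; the numbers are illustrative only and are not drawn from the events in this paper.

import numpy as np

R_E = 6371.0  # Earth radius in km

def apparent_frequency_mhz(length_km, v_sw_kms):
    # Frequency seen by a fixed observer as a time-stationary structure of
    # radial size L advects past at speed V_sw (f = V_sw / L).
    return v_sw_kms / length_km * 1e3

def radial_length_km(freq_mhz, v_sw_kms):
    return v_sw_kms / (freq_mhz * 1e-3)

# A 1-mHz oscillation in 400 km/s wind corresponds to L ~ 4e5 km (~63 R_E),
# i.e., a structure larger than the dayside magnetosphere.
for f in (0.5, 1.0, 2.0, 4.4):
    L = radial_length_km(f, 400.0)
    print(f"f = {f:.1f} mHz -> L = {L:.0f} km = {L / R_E:.0f} R_E")

# The same structure seen in slower and faster wind produces different frequencies:
for v in (300.0, 600.0):
    print(f"V_sw = {v:.0f} km/s -> f = {apparent_frequency_mhz(4.0e5, v):.2f} mHz")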
Because mesoscale structures are often periodic (Viall et al., 2008; Viall, Kepko, and Spence, 2009), the geospace effects go well beyond stochastic buffeting of Earth's magnetosphere. Instead, periodic solar wind density structures directly drive globally coherent, magnetospheric ULF oscillations. ULF waves are an important driver of magnetospheric particle dynamics, particularly for electrons in the outer radiation belts (Elkington & Sarris, 2016; Mathie & Mann, 2000, 2001; O'Brien et al., 2003; Ukhorskiy et al., 2006). Broadly, ULF waves accelerate, transport, or lead to the loss of these electrons. Acceleration can occur by breaking the third adiabatic invariant through a drift-resonant interaction of the electrons with Pc5 (f = 2-7 mHz) waves (Elkington et al., 1999, 2003). Under a continuum of Pc5 wave activity, electrons can undergo stochastic acceleration such that some particles lose energy while others gain energy, leading to diffusion across drift shells (Elkington et al., 2003; Hudson et al., 2000; Shprits et al., 2008; Tu et al., 2012; Ukhorskiy et al., 2009). Finally, ULF waves can lead to efficient magnetopause shadowing and a commensurate rapid loss of outer belt electrons, particularly within a compressed magnetosphere that would occur with high dynamic pressure (Loto'aniu et al., 2010; Turner et al., 2012, 2013). However, a direct link between ULF waves driven by periodic density structures and magnetospheric particle dynamics has not yet been established. Stream interaction regions (SIRs) are important drivers of magnetospheric activity (Gosling & Pizzo, 1999; Kilpua et al., 2017; Tsurutani et al., 2006) and in particular can be important for energizing Earth's radiation belts (Bortnik et al., 2006; Borovsky & Denton, 2006; Miyoshi & Kataoka, 2005; Paulikas & Blake, 1979; Yuan & Zong, 2012). SIRs, formed when a faster solar wind overtakes a slower wind, can sometimes form forward shocks by 1 AU (Jian et al., 2006; Richardson, 2018), potentially leading to shock-induced ULF wave activity in the magnetosphere. The compression region between the two solar wind streams is also a source of broadband magnetospheric ULF wave power derived from intrinsic solar wind dynamic pressure fluctuations (Kilpua et al., 2013). The effects of SIRs on magnetospheric particle populations are complicated and depend on time, L-shell, and particle energy. Over the full timescale of the interaction (several days), the integral effect of SIR-driven geomagnetic storms is higher fluxes of radiation belt electrons compared to interplanetary coronal mass ejection (ICME)-driven storms, particularly at the higher L-shells (L > 4.5) (Borovsky & Denton, 2006; Kataoka & Miyoshi, 2006; Kilpua et al., 2015; Turner et al., 2019). Yet on shorter timescales, substructure within the SIR can lead to substantial loss of energetic particles. Kilpua et al. (2015), for example, found that loss mechanisms dominate over acceleration as solar wind structures within stream interface regions impact Earth, yet the overall interaction of SIRs with the magnetosphere led to higher flux levels than ICME-driven storms. They suggested that the cause of electron loss was magnetopause shadowing, driven by the high solar wind dynamic pressure and high ULF wave activity, which scatters electrons to larger L-shells where they can be lost more effectively. Analyzing ICME-driven storms, Hietala et al.
(2014) found a similar decrease in relativistic electron flux during passage of the ICME sheath and attributed that loss to ULF wave-driven diffusion and magnetopause shadowing, enhanced by solar wind dynamic pressure fluctuations in the ICME sheath. In this paper we analyze six events demonstrating periodic oscillations in Earth's magnetosphere following sudden solar wind dynamic pressure increases. All events were associated with SIRs of varying magnitudes, where we defined SIR broadly to include any velocity change that produces an interface region. In all cases, the number density increase at the leading edge of the SIR was followed by quasiperiodic mesoscale number density structures in the solar wind, generally in the 0.5- to 2-mHz (8-30 min) range. After ballistic propagation, ranging from just a few to 90 min, to account for the upstream location of the solar wind monitor, the same oscillations were observed as magnetic field oscillations in the magnetosphere by the GOES spacecraft. For all events, the energetic particle population in the outer radiation belt also responded directly to the solar wind driving. We further show that the periodic density oscillations in the solar wind appear to have been preexisting structures in the upstream slower solar wind that were amplified by the pileup caused by the impact of the higher-speed stream. The discontinuity itself appears to play no direct role in the formation of the density structures, nor in driving the magnetospheric ULF oscillations within this frequency range. We conclude by discussing the potential importance of quasiperiodic dynamic pressure drivers within SIRs on magnetospheric particles. Events The events presented here are representative examples visually identified from a number of different data sets and cover solar wind monitors from 14 to 261 R_E upstream from Earth's magnetosphere (Figure 1). They do not represent an exhaustive list of such events. We required each event to exhibit a sharp change in number density, generally a factor of 2 or more, preceded by relatively steady number density, that was then followed by quasiperiodic density variations. We further required that a GOES spacecraft be located in the dayside magnetosphere, where solar wind-driven oscillations are most likely to be observed. Solar wind data are from a variety of solar wind monitors, including Geotail, ACE, Wind, and THEMIS. Variations in solar wind dynamic pressure, p_dyn = n m v², are driven primarily by variations in the solar wind number density, n, and these variations are generally much larger, percentage-wise, than those in velocity. We therefore plot solar wind proton number density rather than dynamic pressure so as not to obscure this fact, and plot velocity separately. Events are presented below with increasing distance of the solar wind monitor from Earth. For each solar wind observation we add a constant time shift to the entire time series to account for ballistic propagation to Earth's magnetosphere. For one event, the constant time shift assumption is not appropriate, and we discuss the implications of that in the Discussion. Figure 2 shows solar wind |V_x| and the total pressure, P_t = B²/2μ₀ + Σ_j n_j k T_j, for the six events analyzed below, assuming n_α = 0.04 n_p and T_α = 4 T_p. Increases in the total pressure have been shown to be excellent markers of stream interface regions (Jian et al., 2006). The events have been aligned so that epoch time T = 0 is the time of the number density increase for each event.
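To make these quantities concrete, the Python sketch below evaluates the dynamic pressure p_dyn = n m v², the total pressure P_t = B²/2μ₀ + Σ_j n_j k T_j with the stated assumptions n_α = 0.04 n_p and T_α = 4 T_p (electron pressure is neglected here), and the constant ballistic time shift. The input values are typical, illustrative numbers rather than event data, and the ballistic estimate ignores the additional magnetosheath transit time.

import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
mu_0 = 4.0e-7 * np.pi  # vacuum permeability, H/m
m_p = 1.6726219e-27    # proton mass, kg
R_E = 6371e3           # Earth radius, m

def total_pressure_nPa(B_nT, n_p_cm3, T_p_K):
    # P_t = B^2/(2 mu_0) + n_p k T_p + n_alpha k T_alpha,
    # with n_alpha = 0.04 n_p and T_alpha = 4 T_p.
    B = B_nT * 1e-9
    n_p = n_p_cm3 * 1e6
    p_mag = B**2 / (2.0 * mu_0)
    p_th = n_p * k_B * T_p_K + (0.04 * n_p) * k_B * (4.0 * T_p_K)
    return (p_mag + p_th) * 1e9

def dynamic_pressure_nPa(n_p_cm3, v_kms):
    # p_dyn = n m v^2 (protons only).
    return (n_p_cm3 * 1e6) * m_p * (v_kms * 1e3) ** 2 * 1e9

def ballistic_delay_min(x_upstream_RE, v_kms):
    # Constant advection time shift from an upstream monitor to Earth.
    return x_upstream_RE * R_E / (v_kms * 1e3) / 60.0

print(total_pressure_nPa(B_nT=6.0, n_p_cm3=10.0, T_p_K=8.0e4))  # ~0.03 nPa
print(dynamic_pressure_nPa(10.0, 400.0))                         # ~2.7 nPa
print(ballistic_delay_min(14.0, 400.0))   # ~3.7 min for a monitor 14 R_E upstream
print(ballistic_delay_min(53.0, 420.0))   # ~13 min for a monitor 53 R_E upstream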
The 25 November 2008 event (Figure 2f) shows a classic SIR signature, with an increase in total pressure at the forward shock at T=0 and a peak in total pressure partway between the slow-speed and high-speed streams, marking the stream interface. Indeed, this event is in the Jian et al. (2006) event list of SIRs. At the other extreme, the second event (Figure 2b), from 29 November 1996, shows only a small change in total pressure and a small jump in solar wind velocity, which has not yet steepened into a forward shock. Yet it very clearly occurs at the boundary between two solar wind streams. The remaining events fall between these two extremes. All data in Figure 2 are from the Wind spacecraft for consistency, whereas below we present data from the closest solar wind monitors available. Figure 2. (a-e) Overview of the six events analyzed, demonstrating that each was associated with a stream interface region of varying magnitudes. Each panel shows the absolute value of the solar wind velocity |V x | (red) and the total pressure (purple). Epoch time T=0 is defined as the increase in solar wind dynamic pressure, and data are plotted ±1.5 days in either direction. All data are from the Wind spacecraft. Shaded regions correspond to the intervals of periodic density structures studied below. Event 1: 9 September 2005 This event occurred with the closest solar wind monitor, in this case Geotail, located very close to Earth's magnetosphere, at (14, 27, −0.4) R E geocentric solar equatorial (GSE), just upstream and duskward of the subsolar point. The visually identified time delay between features observed in the Geotail number density data and the geosynchronous magnetic field measured by GOES was 5 min, consistent with the location of Geotail. The Wind spacecraft was located well upstream and far off the Earth-Sun line, at (213, −94, −7) R E GSE, and observed the sharp number density increase just a few minutes prior to Geotail, suggesting a highly inclined shock relative to the Earth-Sun line. The shock normal can be calculated assuming velocity coplanarity, such that n=(v 2 −v 1 )/|v 2 −v 1 |, where v 2 and v 1 are the downstream and upstream velocities (Riley et al., 1996). For both spacecraft, v 1 =(−340,0,0) and v 2 =(−460,−100,40), yielding a shock normal of n=(−0.8, −0.6,0.2), or 37° in the x-y GSE plane. The velocity, v p , and proton number density, n p , measured by Geotail are shown in Figures 3b and 3c, respectively, and the GOES-12 vertical component of the magnetic field, B z , is shown in Figure 3c with the Geotail number density time shifted by +5 min and superimposed. The Geotail n p shows a large and sudden increase in the number density, followed by a series of 20-min periodic structures, with peak-to-peak amplitude changes of ∼10 particles per cubic centimeter, a roughly 20% amplitude variation. GOES-12 also observed 20-min oscillations immediately following the shock, with each oscillation associated with a solar wind density structure (Figure 3c). The amplitude of the periodic changes in both data sets was largest just following the shock and decreased thereafter. The GOES magnetic field data in isolation would suggest a damped compressional oscillation, but comparison with the Geotail number density shows that this apparent damping is a direct result of the decreasing amplitude of the driver.
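The velocity-coplanarity normal quoted for this event is easy to verify. The sketch below uses the GSE velocity vectors given in the text (km/s); the unrounded normal comes out near (−0.74, −0.62, 0.25), and the ∼37° quoted above corresponds to the rounded components (−0.8, −0.6).

```python
import numpy as np

def coplanarity_normal(v_up, v_down):
    """Shock normal from velocity coplanarity: n = (v2 - v1) / |v2 - v1|."""
    dv = np.asarray(v_down, dtype=float) - np.asarray(v_up, dtype=float)
    return dv / np.linalg.norm(dv)

v1 = (-340.0, 0.0, 0.0)        # upstream velocity, GSE [km/s]
v2 = (-460.0, -100.0, 40.0)    # downstream velocity, GSE [km/s]
n = coplanarity_normal(v1, v2)
angle_xy = np.degrees(np.arctan2(abs(n[1]), abs(n[0])))  # tilt from the x axis in the x-y plane
print(np.round(n, 2), round(angle_xy, 1))   # [-0.74 -0.62  0.25]  ~39.8 deg
```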
The Wind number density data, obtained 121 R E in Y from Geotail, exhibit some of the same variations observed by Geotail but are clearly different, providing some constraint on the along-shock coherence of these structures. The 1-min resolution >0.6-MeV electron flux from GOES-12 is shown in Figure 3d and shows similar periodic behavior that aligns with solar wind driving. Each peak in solar wind density and local magnetic field magnitude corresponded to a decrease in the >0.6-MeV electron flux, likely due to compression of the magnetosphere pushing the >0.6-MeV particle boundary inside geosynchronous orbit. Event 2: 29 November 1996 The Wind spacecraft was located just upstream of Earth at (53, 10, 5) R E GSE, providing solar wind measurements very near the Earth-Sun line. The propagation delay to the GOES location based on visual inspection was +13.5 min. Figure 4c shows the GOES-10 and GOES-8 B z component of the magnetic field. GOES-10 was located near noon magnetic local time, while GOES-8 was located postnoon, as indicated by the diurnal magnetic field signature. The solar wind number density is shown in Figure 4b, time shifted by +13.5 min. These data show a sharp increase and ∼20-min periodicities in the number density near 20 UT (short horizontal ticks). The initial increase is ∼60%, leading directly to a ∼60% increase in the solar wind dynamic pressure, ρv 2 , impacting Earth's magnetosphere. An additional 10-min embedded oscillation is also apparent, marked with vertical ticks; the GOES-8 and GOES-10 magnetic field data also observed these suboscillations. A comparison between the GOES-10 magnetic field data with the background variation removed, and the time-shifted solar wind density, shows good agreement for these seemingly damped oscillations (Figure 4d). Rather than being an indication of internally damped oscillations, the GOES oscillations are directly driven by solar wind density oscillations. Electron particle flux data (E=50-225 keV) from the LANL 1990-095 satellite are shown in Figure 4d. The same 20-min periodicity is observed in the electron flux, suggesting that at these energies the geosynchronous particle population responds promptly to the direct quasiperiodic solar wind driving. The detrended GOES comparison with Wind in Figure 4c reveals further details. In addition to the sudden increase and oscillations after 20 UT, the preceding 4-hr interval contained oscillations that were observed both in the solar wind number density and in the GOES magnetic field data. Some of the more prominent peaks are marked with vertical ticks in Figure 4c. We note in particular that the same-period oscillations were observed in the hour prior to this feature (from 19-20 UT), showing that the discontinuity itself is not a significant driver of magnetospheric oscillations. Similar oscillations were observed in the LANL 1990-095 data. Finally, there was no clear shock associated with this event. As shown in Figure 2b, this interval occurred slightly after the initial increase in velocity. Event 3: 11 February 2000 The projected positions of Polar and the geosynchronous orbiting satellites onto the equatorial plane at 0300 UT on 11 February 2000 are shown in the left panel of Figure 5. Polar was located at high latitude (λ∼50°) on the dusk flank and, based on its location, likely mapped to the dusk tail plasma sheet prior to shock arrival.
GOES-10 was located at the dusk flank, while the two Los Alamos satellites 1989-046 and 97A were predusk and postdawn, respectively. The Wind satellite was located upstream in the solar wind very near the Earth-Sun line at (127, −0.3, 4.6) R E in geocentric solar magnetospheric (GSM) coordinates. The solar wind dynamic pressure and proton number density measured by Wind is shown in Figure 5b The front detector electron (E>375 keV) counts (A1) from the Polar HIST instrument observed four impulsive increases in the particle flux, to levels well above the ambient background. The plasma data from Hydra (not shown), which measures the thermal plasma, suggest that Polar entered a boundary layer region with a mix of magnetospheric and magnetosheath plasma at 0300 UT. The LANL 1989-046 spacecraft was located just sunward of GOES10 and detected four impulsive increases in the electron flux (E=50-150 keV), which also coincided with solar wind dynamic pressure increases ( Figure 5d). The enhancements appear to become increasingly dispersive. The timing of the arrival of the higher energy particles (E=105-150 keV) coincides most closely with the increases of B z observed at GOES10. The LANL-97A spacecraft was located at the dawn flank and while it did not detect a signal in the energetic electron flux (E=50-75 keV) associated with the second impulse, the other increases correspond to dynamic pressure increases. The drift period of a 100-keV equatorially mirroring electron is ∼90 min, so the variations observed here cannot be due to drift echoes. Instead, we may be observing local, prompt, energization of particles, or the movement of particle boundaries as the magnetosphere is driven by the solar wind changes. The globally coherent response of the energetic electrons indicates that the solar wind density structures drove globally coherent magnetospheric dynamics. Event 4: 31 October 2000 The ACE spacecraft was located in the upstream solar wind at (220, −1, −24) R E in GSM coordinates, very near the Earth-Sun line; Wind was located more than 150 R E in Y, so is not used. A sudden 30% increase in |V x | and factor of 3-4 increase in N (Figures 6a and 6b) was observed at 1628 UT (unshifted). In the downstream shock region, the number density shows a series of ∼8-min variations, with an amplitude of ∼20%. These variations continued for at least 2 hr. Similar variations were observed in the velocity, albeit at a much lower relative amplitude (∼2%). The magnetic field data from the GOES-10 spacecraft are shown in Figure 6c. The shock arrival at the magnetosphere is indicated at 1713 UT by a sharp jump in the total field strength. The ∼8-min fluctuations observed in the number density by ACE 44 min earlier were also observed by Journal of Geophysical Research: Space Physics GOES-10 as magnetic field increases (vertical lines in Figure 6). Figures 6e and 6f show the energetic electron (E>0.6 MeV) and proton flux (E>0.6-4.0 MeV) from GOES-10. Both the electrons and protons show similar variations as the driving density variations, particularly between 18 and 19 UT. The electron and proton variations appear to be in phase with the number density variations. Event 5: 27 November 2010 The Wind spacecraft was located far upstream at (220, 7, −23) R E , near L1, while THEMIS A, D, and E were near the dayside magnetopause, and THEMIS-B, having now become ARTEMIS, was in the solar wind off the dawn flank orbiting near the Moon (Figure 1). 
The Wind spacecraft observed a sharp, factor of 3 increase in the number density just prior to 1910 UT (Figure 7b), without a sharp change in velocity (Figure 7a). A series of quasiperiodic, ∼5- to 10-min oscillations followed. This solar wind structure reached the magnetosphere almost 1 hr later, where the three THEMIS spacecraft, located in the dayside magnetosheath (note the factor of ∼4 increase in density compared to Wind), observed a similar increase in number density followed by similar periodic oscillations. Note the strong similarity between the Wind oscillations observed 1 hr prior and the THEMIS observations, consistent with quasi-stationary structures that have been advected by the solar wind. The decrease in THEMIS A, D, and E density at 2040 UT is due to entry into the magnetosphere. A few minutes later, ARTEMIS observed the solar wind structure near lunar orbit (Figure 7d). Note that the observations between the Wind and THEMIS + ARTEMIS spacecraft were separated in time by 60 min and in the Y direction by ∼50 R E , providing a lower limit on the timescale for evolution and azimuthal spatial coherence of these structures. Figure 7e shows magnetometer data from the GOES-13 spacecraft. For this event, GOES-13 was located in the late afternoon sector and, compared to the other events where GOES was closer to noon, it did not see a strong compressional response. Instead, the periodicities were most clearly observed in the "normal" (N) component, which is azimuthal (Figure 7e). Finally, Figure 7f shows the 1-min energetic electron flux at three different energy levels from the GOES-13 spacecraft. The lower-energy fluxes show a strong similarity to the solar wind periodicities and the GOES magnetic field variations, while the higher-energy fluxes appear to respond more directly to the longer timescale interaction of enhanced density between 2010 and 2040 UT. The THEMIS A, D, and E spacecraft show some embedded features at timescales shorter than 5 min. For example, both the first and second density structures observed at Wind appear as two peaks on top of a larger background at THEMIS (see 2010-2020 UT in Figure 7d). The THEMIS-B data show hints of this as well. The GOES magnetometer and electron data appear to be responding to this higher-frequency driving, as indicated between 2015 and 2020 UT. We note that at 3-5 min, the driving by the solar wind is no longer quasi-static, complicating the interaction, particularly for the energetic electrons. So although the electrons appear to be responding to the driving, the interaction is not likely to be direct the way it is for lower frequencies. Event 6: 25 November 2008 This event was identified as an SIR in the Jian et al. (2006) list and shows the classic features of an SIR. The Wind spacecraft was located well upstream of Earth, and off the Sun-Earth line, at (252, 47, 2) R E in GSE coordinates. Wind observed a sharp 20% increase in velocity and a factor of 3 increase in number density (Figures 8a and 8b). This number density increase lasted approximately 5 hr. As shown previously in Figure 2f, the velocity increase at 0500 UT is the forward shock. The high-speed stream associated with this SIR arrived 12 hr later. Approximately 87 min after the shock was observed by Wind, the density structure impacted Earth's magnetosphere.
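The ∼87-min delay quoted here is consistent with a simple flat (ballistic) propagation estimate, Δt ≈ X/|V x |. The sketch below is only illustrative: it ignores Wind's off-axis position, and the ambient wind speed is a value we supply for illustration rather than one quoted in the text.

```python
R_E_KM = 6371.0

def ballistic_delay_min(x_upstream_re, v_x_kms):
    """Constant time shift for a monitor at X = x_upstream_re (R_E), assuming
    structures are frozen into a radial flow of speed |v_x| (km/s)."""
    return x_upstream_re * R_E_KM / abs(v_x_kms) / 60.0

# Wind at X ~ 252 R_E; an ambient slow-wind speed of ~310 km/s gives ~86 min,
# close to the ~87-min delay inferred for this event.
print(ballistic_delay_min(252.0, 310.0))
```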
Geotail and THEMIS were both located in the near-Earth solar wind and observed the density structure, while GOES was in the afternoon sector moving toward midnight and observed magnetic field fluctuations associated with the dynamic pressure changes. We show the GOES magnetic field and a background fit in Figure 8c. We subtract the background fit to highlight the variations in Figure 8d. This introduces an artificial brief negative excursion prior to the oscillations, which we indicate by coloring it gray. Figure 8e shows a comparison of the Geotail (23, 18, −7), THEMIS-B (−7.7, −30, −3.5), and Wind (252, 47, 2) density oscillations, and the related GOES magnetic field oscillations. Each of the solar wind measurements have been shifted in time to align with the GOES magnetic field. For the first 2 hr of the event, particularly between 1 and 2 UT, there is good correlation between the quasiperiodic increases in solar wind number density observed by all three spacecraft, and the GOES magnetic field oscillations. The good agreement between the measurements continues for ∼2 hr, again placing a lower limit on the timescale of evolution of the solar wind structure. We show the 1-min resolution 40-keV (black) and 450-keV (red) electron data from the GOES-13 spacecraft in Figures 8f and 8g. We have not plotted the GOES-10 particle data, which was at insufficient resolution (5min averages) to cleanly observed the driving, and GOES-13 magnetometer data are unavailable. The GOES-13 electron data show a two-part behavior. First is an immediate increase in the particle flux near 0 UT, followed by a slow 2-hr decrease. This is seen in all electron channels of the GOES-13 detector (40-475 keV), although we show only the 40-keV channel for clarity. On top of this overall trend are small oscillations in the particle flux. To pull out more details, we have removed the background trend of both the 40-and 475-keV channels and plotted the residual in Figure 8g. The data show a prompt and coherent response to the solar wind driving, particularly near 0030-0200 UT, with a one-to-one correlation of the particle flux variations and the Geotail solar wind measurements in particular. Discussion The case studies presented here represent a broad sampling of quasiperiodic density enhancements observed following a sharp increase in solar wind number density, followed by magnetospheric oscillations at the same periodicity. This provides further direct evidence that the solar wind contains periodic density enhancements that drive magnetospheric oscillations, significantly expanding the number of case studies of such events. The observed periods were typically 5-20 min (3.3-0.8 mHz), much lower than the expected magnetospheric cavity mode periods . Since these density structures are frozen into the flow, it is appropriate to think of them as spatial structures, embedded in and carried by the solar wind flow, rather than propagating as waves (Viall et al., 2008). Table 1 provides a conversion of the timescales of the solar wind density structures into equivalent length scales, assuming a nominal solar wind speed of 400 km/s. Note that the shortest-duration (i.e., smallest) density structure at 5 min equates to a 20 R E equivalent spatial structure that envelops the entire dayside magnetosphere. At this period and higher (roughly 3.3 mHz and lower), the interaction of the structure with the magnetosphere is quasi-static and can be considered m=0 poloidal oscillations. 
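The conversion behind Table 1 is a single multiplication, L = v_sw T; a short sketch with the nominal 400 km/s used in the text is shown below.

```python
R_E_KM = 6371.0

def structure_length(period_min, v_sw_kms=400.0):
    """Equivalent radial size of an advected density structure: L = v_sw * T."""
    length_km = v_sw_kms * period_min * 60.0
    return length_km, length_km / R_E_KM

for period in (5, 10, 20):                      # minutes
    km, re = structure_length(period)
    print(f"{period:>2} min -> {km / 1e3:5.0f} Mm ({re:5.1f} R_E)")
# 5 min -> ~120 Mm (~19 R_E): even the smallest structures envelop the dayside
# magnetosphere, consistent with the quasi-static, m=0 interpretation above.
```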
The amplitude of the magnetospheric oscillations is directly related to the amplitude of the solar wind number density oscillations, as governed by B m 2 ∼ nv 2 (i.e., the square of the compressional field perturbation scales with the solar wind dynamic pressure). When propagated to Earth these quasiperiodic enhancements were closely associated with similar quasiperiodic total magnetic field strength oscillations inside the magnetosphere, suggesting a direct driving of the magnetosphere by the quasiperiodic changes in solar wind dynamic pressure. At shorter timescales, less than ∼5 min, the structure is smaller than the dayside magnetosphere, and the interaction can no longer be considered quasi-static, leading to a more complicated interaction (see, e.g., the 27 November 2010 event). In this study we focused explicitly on events that contained a sharp increase in the solar wind number density, both because the sharp increase in density is a useful fiducial marker for multispacecraft comparison and also because the sudden impulse of such events on the magnetosphere can have significant particle impacts, either via direct acceleration by the induced electric field of the shock passage (Li et al., 1993;Zong et al., 2009, 2012) or other related effects such as particle transport, loss, and resonant acceleration (see review by Hudson et al., 2008). More specifically, solar wind shock impacts on the magnetosphere are theoretically presumed to initiate global ULF pulsations. We find no evidence for such shock-induced cavity modes at the frequencies (f<4 mHz) studied here. Instead, low-frequency oscillations observed within the magnetosphere following a shock impact appear to be directly driven by periodic density structures in the solar wind. For example, note for the 29 November 1996 event, the sudden appearance at both GOES spacecraft of apparently damped magnetic field oscillations near 20 UT (Figure 4c). This type of signature-an increase in the compressional component of the magnetic field followed by damped oscillations-is exactly the type of signature predicted to occur in a cavity mode oscillation, yet it was clearly directly driven by the solar wind. We note this is consistent with the recent result that established that cavity modes were limited to f=3-20 mHz, frequencies higher than observed here. The association of SIRs with periodic density structures is potentially significant from the standpoint of magnetospheric particle dynamics. High-speed streams are known to be efficient drivers of radiation belt flux enhancements, and SIR-driven storms are more effective at increasing energetic electron flux in the outer radiation belts than ICME-driven storms (Dmitriev et al.;Miyoshi & Kataoka, 2005, 2008). There is some evidence that ULF waves, particularly at the frequencies observed here, can play an important role in the energization and transport of radiation belt particles, particularly in the outer zone (Elkington et al., 1999;Mathie & Mann, 2000;O'Brien et al., 2003;Ozeke & Mann, 2008). For all six events we demonstrated a one-to-one correspondence of magnetospheric energetic particles over a range of energies, up to several MeV, with the periodic solar wind density structures and the magnetospheric ULF waves. All six events demonstrate that these periodic structures can have prompt, coherent, and global impacts on the energetic electron population. Kilpua et al.
(2017) found that the SIR interval itself, the region between the low- and high-speed solar wind, is associated with overall decreases in the radiation belt electron flux and suggested enhanced magnetopause shadowing as the cause. The Kilpua et al. (2017) study examined power in a few-millihertz bandwidth (f=1.6-5.5 mHz), rather than testing for discrete periodicities. Quasiperiodic driving by the solar wind is likely to be more impactful than broadband driving, particularly for drift resonance effects and enhancing loss through magnetopause shadowing (e.g., Elkington et al., 1999;Turner et al., 2012). Therefore, the fact that SIRs appear to have inherent periodic density structures that then drive magnetospheric oscillations could contribute to SIR geoeffectiveness in a number of different ways, for example, by enhancing radial diffusion while also periodically altering the location of both the magnetopause and particle trapping boundaries, possibly leading to an increase in shadowing efficiency. The Polar spacecraft observations from the 11 February 2000 event of enhanced particle fluxes at such high latitudes (Figure 5) suggest these structures could also produce loss of magnetospheric particles into Earth's atmosphere, through enhanced pitch angle scattering. The impact of such solar wind driving on magnetospheric particles is highly energy and location dependent. At the timescales examined here, the observed particle impacts, across a broad range of energies from tens of keV to several MeV, are likely due to motion of trapped particle boundaries driven by the forced breathing of the periodic density structures. Although not examined in this study, the integrated effect of such driving on overall loss or acceleration of the different particle populations could be important. We believe this to be a fruitful area for further research. A critical unanswered question is the source of these density periodicities. There has been much previous research on the existence of periodic number density structures in the solar wind, and the period of oscillations of the events here falls into the range of these previously discussed oscillations (Kepko & Spence, 2003;Kepko et al., 2002;Viall et al., 2008;Viall, Kepko, and Spence 2009;Viall, Spence, and Kasper 2009). These types of quasiperiodic density structures have been observed in the solar wind away from the types of shock structures examined here and appear to be ubiquitous, especially in the slow wind. Apart from the apparent association with a sharp density increase, which was a deliberate selection criterion, there are no obvious differences between the periodic density structures observed in this study compared to those in previous studies. This leads to the question of whether the shock- and discontinuity-associated oscillations reported here are generated by the shock or whether they are ambient features in the solar wind that are processed by the shock. There is now substantial in situ (Di Matteo et al., 2019;Kepko et al., 2016;Viall, Spence, and Kasper 2009) and remote sensing (DeForest et al., 2018;Viall & Vourlidas, 2015;Viall et al., 2010) evidence that periodic density structures are formed in the solar corona, at the time of solar wind formation.
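The distinction drawn above between broadband power and discrete periodicities can be tested on any candidate interval by detrending the number density and inspecting its periodogram. The sketch below (SciPy, 1-min data assumed, a simple boxcar background removal) is only an illustration of the idea; it is not the spectral analysis used in the cited studies.

```python
import numpy as np
from scipy import signal

def density_periodogram(n_p, dt_sec=60.0, detrend_win=45):
    """Periodogram of a detrended, evenly sampled density series.

    n_p         : 1-D array of proton number density (1-min cadence assumed)
    detrend_win : boxcar width (samples) used to remove the slow background
    Returns (frequency in mHz, power)."""
    background = np.convolve(n_p, np.ones(detrend_win) / detrend_win, mode="same")
    freqs, power = signal.periodogram(n_p - background, fs=1.0 / dt_sec, window="hann")
    return freqs * 1e3, power

# Synthetic check: a 20-min (0.83 mHz) oscillation riding on a slow trend.
t = np.arange(0.0, 6 * 3600.0, 60.0)
n_p = 10.0 + 2e-4 * t + 1.5 * np.sin(2 * np.pi * t / 1200.0)
f_mhz, p = density_periodogram(n_p)
print(f_mhz[np.argmax(p[1:]) + 1])     # peaks near ~0.8 mHz
```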
The six events span a large range in amplitude of interacting velocity streams, with velocity jumps ranging from 50-300 km/s, yet all the events otherwise show remarkably similar characteristics in terms of the periodic density structures. We note that the two events that did not exhibit a shock (29 November 1996 and 27 November 2010) appeared qualitatively no different than the four that did. Likewise, the event that was part of a classic SIR (Event 6) appeared qualitatively no different than the other five. It is therefore unlikely that the structures are directly related to the forward shock itself. We explore this a bit further below. Figure 9 shows a new event that has characteristics of the six events previously presented, including a shock and periodic density structures in the downstream region. Yet, unlike the six events shown above, this was associated with a large ICME, and the periodic density structures were observed in the ICME sheath region, as opposed to a SIR. Physically, however, both types of events represent regions of compressed solar wind at the leading edge of a faster solar wind as it impacts and overtakes a slower solar wind. For this event, the Wind spacecraft was located at (208, −80, −6) R E , well off the Earth-Sun line. The ICME is well defined by the magnetic field signatures and extends from early on 28 June to midday on 29 June (Figure 9c). The shaded interval (B) in Figure 9 highlights the ICME sheath region, while shaded region (A) highlights an unperturbed interval of slower solar wind upstream of the ICME. The sheath contained ∼15to 20-min periodic density structures, and these are shown in Figure 10b. Similar oscillations were observed 50 min later by the GOES spacecraft, which was located in the dayside magnetosphere during most of this interval (Figure 10c). Despite the very large azimuthal separation (80 R E ) between Wind and Earth, there is a clear association between the Wind-observed ∼15to 20-min density variations and the GOES magnetic field measurements, particularly between 17 and 18 UT. The ICME sheath is a region of solar wind magnetic field and plasma that has been plowed into and piled up due to the faster-moving magnetic cloud that was injected into the solar wind earlier. Any existing structures in the slower solar wind ahead of the coronal mass ejection would have been compressed and amplified en route to 1 AU. If the periodic density structures observed in the downstream shock region were preexisting structures in the slower solar wind, then they should be present at a lower amplitude and uncompressed (stretched out) in the unperturbed region (A). To test this hypothesis, we artificially compressed the unperturbed solar wind in interval (A). We first converted the time series to a spatial series, such that x i =V i ·(T i+1 −T i ), then summed the number density over three point windows, in effect compressing the spatial structures by a factor of 3, consistent with the density jump observed at the shock. We then converted the length series back into time, assuming a constant 450-km/s velocity for simplicity. This time series is shown in Figure 10d in red. Note that the 5 hr of compressed data shown in Figure 10d represents 15 hr of observed solar wind data. We applied the same compression technique to the event of 25 November 2008, which was a strong SIR (see Figure 2f). We show the original solar wind number density data in Figure 11a, and the compressed data (also compressed by a factor of 3) are shown in Figure 11b in red. 
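The artificial compression test just described can be reproduced directly from the recipe in the text (convert the time series to a spatial series x i = V i (T i+1 − T i ), sum the density over three-point windows, shrink the corresponding lengths by the same factor, and convert back to time at a constant 450 km/s). The sketch below is our reading of that recipe; variable names are ours and details such as edge handling may differ from the authors' implementation.

```python
import numpy as np

def artificially_compress(t_sec, n_p, v_kms, factor=3, v_out_kms=450.0):
    """Compress an unperturbed solar wind interval by `factor`, mimicking the
    pileup at the leading edge of an SIR or ICME sheath.

    t_sec : sample times [s];  n_p : number density;  v_kms : measured speed.
    Returns (time [s], density) on a constant-speed time axis."""
    t_sec, n_p, v_kms = map(np.asarray, (t_sec, n_p, v_kms))
    dx = v_kms[:-1] * np.diff(t_sec)              # x_i = V_i * (T_{i+1} - T_i), km
    m = (len(dx) // factor) * factor              # trim to a multiple of `factor`
    dx = dx[:m].reshape(-1, factor)
    n_win = n_p[:m].reshape(-1, factor)
    n_c = n_win.sum(axis=1)                       # summed density (x `factor`)
    x_c = dx.sum(axis=1) / factor                 # spatial extent shrunk by `factor`
    t_c = np.concatenate(([0.0], np.cumsum(x_c / v_out_kms)[:-1]))
    return t_c, n_c

# Example: 15 hr of quiet upstream wind maps onto ~5 hr of "compressed" wind,
# as in the comparison discussed around Figures 10 and 11.
```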
In the compressed series, four new quasiperiodic enhancements, labeled A-D, appear in the preshock solar wind region and resemble the periodic structures that were observed following the shock from about 2230 UT onward. We have identified the same A-D intervals in the original solar wind data (Figure 11a), where they appear as longer (approximately 1 hr vs. 20 min), smaller-amplitude periodicities. Both of these events strongly support our assertion that the observed periodic density structures downstream of the solar wind shock were not created by the shock, but instead existed in the ambient solar wind, and were amplified and compressed as the faster solar wind ran into it. Our results indicate that the quasiperiodic mesoscale structures within the SIR were previously structures in the uncompressed slower solar wind ahead and are consistent with the belief that these structures are created at the time of solar wind generation. Evidence for a solar source of periodic density structures includes remote sensing of these structures down to a few solar radii (Viall & Vourlidas, 2015;Viall et al., 2010) and in situ studies using both Helios (Di Matteo et al., 2019) and L1 observations (Kepko et al., 2016;Viall, Spence, and Kasper 2009). Kepko et al. (2016) recently examined a 12-hr interval of solar wind that contained periodic density structures, which previously had been studied as driving magnetospheric pulsations (Kepko & Spence, 2003). The composition and magnetic field changes of that event indicated that the structures were formed by quasiperiodic S-Web magnetic reconnection back at the Sun (Antiochos et al., 2011;Higginson & Lynch, 2018). The observations here are consistent with that picture and provide further evidence for a solar source of periodic density structures. The observations presented in this paper provide lower limits on both the azimuthal coherence and temporal stability of these structures, but we note that it is only the lack of spacecraft measurements beyond L1 halo orbits that precludes a broader investigation. Periodic density structures existed for up to several hours behind the discontinuities. For the 9 September 2005 event, the 20-min periodicities were present for 3 hr following the shock. At 450 km/s, this corresponds to a spatially striated region of solar wind of 5 Gm, containing embedded periodic 540-Mm mesoscale structures. Although the preexisting structures were all formed at the time of solar wind release and acceleration, the newly amplified and compressed structures near the interfaces are "younger" than those further downstream. There is an interplay between forces working to compress and amplify the smaller-amplitude, spatially larger preexisting structures at the leading edge of the stream interface and processes such as waves and turbulence that lead to the decay of such structures further downstream from the shock, and this interplay bears further investigation. The distance between the solar wind monitor and Earth's magnetosphere for these events was up to 261 R E (1.6 Gm) with a maximum delay of 87 min between the solar wind observation and the magnetospheric response. For the 25 November 2008 event, the azimuthal separation between the Wind and THEMIS-B spacecraft, which both observed similar density structures, was 77 R E , indicating large azimuthal coherence of the structures.
For the 9 September 2005 event (Figure 3), Wind and Geotail were separated in azimuth by 121 R E , and while the overall N p structure between the spacecraft was similar, the periodic density structures differed. The 25 November 2008 event provides further insight into the structuring of the periodic density structures. Figure 12 shows a comparison of just THEMIS-B and the Wind number density measurements. Near the beginning of the event, near 0 UT, the periodic structures between the spacecraft were aligned ( Figure 12a). Near the end of the event, near 4 UT, the periodic density structures seen in Wind and THEMIS-B came into agreement again, but at a different time shift, as indicated in Figure 12b, where we have aligned the entire 5-hr density enhancement. The best alignment for the density periodicities at the start of the interval leads to a small gap in time of 6 min between the sharp increase at the leading edge. THEMIS-B and the other near-magnetosphere spacecraft observed a density structure at the leading edge that was not observed by Wind. This difference is also seen in Figure 8e. This alignment of the periodic density structures at the beginning of the event produces an out of phase relationship at the end of the event. If instead we align the leading and trailing edges, the three large density periodicities at the end of the interval line up. Wind and the magnetospheric spacecraft were separated in space by 260 R E in X and 77 R E in Y, and in time by 81 min. Since the overall structure, from 0000-0430 UT, is aligned across the spacecraft, a temporal change explanation is unlikely. Instead, the observations are consistent with flux tubes at the leading edge that are twisted or misaligned with the front edge of the density structure relative to those at the back edge. As this was a very strong SIR, it contained a strong velocity shear in the Y and Z components of 25-50 km/s, which could lead to twisting of the flux tubes within the SIR. The association of these periodic density structures with SIRs and ICME sheath regions, and the resultant compression of the solar wind plasma by these interactions, suggests a link to planar magnetic structures (Nakagawa et al., 1989). Planar magnetic structures are intervals over which the variations and rotations of the magnetic field occur in a single plane and are thought to be preexisting discontinuities in the solar wind that have been amplified and then aligned by the faster solar wind compressing them (Neugebauer et al., 1993). Whether the periodic density structures behind the forward shocks of SIRs are strictly planar magnetic structures is not relevant for our study of magnetospheric effects. However, our observations here support the theory that preexisting structures in the solar wind are compressed, aligned, and then amplified by the faster speed wind behind them. Conclusions We have presented six SIR events that contain quasiperiodic density structures downstream of the leading edge of the interface. After accounting for solar wind propagation, which was up to 90 min, these periodic density structures were then observed to directly drive global magnetospheric oscillations with the same periodicity. For all events, we demonstrated a prompt and coherent energetic particle response to the periodic structures across a range of particle energies. The events ranged from strong SIRs, with a large jump in velocity across the region and a forward shock, to weak SIRs, where the velocity change was minor and no shock had yet developed. 
Despite the range of velocity change, each event contained periodic density structures with similar characteristics. We further presented evidence that these periodic structures appear to be preexisting structures in the slower solar wind and that these preexisting structures are amplified and compressed as the higher-speed flow pushes into the slower wind. The shock or discontinuity itself appears to have no role in generating these periodic structures nor in generating the <4-mHz magnetospheric oscillations. A key point is that the structures injected into the solar wind near the Sun and that are swept up and amplified by the higher-speed solar wind are quasiperiodic, rather than random or turbulent. The resultant magnetospheric impact is therefore coherent, and the change in total field strength inside the magnetosphere and changes to the magnetopause location occurs in a quasiperiodic manner. The period of these global, compressional (poloidal) ULF waves, typically 5-20 min, falls within and extends beyond the Pc5 band, which are known to be important for energetic particle acceleration, loss, and transport, particularly in the outer radiation belts. Other papers have observed even lower-frequency driving, although without examining the particle impacts (e.g., Kepko et al., 2002). Since these directly driven pulsations extend beyond the Pc5 band, their magnetospheric impacts are underexplored. Studies examining magnetospheric impacts of low-frequency ULF pulsations should therefore not focus exclusively on the Pc5 bandwidth. For every event studied here, the periodicity of the solar wind driver was clearly imprinted upon energetic magnetospheric particles. The association of SIRs, which are known to have substantial impacts upon the radiation belts, with periodic density structures likely enhances their effectiveness. While these results show that there is driving of energetic particles by periodic solar wind density structures, the effects are not simple. There are global and local particle responses to the interaction, and the response has time-dependent and quasi-static aspects that is energy dependent and should be taken into account particularly when modeling such interactions. Finally, while ULF waves are known to be important for radial transport of radiation belt electrons, the current state of the art relies on empirical relationships of ULF wave power to geomagnetic indices (e.g., Kp) to model this transport. But ULF power departs significantly from these empirical representations on short timescales, and empirical representations are often unable to accurately describe the diffusion rates (e.g., Mann et al., 2016) That these periodic density structures exist in the solar wind and drive global compressional oscillations, and are not contained in these empirical relationships, may be an important missing piece to specifying accurate rates of energy-dependent acceleration, transport, and loss within Earth's radiation belts. Additionally, it is believed that preconditioning of Earth's magnetosphere is an important factor in dictating whether the radiation belts are enhanced or not during a particular event. These solar wind mesoscale structures form, along with the magnetic field, the structures that control this preconditioning.
11,326
2019-10-01T00:00:00.000
[ "Physics" ]
Image Compression Using Fractal Functions The rapid growth of geographic information technologies in the field of processing and analysis of spatial data has led to a significant increase in the role of geographic information systems in various fields of human activity. However, solving complex problems requires the use of large amounts of spatial data, efficient storage of data on on-board recording media and their transmission via communication channels. This leads to the need to create new effective methods of compression and data transmission of remote sensing of the Earth. The possibility of using fractal functions for image processing, which were transmitted via the satellite radio channel of a spacecraft, is considered. The information obtained by such a system is presented in the form of aerospace images that need to be processed and analyzed in order to obtain information about the objects that are displayed. An algorithm for constructing image encoding–decoding using a class of continuous functions that depend on a finite set of parameters and have fractal properties is investigated. The mathematical model used in fractal image compression is called a system of iterative functions. The encoding process is time consuming because it performs a large number of transformations and mathematical calculations. However, due to this, a high degree of image compression is achieved. This class of functions has an interesting property—knowing the initial sets of numbers, we can easily calculate the value of the function, but when the values of the function are known, it is very difficult to return the initial set of values, because there are a huge number of such combinations. Therefore, in order to de-encode the image, it is necessary to know fractal codes that will help to restore the raster image. Introduction Remote sensing of the Earth (REE) is the observation of our planet and the determination of the properties of objects on the earth's surface, obtained with the help of imaging devices installed on aircraft and artificial satellites of the Earth (Figure 1). The obtained data are aerospace images of the observed part of the earth's surface in digital form in the form of raster images (Figures 2 and 3). These data are subject to further detailed analysis, namely, the determination of quantitative characteristics of the object under study, which are necessary to predict the development of a phenomenon or process. The processing and interpretation of remote sensing data is closely related to digital image processing. Remote sensing allows one to obtain from space high-quality images of the earth's surface, which help to solve practical problems in various fields of human activity. However, the amount of this graphical information is very large and needs to be compressed when transmitting data over communication channels and then used in geographic information systems. Today, fractal methods are often used to transfer images from satellites to the ground, which help to give a better picture. The main feature of these methods is the property of self-similarity of the image. Such methods provide large compression ratios. However, they need significant development while taking into account many criteria (speed, compression ratio, quality during decompression). They can be considered a real alternative to JPEG for many classes of images used in everyday life. 
Fractal image compression (fractal transformation, fractal encoding) is a lossy image compression algorithm based on the iterated function system to images [1,2]. The encoding algorithm is encoded [3]. The concept of the fractal method was first introduced in 1990 by the British mathematician Michael Barnsley [4]. It consists of the fact that in the image it is necessary to find separate self-similar fragments which are repeated many times. He proved a class of theorems that allowed for the effective compression of images. In [5], Arno Jacquin described a new method of fractal encoding, in which the image is divided into domain and rank blocks, which cover the entire image. This approach became the basis for the creation of new fractal encoding methods used today. Jacquin's method was perfected by Yuval Fisher [6] and other scientists. A large number of scientific papers are devoted to the study of the effectiveness of the fractal image compression method [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. For example, in [7] the algorithm of fractal compression of images using space-sensitive hashing is considered. Various methods are proposed in [8][9][10][11][12][13] aimed at reducing the volume of domain blocks, which makes it possible to reduce the encoding time of the search. In [8], a discrete cosine transform was used to classify all domain blocks into a number of classes, in [9] R-trees were used for this purpose, and in [10] a self-organized neural network was used. In [11,12], scientists use a genetic algorithm to search for optimal domains. In [13], the search is limited by the degree of information entropy of the domains. In [14][15][16][17][18][19][20][21][22], the variants of optimization and increase in fractal encoding speed and possibilities of their practical application are analyzed. Today, the topic of fractal compression algorithms is still being actively studied. The aim of our research is to model the class of fractal functions, study their properties and establish the possibility of their application for an efficient algorithm for encoding images that are presented in digital form. Materials and Methods Fractal encoding is a mathematical process for encoding rasters into a set of mathematical data that reproduces the fractal properties of a given image. This encoding is based on the fact that all natural objects have a lot of similar information in the form of repetitive patterns. They are called fractals. Fractal decoding is a reverse process in which a system of fractal codes is converted into a raster [14]. The concept of fractal was proposed by the French-American mathematician Benoit Mandelbrot. In 1977, he published "Fractal Geometry of Nature", describing repetitive drawings from everyday life [23]. According to him, many geometric figures consist of smaller figures, which when being enlarged repeat accurately a larger figure (Figures 4 and 5). After conducting research, he also found that fractals have chaotic behavior, a fractional infinite dimension (how completely a fractal fills a space when magnified to smaller details), and can be described mathematically using simple algorithms [24]. It is known that many geographical objects have fractal properties-contours of coasts and oceans, rivers, mountain gorges, and state borders where they are drawn by natural contours [24]. 
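To make the domain/range-block idea described above concrete, here is a deliberately tiny, Jacquin-style sketch in Python: for every range block it finds the best-matching downsampled domain block together with a least-squares contrast/brightness pair, and decoding iterates the stored maps from an arbitrary start image (contractive maps guarantee convergence). This is a generic illustration only, not the Q/G-cylinder scheme developed later in this paper, and it omits rotations, reflections, and the acceleration strategies of [8-13].

```python
import numpy as np

def encode_fractal(img, r=4):
    """Toy fractal encoder: for each r x r range block store (domain index, s, o)
    minimizing ||s*D + o - R||^2, where D is a 2r x 2r domain block averaged
    down to r x r.  Image sides must be multiples of 2*r."""
    H, W = img.shape
    domains = []
    for i in range(0, H - 2 * r + 1, 2 * r):
        for j in range(0, W - 2 * r + 1, 2 * r):
            d = img[i:i + 2 * r, j:j + 2 * r].astype(float)
            domains.append(d.reshape(r, 2, r, 2).mean(axis=(1, 3)))
    codes = []
    for i in range(0, H, r):
        for j in range(0, W, r):
            R = img[i:i + r, j:j + r].astype(float)
            best = None
            for k, D in enumerate(domains):
                var = ((D - D.mean()) ** 2).sum()
                s = 0.0 if var == 0 else ((D - D.mean()) * (R - R.mean())).sum() / var
                s = float(np.clip(s, -0.95, 0.95))   # keep each map contractive
                o = R.mean() - s * D.mean()
                err = ((s * D + o - R) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            codes.append(best[1:])
    return codes

def decode_fractal(codes, shape, r=4, n_iter=8):
    """Iteratively apply the stored maps to an arbitrary start image."""
    H, W = shape
    img = np.full(shape, 128.0)
    for _ in range(n_iter):
        domains = []
        for i in range(0, H - 2 * r + 1, 2 * r):
            for j in range(0, W - 2 * r + 1, 2 * r):
                d = img[i:i + 2 * r, j:j + 2 * r]
                domains.append(d.reshape(r, 2, r, 2).mean(axis=(1, 3)))
        new = np.empty_like(img)
        idx = 0
        for i in range(0, H, r):
            for j in range(0, W, r):
                k, s, o = codes[idx]
                new[i:i + r, j:j + r] = s * domains[k] + o
                idx += 1
        img = new
    return img
```

The exhaustive domain search in the encoder is what makes fractal encoding slow, while the decoder only replays the stored maps, which matches the asymmetry discussed above.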
For the constructive task of such fractal objects and their analytical research today in mathematics, various systems of representation and the encoding of real numbers are widely applied: with the finite and infinite, constant and variable alphabet, with natural and the whole negative, rational and irrational bases, etc. These are s-s and non-s-s, Qrepresentations, chain fractions, real numbers in the series Cantor, Lurot, Engel, Sylvester, Ostrogradsky-Sierpinski-Pierce, etc [25][26][27][28]. The creation of a new encoding system for the fractional part of a real number significantly expands the range of such objects, which are relatively simply formally described and studied. In our work, we use an encoding system with a finite alphabet to build a new mathematical model that will be used in fractal image compression. The model is based on a continuous class of continuous functions that depend on a finite set of parameters and have fractal properties. Let A 5 = {0, 1, 2, 3, 4} be the alphabet of the five-digit numeral system, . be a space of the sequences of elements of the alphabet and let Q * 5 = q ij , i ∈ A 5 , j ∈ N be an infinite stochastic matrix with positive elements q ij > 0 and the following properties: max q 0j , q 1j , q 2j , q 3j , q 4j = 0 (continuity). The known theorem [25] states that for any x ∈ [0; 1] there exists a sequence (α k ) ∈ L such that where The representation of the number x in the form of series (1) is called Q * 5 -expansion while the symbolic notation If all the columns of the matrix q ij are identical, i.e., q ij = q i for any j ∈ N, then Q * 5 -representation is called Q 5 -representation. If q i = 1 5 is for all i ∈ A 5 , then the Q 5representation is a classic a five-digit representation. Each irrational number has a unique representation, but some rational numbers have two representations. These are numbers with the representations ∆ (4) . By agreement, we use only one of two representations of a rational number containing period (0). Then, we have the uniqueness of the Q * 5 -representation of a number. The concepts of cylinder and Hausdorff-Besicovitch dimension are important for image geometry. We will revisit them [25]. The cylinders have the following properties: (1) Cylinders of rank m are a union of cylinders of rank m + 1, i.e., (3) Basic metric ratio: Let (M, ρ) be a metric space, E a bounded subset of M and d(E) denote the diameter of the set E. Let Φ M be a family of subsets of the space M such that for an arbitrary set E ⊂ M and, for each number ε > 0, there exists an at most countable ε-covering {Ej} of E E j ∈ Φ M , d E j ≤ ε . Let α be a positive number. where the infimum is taken over all at most countable ε -coverings Definition 3. The positive number is called the Hausdorff-Besicovitch dimension of the set E. Using Q * 5 -representation of numbers, denote the function by the equality: where (g n ) = (g 0n , g 1n , g 2n , g 3n , g 4n ), n ∈ N is a sequence of vectors such that: (ε n ) is a sequence of positive real numbers with 0 ≤ ε n ≤ 1. (4) is a singular Cantor-type function with a Hausdorff-Besicovitch fractal dimension log 5 4; (5) the graph of the function is symmetric about point 1 2 ; 1 2 . Results We describe the method of fractal encoding of images using this class of nonmonotonic singular functions. The image is placed in a single square. 
We define two vectors Q and G such that: The first vector divides our image along the abscissa axis into five Q-cylinders (rank areas) that do not intersect (Figure 6), the length of q 0j , q 1j , q 2j , q 3j , q 4j respectively. The second vector on the y-axis specifies a set of G-cylinders that can overlap (Figure 7), the length of g 0k , g 1k , g 2k , q 3k , q 4k each, respectively. We take two identical images of 1 × 1 size. The first image is divided into Q-cylinders, and the second image is divided into G-cylinders (they describe similar parts and are used in constructed decoded images). For each method of search of Q-cylinders, the nearest Gcylinder for which the distributed features can be approximated by the distribution of a rank area is selected. For the best approach to the G-cylinders, use the official conversion, which helps to change the brightness and contrast. If the desired approximation is not achieved, each cylinder of the rank area again extends to the appearance of the corresponding parts and the process is repeated. The numbers of the Q-and G-cylinders that were used in the encoding process and helped to obtain the desired results, together with the coefficients of affine transformations, are written to the file. These results will then be used in decoding. Therefore, in the original image there is a search for self-similar areas, which are then taken as the basic elements of the fractal image. The latter is approximated by fractal transformations, and then we obtain an image in the form of Formula (2), which reflects the transformation. T 0n (x, y) = (q 0n x; g 0n ), T in is a compression image, so the {T 0n , T 1n , T 2n , T 3n , T 4n } family is an iterated function system. We show that the set F is a graph of a function continuous on [0; 1]. We give a geometric interpretation of the construction of this set. Let the graph of the function F 0 (x) be broken, connecting series points: (0; 0), (q 01 ; g 01 ), Graph of the function F 1 (x) is broken, connecting series points: (0; 0), (q 01 q 02 ; g 01 g 02 ), q 01 (q 01 + q 11 q 02 ; g 01 + g 11 g 02 ), q 01 + q 11 (F 0 (x)). These points are uniquely determined by the vectors Q and G, and they belong to the interior of the square [0; 1] × [0; 1]. We say that the transformation T is performed over the segments of the broken F 0 (x). With each of the segments of the obtained broken F 1 (x), which are not segments of constancy, we do the same (we perform the transformation T on them). Continuing this process, we obtain a functional sequence (F n (x)) such that: Thus, according to Banach's theorem (the theorem was formulated and proved in 1922 by Stefan Banach and is one of the most classical and fundamental theorems of functional analysis), there is a class of mappings-these are compressive mappings. A well-known statement: if we repeatedly apply the map T to the image F 0 so that F i = T(F i−1 ), then for i → ∞ we get the same image, regardless of the initial F 0 : The image F is called the original image of the transformation T. Let q ij = q i = 1 5 and ε n = 1. Then, The graph of the function is constructed according to the following algorithm. First, we build a unit square as a basis. We fix points (0; 0), 1 5 ; 3 4 , 2 5 ; 2 4 , 3 5 ; 2 4 , 4 5 ; 1 4 , (1; 1) according to the coordinates of the vector Figure 9). In the next step, the algorithm is repeated, i.e., each of the four broken lines is replaced by five new broken lines at the appropriate scale. 
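For the special case just introduced (q i = 1/5 and the fixed points (0; 0), (1/5; 3/4), (2/5; 2/4), (3/5; 2/4), (4/5; 1/4), (1; 1)), the broken-line construction can be written in a few lines: every non-constant segment of the current polyline is replaced by an affinely scaled copy of the initial five-segment polyline, while segments of constancy are left alone. The sketch below is our reading of that verbal recipe, with the coordinates taken directly from the text.

```python
import numpy as np

# Vertices of the initial broken line F0: x_k = k/5, y_k as given in the text.
BASE = np.array([(0, 0), (1/5, 3/4), (2/5, 2/4), (3/5, 2/4), (4/5, 1/4), (1, 1)],
                dtype=float)

def refine(points):
    """Replace every non-constant segment by a scaled copy of BASE (transformation T)."""
    out = [tuple(points[0])]
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dy == 0.0:                       # segment of constancy: keep as is
            out.append((x1, y1))
            continue
        for bx, by in BASE[1:]:
            out.append((x0 + bx * dx, y0 + by * dy))
    return np.array(out)

def fractal_curve(n_iter=3):
    pts = BASE
    for _ in range(n_iter):
        pts = refine(pts)
    return pts

pts = fractal_curve(3)
print(len(pts), pts[0], pts[-1])   # 342 vertices, running from (0, 0) to (1, 1)
```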
The more iterations you make, the more accurate the graph of the function will be ( Figure 10). Increasing iterations is responsible for the accuracy of the graph. Function (2) has an interesting property-if we know the initial sets of numbers, it is easy to calculate the value of the function, but, conversely, when the values of the function are known, the initial set of numbers is difficult to recover because there are many such sets. Another special property of this function is the manifestation of self-similar properties of the graph of the function depending on the parameter at intervals where the function is not constant, i.e., the graph of the function This allows for encoding information faster, thus increasing the efficiency of data transmissions through communication channels. The process of encoding information requires a lot of calculations. Large volumes of iterations are performed to search for self-similar fragments in the image. Therefore, compressing a single image takes a long time. In this case, the more iterations there are to make, the more accurate the result will be. Decoding a fractal image is also an iterative process, although it takes little time, as all such objects are searched for in the encoding process. All you need to do is refine the fractal codes by transforming them into the original image. However, if you do not know the image encoding algorithm, the decoding process will be very cumbersome and time consuming. Here are some examples of fractal encoding with given initial sets of digits for a given image f : For Q-cylinders, the digits 1 and 3 are allowed, i.e., the image will be divided into cylinders ∆ 1 , ∆ 3 , ∆ 11 , ∆ 13 , ∆ 31 , ∆ 33 , . . .. Then, for the second identical image, as a result of affine transformations, the brightness distribution will contain G-cylinders with numbers 1 and 3, i.e., the digits of the set C 2 ≡ C[Q * 5 ; {1, 3}] are a set of Cantor type it is a set of not deleted points; it is possible to define the relation of this set to unit interval through the general length of the removed subintervals) ( Figure 12); 3. For Q-cylinders, the digits 1,2 and 3 are allowed, i.e., the image will be divided into cylinders ∆ 1 , ∆ 2 , ∆ 3 , ∆ 11 , ∆ 12 , ∆ 13 , ∆ 21 , ∆ 22 , ∆ 23 , ∆ 31 , ∆ 32 , ∆ 33 , . . .. Then, for the second identical image, as a result of affine transformations, the brightness distribution will contain G-cylinders with digits 1,2 and 3, i.e., the image (Figure 13), where C 3 is a set of Cantor type and M is a discrete subset of the set of five-rational numbers: For Q-cylinders, the digits 0 and 4 or 1 and 3 are allowed. Then, for the second identical image, as a result of affine transformations, the brightness distribution will contain G-cylinders with digits 1,2,3,4, if the digits 0 and 4 or 1 and 3 are allowed for Q-cylinders, then the image of the set of Cantor type C 5 ≡ C[G * 5 ; V n ] (Figure 14), where is a set of Cantor type of Lebesgue zero measure. This function class model allows for the development of a new fractal image encoding method for data transmission over communication channels, storage on media and subsequent use in geographic information systems. The obtained results can be effectively used for the computer processing of aerial photographs in GIS of various functional purposes. A promising development of this mathematical model is the consideration of the problems of its practical application in GIS of environmental monitoring for compression and archiving of geospatial data. 
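The "easy to evaluate, hard to invert" property noted above can also be seen for this special case: given the base-5 digits of x, the function value follows from a short recursion over the digit-wise increments g = (3/4, −1/4, 0, −1/4, 3/4) read off the fixed points above, whereas recovering the digits of x from a function value is ambiguous because the function is nonmonotonic. The snippet below is our own reading of that special case (q i = 1/5, ε n = 1), not a general implementation of the paper's Q*5 machinery.

```python
G = (0.75, -0.25, 0.0, -0.25, 0.75)   # increments g_k between consecutive fixed points
BETA = (0.0, 0.75, 0.5, 0.5, 0.25)    # partial sums: f(k/5) = BETA[k]

def base5_digits(x, n=24):
    """First n base-5 digits of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 5
        d = min(int(x), 4)
        digits.append(d)
        x -= d
    return digits

def f_value(x, n=24):
    """Evaluate f at x from its digits: f(0.a1 a2 ...) = BETA[a1] + G[a1]*f(0.a2 a3 ...)."""
    val = 0.0
    for d in reversed(base5_digits(x, n)):
        val = BETA[d] + G[d] * val
    return val

print(f_value(0.2), f_value(0.5), f_value(0.37))   # 0.75, 0.5, ~0.575
```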
Discussion This article highlights a new method of image compression, which uses a class of nonmonotonic singular functions with fractal properties and depends on a finite number of parameters. These features allow you to get excellent digital data compression ratios and fast decoding. Another feature is that the size of the data after compression will actually take up less space in the file. This makes it possible to transfer information from satellites to Earth faster and then use it in geospatial systems, for example, for weather forecasting, studying Earth resources, and so on. The disadvantage of this method is the large amount of computation, because to obtain a high degree of compression, you need to perform a large number of conversions, which can degrade image quality. The data we will receive after unpacking may differ from the initial ones, but the degree of difference will not be significant with their further use. Unfortunately, the proposed methods do not completely solve the problem of increasing the speed and efficiency of fractal compression. However, today, more and more scientists are trying to improve the efficiency of existing methods and find new ways to optimize fractal encoding. Conclusions Mathematical models of the effective use of fractal functions for compression (encoding) of raster images are covered. There are already a large number of existing models using functions with a complex local structure, but they also need to be improved. Functions with fractal properties, unlike conventional functions, help to efficiently encode data and solve complex problems in various areas of human activity. Such functions are given by a recursive formula. Their generation takes a long time, which provides a high degree of compression, but is time consuming. Unpacking the image is easier, because the main work has already been done during encoding and it remains only with the help of known fractal codes to return the raster image. The obtained results allow one to create a sufficiently reliable mathematical support for the process of compression of various graphic information and to improve existing methods. We see prospects for further research in constructing a family of functions with fractal properties, using different systems of encoding real numbers, and their application to create reliable and advanced methods of encoding spatial data, their storage, processing and representation. This will create a single information space and provide ample opportunities for systematic analysis of information for effective environmental quality management and ensuring the safety of life.
4,803.4
2021-04-14T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science", "Geography" ]
Research on Community Participation in Environmental Management of Ecotourism

The ecological environment is the material base of the development of ecotourism; ecotourism cannot develop well without a high-quality ecotourism environment. The goal of ecotourism development is to protect the ecological environment, which is also the essential characteristic distinguishing ecotourism from other kinds of tourism. This paper discusses community participation in the environmental management of ecotourism, aiming to improve the awareness of participation and environmental protection among community residents and to establish a mechanism of community participation in the environmental management of ecotourism. In this way, community residents can benefit from ecotourism and, at the same time, the communities gain a strong motive to protect the resources and environment of ecotourism.

Introduction

Ecotourism is sustainable tourism based on ecological principles and sustainable development theory. Its aim is to conserve resources, especially biological diversity, and to maintain the sustainable use of resources, which can bring an ecological experience to travelers, conserve the ecological environment, and generate economic benefit. Ecotourism establishes a harmonious, symbiotic relationship between sightseeing and environmental protection: strict management reduces the negative influence of travel on the ecological environment to a minimum, so as to ensure the everlasting utilization of resources. Ecotourism is very popular with travelers because it emphasizes the natural ecological environment and pays attention to ecological environment protection. As early as 1982, Zhangjiajie National Forest Park, the first forest park in China, was established. Subsequently, ecotourism activities began to appear and develop in China. In particular, relying on the superiority of their natural landscapes, forest parks and nature reserves regard ecotourism as a pillar industry for the sustainable utilization of resources. However, although the development of ecotourism has brought economic benefits to tourism destinations, it has greatly affected the local ecological environment: too many artificial sceneries; the influx of tourists and tourism transport facilities within a short time; and more and more waste water, gas, and residue discharged by industry. All of these have exceeded the tourist environment capacity, resulting in damage and pollution to the ecological environment. Furthermore, the accumulation of garbage and packaging has destroyed the natural landscape and polluted water bodies in tourist areas, leading to water eutrophication. According to an investigation by related organizations of 100 nature reserves at or above the provincial level, 22 percent of the nature reserves suffered environmental damage because of ecotourism development, and 11 percent showed degradation of tourism resources. A good ecological environment is not only the premise and basis of ecotourism but also an important guarantee of national ecological safety. Therefore, in order to develop ecotourism well, it must rest on the ecological environment, which in turn promotes ecotourism development. How to promote the healthy and coordinated development of ecotourism is therefore a new subject for us all.
The essence and connotation of ecotourism

Strictly speaking, to date there has been no uniform, final definition of ecotourism among scholars at home and abroad. The reason is not only that people's understanding and comprehension of ecotourism is continuously deepening, but also that many researchers, developers, and business administrators tailor the definition to their respective needs. Since the 1980s, studies on ecotourism have prevailed all over the world. Ecotourism is the necessary choice of tourism development at a certain phase; it is the best form of sustainable tourism; it is the concrete application of the principles of sustainable tourism in natural areas and certain social and cultural regions.

This article argues that the connotation of ecotourism is mainly embodied in the following aspects. First, the destination of ecotourism refers to natural regions that are subject to less interference and pollution. Second, the progress of ecotourism emphasizes the principle of ecological protection. Ecotourism pays much attention to protection during its development and uses development to promote protection, achieving harmony and unification among economic benefit, social benefit, and environmental benefit. Third, ecotourism is a green industry with high scientific and technological content, which requires multidisciplinary guidance and argumentation from ecologists, economists, and sociologists, and needs to consider the carrying capacity of the ecological environment and tourism resources; it is a unique form of sustainable and responsible tourism that pays particular attention to the continuity of ecology and must not result in environmental destruction or a decrease in environmental quality. Fourth, ecotourism pays much attention to the economic development of tourism destinations and the improvement of the living standard of local residents; the income from ecotourism should not only be used to protect the ecological environment but also benefit the local residents. Fifth, ecotourism gives prominence to the educational function of the ecological environment; through ecotourism activities, the lifestyle and environmental views of tourists can be changed, and their consciousness of protecting resources and the environment can be improved. In all, ecotourism should regard environmental education, science popularization, and the construction of spiritual civilization as its core content, and really become a grand school where people study, love, and protect nature.
Ecological environment is the foundation of ecotourism

The environment is the material base of the survival and development of human society; material production is based on the exploitation and utilization of environmental resources. Tourism is a higher-level human need that meets individuals' spiritual demands. Traveling is an unrealistic wish when social productivity is low or material life is not abundant. With the fast development of economies and the rising standard of living, people began to enjoy a well-off life after solving the problem of food and clothing. Once the basic survival conditions are satisfied, people have surplus money to pay traveling expenses. Meanwhile, after the material living conditions have greatly improved, people need a novel and rich spiritual life to satisfy their desire for enjoyment. The aim of travel is to add a leisurely and carefree mood to human life, such as leisure, entertainment, and vacation; thus the environment of tourism scenic spots is very important. The tourism environment is the one on which the tourism industry relies for its existence; environmental quality is the foundation of the existence of tourism. The development of tourism should be based on graceful sky, water, and mountains, as well as on environmental protection. Only when the environment is well protected and the natural and cultural landscapes are in a virtuous circle can people's desire to travel be inspired and converted into real tourism demand.

The same holds for the development of ecotourism. Ecotourism activities, ecotourism resources, and the ecotourism industry are all based on the ecological environment. There is no ecotourism without a graceful ecological environment; a graceful ecological environment is the material base of ecotourism development. The Chinese have a saying: "With the skin gone, what can the hair adhere to?" Therefore, the ecological environment can be seen as the life source of ecotourism. There is no ecotourism without a high-quality ecotourism environment.
Ecological environment protection is the goal of ecotourism

Ecotourism is the inevitable choice of tourism development at a certain phase and the best kind of sustainable tourism. The ecotourism development model is based on the sustainable development view, and its targets are human development and social progress. Meanwhile, ecotourism development emphasizes harmonious development among economy, society, human beings, and nature, which fully satisfies the criterion of the sustainable development view. Besides, ecological environment protection is also the essential characteristic distinguishing ecotourism from other kinds of tourism. Ecotourism is not only a simple, ecological, and natural tourism pattern, but also one that increases our responsibility for the protection of natural resources through tourism activities. Therefore, the connotation of ecotourism puts more emphasis on the conservation of the natural landscape. It is a high-grade tourist activity and an educational activity with a sense of responsibility. The responsibilities of ecotourism include the protection of tourism resources; respect for the economy, society, and culture of the tourism destination; the promotion of the sustainable development of the tourism destination; and so on. The basic aim of ecotourism is to be close to nature, to protect nature, and to maintain the ecological balance. The most important characteristic of real ecotourism is protection. By developing ecotourism, we can follow the natural law of biodiversity to the greatest extent, fully embody the harmonious and unified ecological relationship between man and nature, put an end to short-sighted economic activities, seek the coordinated unity of economy, society, and ecology, and finally maintain the sustainable development of the resources and environment of tourism.

Community participation is an important character of ecotourism

Ecotourism has three typical characteristics: protection, economy, and community participation, all of which are related to the communities of the tourist destination. The protection characteristic of ecotourism means protecting the environment and resources of the ecotourism destination, including the local communities. The economic characteristic of ecotourism means developing the economy of the local communities of the tourist destination. Both protection and economy are targets of ecotourism development, while community participation is the effective method to realize these targets. Ecotourism implicitly includes the model of community participation in tourism development, whose aim is to make tourism development meet the demands of local development, and to let the communities appropriately set and market the norms of tour and industry operation as well as acquire reasonable financial sources, so as to promote the quality of the resources and environment of the communities. According to ecotourism practice worldwide, community participation is an important part of ecotourism activities in both developed and developing countries.
Community participation provides a powerful motivation for the protection of resources in tourism areas

The local community is an important tie binding protection to economic income and social benefit, and it is at the core of the stakeholders of ecotourism. The existence and development of the local inhabitants are based on the resources and environment of the ecotourism areas. The local inhabitants are both the beneficiaries of environmental optimization and the victims of an ecological environment broken by ecotourism development. Community participation in ecotourism can positively promote the protection of the ecotourism environment. For example, it can prevent the neglect of environmental and social benefits, and prevent phenomena such as acquiring short-term benefit by sacrificing long-term benefit and environmental protection; meanwhile, it can keep the damage caused by tourism development within the limits of assimilation and self-purification of the ecological environment, which keeps the ecosystem stable, so as to resolutely avoid the ecological deterioration possibly caused by unplanned predatory management or over-exploitation.

Community participation can also mobilize all social resources to administer the environment of the ecotourism destination. First, community participation itself is characterized by internally convergent behavior: under organized coordination, all members of the community construct a whole network of environmental protection, which can closely supervise every community member and community unit. Second, the community can maximally call on the local residents to take part in environmental protection activities. Besides, if environmental protection consciousness becomes part of social morality, community members who have destroyed the ecological environment will be disdained by all other members; therefore, the environmental protection regulations of the ecotourism destination are easily recognized as the common criterion of behavior.

Improve the consciousness of participation and environmental protection among community residents

We can carry out widespread community education among the community residents, cultivate their sense and ability of participation, and improve their positive cognition of ecotourism development and their environmental awareness of rights and obligations. Therefore, we should take some measures. First, we should let the community residents know what the local environment and resources mean to them and how many benefits ecotourism protection can bring them, so as to stimulate their enthusiasm for participating in ecotourism protection. Second, we should let them correctly understand environmental problems, establish good environmental awareness, and keep civilized environmental behavioral habits, so as to devote themselves to controlling environmental pollution and improving the ecological environment.
Let the community residents share the benefits brought by ecotourism

In developing ecotourism activities, we should not only protect the fragile local ecological environment but also benefit the local residents; the two are increasingly inseparable. Besides, we should actively absorb the community residents into ecotourism operation, increase their economic income, improve their living conditions, and let them share the benefits brought by ecotourism development. Driven by economic interests, the local residents participating in ecotourism will recognize that their income stems from the development of the tourism industry, and that a good natural environment and the continuation of biodiversity are the preconditions of tourism development; protecting the environment is protecting their own economic interests. Therefore, we should change the habits of the community residents from living on consuming resources to living on developing, marketing, and managing resources, so as not only to relieve the pressure on resources protection but also to form relevant interest groups and joint forces for biodiversity and ecological environment protection.

Propose that the communities change to an environmentally friendly lifestyle

Community residents' daily life and work, economic activities, traditional customs, and so on are closely related to the surrounding ecological environment as well as to the animal and plant resources in the natural reserve. For a long time, the residents of ecotourism destinations have lived on the consumption of natural resources; the abundant natural resources in ecotourism areas have supplied the material basis for their survival. This has caused environmental destruction and pollution and has been unfavorable to biodiversity protection. Therefore, the government should actively spread and apply new environmentally friendly energy sources, such as biogas, solar energy, electric power, and briquettes, and build wastewater treatment plants for communities, so as to form an environmentally friendly lifestyle and realize a virtuous circle of environmental protection and community development.

Establish a decision-making mechanism for ecotourism planning with community participation

The community residents, especially those long engaged in tourism activities, have a more intuitive understanding of the needs of tourists and can advise the planners on the development of ecotourism projects and the distribution of facilities; meanwhile, according to their long history of living in the environment, they can offer useful references for environmental protection in the ecotourism development process. What is more, if they have participated in and accepted the ecotourism project, they will be friendly and provide high-quality service, which will improve the tourists' satisfaction with the ecotourism project and achieve a better travel effect. Therefore, the management departments of tourism development should establish full-time branches of community management, fully consider the interests of the inhabitants, guarantee unimpeded channels of participation, form a bulletin system and a consultation system for significant events during the development and planning of tourism, form a veto system for improper important decisions in some tourism areas, and finally make sure that every tourism decision is discussed and studied by all parties.
Innovate the mechanism of the operation and management of ecotourism resources

An internal incentive mechanism that stimulates the community residents to consciously take part in the protection of the ecological resources they live by is needed to protect the ecotourism environment and realize the sustainable development of ecotourism. The mechanism should not only have the encouraging function of stimulating the community residents to participate in ecological protection, but also achieve the optimal management of the resources and the sustainable development of the community. When the community residents carry out share cooperation in ecotourism, contributing ecotourism resources and labor stock, they become both shareholders and laborers in the development and management of ecotourism. All of this makes the ecotourism resources change from "public" to "common", so that the community residents become the real masters, actively participate in ecotourism decisions, and consciously maintain the ecotourism resources they live by. The basic aim of share cooperation in ecotourism is to arouse the enthusiasm of the community residents for participation, so as to achieve the aim of ecological environmental protection.

Conclusions

The ecological environment is the necessary condition and foundation of ecotourism development. Community participation in ecotourism favors ecotourism environmental protection, and the level of community participation in the environmental management of ecotourism depends on the self-development of the community. Therefore, effective measures should be carried out. First, the government should supply teaching and training services, so as to improve the residents' sense and ability of participation. Second, we should establish the mechanism of community participation in the environmental management of ecotourism and let the community residents benefit from ecotourism, so as to arouse their enthusiasm for participating in ecological environmental management and consciously maintaining the resources and environment of ecotourism, and finally achieve the aim of ecological environment protection. Third, we should set up laws and regulations representative enough to influence decisions and to ensure the rights of community residents to participate in ecotourism and community development, so as to realize the legalization and institutionalization of these rights.
3,908.8
2009-02-17T00:00:00.000
[ "Economics", "Environmental Science" ]
Hot Pressing of Electrospun PVdF-CTFE Membranes as Separators for Lithium Batteries: a Delicate Balance Between Mechanical Properties and Retention

A systematic investigation of the effect of hot pressing on poly(vinylidene fluoride) (PVdF)-chlorotrifluoroethylene (CTFE) electrospun membranes, intended as separators and substrate for the electrolyte of lithium batteries, has been carried out by means of dynamic mechanical analysis. Absorption and desorption gravimetric measurements and infrared spectroscopy have also been performed. In particular, the mechanical properties of the membranes have been measured as a function of hot pressing temperature and pressure in the ranges 20-120 °C and 0-4 kbar, finding an increase of about 30 times in the tensile modulus after pressing at the highest temperature and pressure. Moreover, the conductivity of hot pressed membranes swelled by an ionic liquid, measured as a function of temperature between -60 and +70 °C, practically superimposes on that of the unpressed membrane.

Introduction

Lithium-ion batteries are the state-of-the-art devices for the storage of energy, as they possess an energy density (210 Wh kg⁻¹, 650 Wh l⁻¹) which exceeds any rivalling technology by a factor of at least 2.5 [1]. Despite their large availability at the commercial level, many aspects of lithium batteries could be improved, for example by using electrodes operating at higher voltage or by avoiding liquid and flammable electrolytes. One of the improvable parts of Li-ion cells is the microporous polyolefin separator, which is extremely expensive and has poor wetting capability due to its small pore size and porosity. In order to circumvent these drawbacks, electrospun polymer membranes have been proposed as innovative separators [2-11]. Among the few polymers that meet the requirements for obtaining good polymer electrolytes, poly(vinylidene fluoride) (PVdF) and its copolymers have been found to be very promising, as they have good electrochemical stability and affinity to electrolyte solutions. However, as-prepared electrospun membranes are very soft and difficult to handle because of their extremely low thickness (in the range of 20 µm) and low mechanical modulus. In order to avoid such problems, hot pressing of the polymer electrospun membranes has been proposed, but only sporadic studies are available. He et al. [12] studied poly(vinylidene fluoride-trifluoroethylene) electrospun membranes. These authors reported the formation of the β phase of PVdF during the electrospinning process and an increase of the storage modulus by a factor of 2-3 after hot pressing. However, the details of the hot pressing procedure are not given. A broader investigation was reported by Na et al. [13] on PVdF electrospun membranes subjected to a hot rolling process. Na et al. observed a retention of ~400% for the electrospun membrane, which reduces to a value of 100% when rolling is performed at 100 °C and becomes vanishingly small for higher rolling temperatures [13]. Conversely, the tensile modulus increases as the rolling temperature is increased. These authors were mainly concerned with the mechanical properties of the membranes, because the membranes were intended as filtration systems. The same authors extended the investigation to some blends of PVdF with other polymers, such as polycarbonate and poly(methyl acrylate), confirming the better mechanical properties of hot rolled membranes [14]. Choi et al. [8]
reported an investigation of the physical properties of PVdF fibers before and after a thermal treatment at 150 °C. The tensile strength and elongation at break, as well as the tensile modulus, were notably improved by the thermal treatment. Lee et al. [5] sandwiched two poly(vinylidene fluoride-hexafluoropropylene) and PVdF electrospun membranes between the anode and cathode of a lithium battery and performed hot pressing at 70 °C (at an unknown pressure). Then, dried cells were immersed for 12 h in a 1 M LiPF₆ solution in ethylene carbonate/dimethyl carbonate/diethyl carbonate (1:1:1, w/w/w). The ionic conductivity of the cell after hot pressing was the same as that of the cell before hot pressing (~6·10⁻² S cm⁻¹) when measured at room temperature and above, while it was slightly lower for T < 290 K [14].

The aim of the present paper is to perform a systematic investigation of the effect of hot pressing, mainly on the mechanical properties of electrospun membranes intended as separators and substrate for the electrolyte of lithium batteries. To the best of our knowledge, such a detailed investigation as a function of hot pressing temperature and pressure has never been reported. The electrospun membranes were prepared following a well-established procedure by means of a homemade apparatus described in Ref. [7], using a solution prepared at room temperature by dissolving PVdF-CTFE in acetone:N,N-dimethylacetamide (DMAc) 70:30 v/v at a concentration of 13% w/v. The typical thickness of the electrospun membranes is in the range 20-50 μm. Pressing was performed in a rectangular die with dimensions of 6 × 40 mm². The die can be heated by an electric resistance. Each membrane was pressed for 10 minutes at selected temperatures (20, 50, 80 or 120 °C) and pressures (0, 0.4 or 4 kbar). Infrared spectroscopy measurements were performed by means of an Agilent Cary 660 spectrometer, equipped with a ceramic source, a KBr beamsplitter and an MCT detector. The spectral resolution was 0.5 cm⁻¹ [15]. Absorption and desorption of LP30 (a commercial 1 M LiPF₆-based electrolyte solution) by the membranes were measured by means of a Mettler Toledo AT261 balance placed in a glove bag with a flowing argon atmosphere in order to prevent contamination of the liquid by water. The absorption was obtained by dropping LP30 on the membrane and removing all the excess liquid by means of optical paper. The mechanical properties of the membranes were measured by means of a Perkin Elmer DMA 8000, performing dynamic mechanical analysis on small membrane pieces 4-6 mm wide and 10-12 mm long in the so-called "tension configuration" [16-19]. The storage modulus, M, and the elastic energy dissipation, tan δ, were measured at 10 Hz, between -100 and 100 °C, with a temperature rate of 4 °C/min. As a preliminary check of the effect of hot pressing on their electrochemical properties, the membranes were swelled by 1-butyl-1-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR₁₄-TFSI), which is one of the most promising ionic liquids studied as electrolytes in lithium batteries [20]. The conductivity of the samples was determined by acquiring impedance spectra at each temperature with a sinusoidal voltage of 10 mV rms in the frequency range 200 kHz-1 Hz. The samples were positioned in a sample holder between stainless steel disk electrodes 100 micrometers apart and having a diameter of 0.8 cm. The resistance of the sample was determined by extrapolating the high-frequency tail of the spectra toward the real axis.
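To make the conversion from the measured bulk resistance to conductivity concrete, the following minimal Python sketch applies the parallel-plate cell constant quoted above (electrode separation 100 µm, electrode diameter 0.8 cm). The resistance value passed in is purely illustrative, and the formula σ = d/(R·A) is the standard one for this geometry rather than a detail taken from the paper.

```python
import math

# Cell geometry quoted in the text; the resistance value is illustrative.
d = 100e-6                         # electrode separation: 100 micrometers, in m
A = math.pi * (0.8e-2 / 2) ** 2    # disk electrode area (diameter 0.8 cm), in m^2

def conductivity(r_bulk_ohm):
    """Ionic conductivity sigma = d / (R * A), in S/m, where R is the bulk
    resistance read from the high-frequency intercept of the impedance
    spectrum with the real axis."""
    return d / (r_bulk_ohm * A)

# Example: a (hypothetical) bulk resistance of 10 ohm gives ~0.2 S/m (2e-3 S/cm)
print(conductivity(10.0))
```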
The temperature of the samples was controlled by a Tenney (USA) Environmental Chamber Mod. TJR in the temperature range +70/-60 °C.

Infrared spectroscopy

The infrared spectrum of PVdF-CTFE is dominated by the absorptions due to PVdF [21], and indeed all the membranes of PVdF-CTFE 10% pressed at different temperatures and pressures display clear absorptions around 490, 510, 532, 795, 840, 880, 907, 975, 1026, 1068, 1190 and 1278 cm⁻¹ (see Figure 1). The lines centered around 510, 840 and 1275 cm⁻¹ are the fingerprints of the crystalline β phase of PVdF, while the absorptions around 490, 532 and 975 cm⁻¹ are the signature of the presence of the α phase [22]. The other lines are common to both structures of PVdF [22]. Therefore, one can conclude that both crystalline phases of PVdF are present in all membranes. Moreover, it seems that the relative concentration of the α and β phases does not drastically change as a function of heating temperature or pressure, as the relative intensity of the lines is only slightly affected by the hot pressing procedure.

[Figure 2: storage modulus, M, and elastic energy dissipation, tan δ, of the PVdF-CTFE 10% electrospun membrane before any hot pressing procedure (red points); for comparison, the anelastic spectra of a PVdF-CTFE 20% membrane (olive points) and of a pure PVdF membrane (blue points) are also reported.]

The glass transitions of the PVdF and CTFE polymers are clearly detected from the peaks in the tan δ data, around -47 and 6 °C, respectively.

Mechanical properties

For the PVdF-CTFE 10% membrane (red points), at T = 25 °C, M is ~9·10⁶ Pa, and the modulus displays a typical temperature dependence, decreasing as the temperature increases [23]. In particular, M reaches a value of 1.6·10⁸ Pa at -100 °C and 1.1·10⁶ Pa at 100 °C. In order to study the modulus variation as a function of the pressure, p_hp, and the temperature, T_hp, of hot pressing, we performed DMA measurements on all the PVdF-CTFE 10% membranes, obtaining curves of M with the same qualitative dependence on T as those reported in Figure 2. However, the absolute value of the modulus strongly depends on the hot pressing conditions (see Figure 3). Indeed, even after pressing at T_hp = 20 °C and p_hp = 400 bar, M increases by an order of magnitude. The elastic modulus further increases when the temperature and pressure of hot pressing are increased, reaching a value ~30 times that of the as-prepared membrane for T_hp = 120 °C and p_hp = 4 kbar. Figure 3 reports the values of the modulus measured at selected temperatures for membranes subjected to hot pressing at different temperature and pressure conditions; in particular, it shows that the higher the temperature and pressure of hot pressing, the higher the modulus obtained for the electrospun membranes. Figure 4 reports the retention of all PVdF-CTFE 10% membranes. Before hot pressing, the PVdF-CTFE membrane has a retention of ~410%. Heating the polymer without pressing does not alter the retention, while pressing without heating slightly decreases it to ~370%. However, the combination of pressure and temperature further decreases the retention, reaching ~300% for T_hp = 80 °C and p_hp = 400 bar and ~250% for T_hp = 120 °C and p_hp = 4 kbar. It must be noted that, in an absolute sense, the measured values of the retention are extremely high. Indeed, membranes obtained by casting polymers [24] can have retention of the order of 400%, but must be sealed in a closed environment to avoid release of the liquid.
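For orientation, here is a small Python helper computing retention from wet and dry membrane masses. The percentage definition used (liquid uptake relative to dry membrane mass) is the conventional one and is an assumption on our part, as the paper does not state its formula explicitly; the example numbers are chosen only to reproduce the ~410% figure quoted above.

```python
def retention(m_wet, m_dry):
    """Retention in percent, assuming the conventional definition:
    mass of absorbed electrolyte relative to the dry membrane mass."""
    return 100.0 * (m_wet - m_dry) / m_dry

# A hypothetical dry membrane of 10 mg holding 41 mg of LP30 would show:
print(retention(51.0, 10.0))  # -> 410.0 (%)
```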
Retention and release of LP30

In the case of the electrospun membranes, the release of LP30 takes place over very long times. Figure 5 displays the mass variation of wet samples as a function of time. The pristine membrane has a mass variation of only 19% after 24 hours of exposure to flowing argon, and even after 72 hours Δm/m is 57%, so that some LP30 is still left in the sample. In the case of hot pressed membranes, the mass variation is slightly faster; however, the membrane treated at T_hp = 80 °C and p_hp = 400 bar displays a mass variation of 32% after 24 h and of 75% after 72 h.

Conductivity

Figure 6 reports the comparison of the temperature dependence of the conductivity of a PVdF membrane swelled by 1-butyl-1-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR₁₄-TFSI), one of the most promising ionic liquids studied as electrolytes in lithium batteries, and of a hot pressed membrane (T = 50 °C, p = 4 kbar) of the same polymer. The two lines practically superimpose on each other, showing that the hot pressing procedure does not dramatically decrease the conductivity of the system.

Conclusions

The effect of hot pressing on the tensile modulus, measured at various temperatures, of poly(vinylidene fluoride) (PVdF)-10% chlorotrifluoroethylene (CTFE) electrospun membranes has been investigated by means of dynamic mechanical analysis as a function of the pressure, p_hp, and temperature, T_hp, used during hot pressing. An increase of the tensile modulus with increasing p_hp and T_hp is obtained, reaching a value ~30 times the original one after pressing at the highest temperature (120 °C) and pressure (4 kbar). Concomitantly, the retention of the LP30 electrolyte decreases from ~410% at ambient p and T toward ~250% for T_hp = 120 °C and p_hp = 4 kbar, a value which is still compatible with application in lithium batteries. Moreover, even when exposed to an argon atmosphere, the release of LP30 takes place over very long times (of the order of 30-70 hours). The conductivity of the polymer membrane/ionic liquid system is only slightly affected by the hot pressing procedure.
2,951.2
2018-10-11T00:00:00.000
[ "Materials Science" ]
A novel model for malware propagation on wireless sensor networks

The main goal of this work was to propose a novel mathematical model for malware propagation on wireless sensor networks (WSNs). Specifically, the proposed model was a compartmental and global one whose temporal dynamics were described by means of a system of ordinary differential equations. This proposal was more realistic than others that have appeared in the scientific literature since, on the one hand, considering the specifications of malicious code propagation, several types of nodes were considered (susceptible, patched susceptible, latent non-infectious, latent infectious, compromised non-infectious, compromised infectious, damaged, and deactivated), and, on the other hand, a new and more realistic incidence term was defined and used, based on some particular characteristics of the transmission protocol on wireless sensor networks.

Introduction

The integration of wireless sensor networks (WSNs) within the framework of the Internet of Things (IoT) has marked a significant milestone in today's technological society. These networks, composed of autonomous nodes collecting data from the environment, play a crucial role in building intelligent and connected systems. The importance of WSNs in the IoT lies in their ability to provide real-time information, enabling more agile and efficient decision-making in various applications, from environmental monitoring to supply chain management.

However, this technological advancement is not without challenges, and the security of WSNs emerges as an unavoidable priority. In particular, the propagation of malware in these networks poses a serious threat that can compromise the integrity and confidentiality of collected data. Ensuring the security of WSNs, especially with respect to malware propagation, becomes essential to preserve the reliability of IoT systems as a whole. This aspect takes on special relevance in critical environments such as health and infrastructure, where the reliability of information is crucial.

In this context, the development of mathematical models to simulate the propagation of malware on WSNs emerges as an essential tool. These models not only allow a better understanding of propagation patterns but also facilitate the evaluation of security strategies and the implementation of preventive measures. In this regard, several mathematical and computational models have been proposed, and some of them will be reviewed in Section 3. The importance of addressing malware propagation through mathematical approaches lies in the ability to anticipate potential threats and mitigate the associated risks, thereby strengthening the resilience of WSNs. In this work, we will explore these crucial aspects and propose innovative approaches to address emerging challenges in the landscape of WSNs within the context of the IoT.
The main goal of this work is to propose a new model to study the propagation of malware on WSNs, with the aim of addressing some of the deficiencies found in the global models proposed to date. Specifically, the main contributions of this paper are the following:

• A more realistic description of malware propagation on WSNs is obtained by considering: 1) new compartments of devices: susceptible, patched susceptible, latent non-infectious, latent infectious, compromised non-infectious, compromised infectious, damaged, and deactivated; and 2) a novel way to determine the incidence term, based on a new definition of the time unit that takes into account the average routings performed by the different nodes.
• The basic reproductive number is explicitly computed, and some control strategies are derived from its analytical study.

The rest of the paper is organized as follows: in Section 2, the fundamentals of WSNs are introduced. The state of the art related to the design of mathematical models for malware propagation is presented in Section 3. Section 4 is devoted to the mathematical description of WSNs. The specifications of the novel model proposed in this work are shown in Section 5. The explicit and detailed description of the new model is presented in Section 6, and, finally, the conclusions and future work are shown in Section 7.

The fundamentals of WSNs

A WSN is a network composed of several sensor devices or nodes (also called "smart sensors") deployed massively in a specific region with monitoring, wireless communication, and computing capabilities. The main goal of a WSN is to collect and transmit environmental data.

Functionally, nodes are low-power devices equipped with one or more sensors (mechanical, thermal, biological, chemical, optical, magnetic, etc.), a processor, a memory, a power source, and other components necessary for their proper functioning. Since these nodes have limited memory and are usually deployed in hard-to-reach locations (which also complicates their maintenance), they must have radio connectivity to transmit the collected data to a base station. In this sense, it is important to note that one of the most significant limitations of WSNs is energy management in the sensor devices. Therefore, one of their main objectives is energy conservation by optimizing communication and monitoring processes.

Due to their small size, low cost, and ease of deployment, WSNs have several applications in various fields: tracking and surveillance of military targets, environmental monitoring to predict natural disasters, patient monitoring (e-Health), infrastructure monitoring (Industry 4.0), agricultural crop monitoring (Smart Agriculture), etc. [1-3].

The process of transmitting data between the node that collected it and the base station is governed by algorithms that continuously select the most efficient route, considering node limitations such as memory, battery, etc. WSNs can be classified as "multi-hop networks" (in contrast to "single-hop networks"), meaning that information is transmitted from node to node until it reaches its final destination.
The devices deployed in a WSN can be of different types depending on their functions and capabilities. The most numerous class is composed of "motes" or sensor nodes, whose mission is to monitor the environment and transmit that information (in addition to routing data packets from other sensor nodes). Additionally, there are gateway or collector nodes, whose goal is to receive all the information sent by the sensor nodes and allow interconnection between the WSN and a TCP/IP (Transmission Control Protocol/Internet Protocol) network. For this purpose, the assistance of a base station (a data collector based on a common computer or embedded system) is required.

Security of WSNs

The security of WSNs is essential, as many of their uses and applications are related to very sensitive phenomena or situations (monitoring combat zones, disaster management, monitoring critical infrastructure, etc.). Protecting WSNs is challenging, since each node is a potential target for a logical or physical attack. Among logical attacks, some are aimed at monitoring communications by intercepting and modifying data, impersonating the identity of legitimate nodes to inject false information into the network, etc. Privacy concerns are significant in WSNs given the large volume of data generated and transmitted, which could easily be accessed remotely. Physical attacks aim to directly access the nodes to reprogram their operation, manually introduce malicious nodes, intentionally damage deployed nodes, etc. [4,5].

The main security requirements when deploying and operating a WSN are the following:

• Confidentiality: Data monitored by a given sensor should not leak to an unauthorized neighbor node. The key distribution process is fundamental to ensuring the security of the transmission channel.
• Authentication: The user must be sure that the data used in any decision-making process come from a reliable source.
• Integrity: It should not be possible to modify the data collected and transmitted by the sensor devices by, for example, injecting manipulated data from a "malicious" node.
• Timeliness: It is necessary to ensure that the information is up to date at all times, especially when implementing key exchange protocols.
• Availability: It is essential to ensure the availability, within the energy consumption margins, of the largest number of nodes for the longest possible monitoring period.
• Temporal synchronization: To save energy, each sensor must be able to deactivate its transmission/reception capability during certain periods of time.

Cyberattacks on WSNs can be of different types: denial-of-service attacks, Sybil attacks, traffic analysis attacks, etc., and in the great majority of them malicious code plays an important role. The methodology for trying to prevent and counteract these attacks is mainly based on three fundamental types of actions: 1) implementation of defensive measures [6-8], 2) use of cryptographic protocols [9-11], and 3) implementation of key management infrastructures [12-14].
State of the Art: A comprehensive review of malware propagation models in WSNs

Malware stands out as a basic tool in the development of cyberattacks on various systems and/or computer networks. Traditionally, its malicious activity has been focused on devices with sufficient computational resources (processing, communication, etc.), such as computers, smartphones, etc., connected to different types of networks. More recently, its use has extended to devices with much more limited processing capabilities, such as wearable devices or, more specifically, the sensor devices deployed in WSNs [15].

In the realm of WSNs, malware activity primarily revolves around the "reprogramming" of the infected node (for example, exploiting host specifications regarding memory, processing and communication capabilities, or energy consumption). This alteration affects the node's functionalities related to monitoring and/or transmission (modification of collected environmental data, disruption of connections with adjacent nodes, compromise of data packet integrity, etc.) or may even cause permanent damage. Moreover, the propagation process depends not only on the characteristics of the sensor devices but also on the routing protocols implemented in the WSN [16]. As a consequence, the study of the dissemination of different specimens of malware in WSNs is an area of interest that the scientific community has begun to explore in recent years by proposing and analyzing mathematical propagation models.

In the vast majority of works that have appeared in the scientific literature proposing mathematical models to simulate malware propagation in WSNs, ordinary differential equations are used to describe the dynamics. These are continuous and global models, and they usually follow the same framework as models developed to study the spread of biological agents (classical mathematical epidemiology), with minor specific modifications such as the inclusion of new compartments, the consideration of some characteristics specific to WSNs, etc. These studies are inherently theoretical, with the challenge lying in the theoretical demonstration of the stability of the system, and their practical application and efficiency are not thoroughly analyzed. To a lesser extent, individual-based models have emerged, attempting to address some deficiencies of global models, which are accentuated in the context of malware propagation.

In the following, we review models proposed in recent years. The great majority of these models are of a global nature (both deterministic and stochastic), with very few proposals based on the individual-based modeling paradigm.
Noteworthy examples of global models include a review in [17] that examines SI (Susceptible-Infectious) compartmental models based on the Kermack-McKendrick paradigm adapted to the study of malware propagation in WSNs. This study determines that the models proposed to date inadequately consider energy and memory management, the use of authentication schemes, and sensor mobility, among other factors. In [18], a global SEIRS (Susceptible-Exposed-Infectious-Recovered-Susceptible) model is proposed and analyzed, considering nodes that collect correlated information in a certain common monitoring area. Spatial correlation is considered to analyze the dynamics of the computer virus in event-controlled WSNs. A Susceptible-Unexposed-Infected-Isolated-Removed epidemic model is presented in [19], where its qualitative study is shown. In [20], a global SIR (Susceptible-Infectious-Recovered) mathematical model is presented for cluster-based WSNs (differentiating nodes that are grouped in clusters from those that are not). Additionally, an attack/defense game is established between the malware and the implemented defensive elements, obtaining the infection and recovery rates associated with the mixed Nash equilibrium strategy. Other works using game theory (and cellular automata in the description of the dynamics) to study malware propagation in a WSN through SIS (Susceptible-Infectious-Susceptible) and SIRD (Susceptible-Infectious-Recovered-Damaged) compartmental models can be found in [21] and [22], respectively. In [23], a global SIRS-L (Susceptible-Infectious-Recovered-Susceptible-Low energy devices) model is proposed and qualitatively analyzed, in which sensor nodes can recharge energy and a new compartment, L, is considered for devices with low energy levels. A very similar work by the same authors, analyzing a SISL model, can be found in [24]. This compartment, representing nodes with a low energy load, is also considered in [25], where a SILRD-type (Susceptible-Infectious-node with Low energy level-Recovered-Dead) model is proposed and analyzed, such that the energy depletion due to the malware's action is taken into account. In [26], a SIR model with a nonlinear incidence term and a sigmoid recovery rate is proposed and studied from a qualitative perspective, determining the most efficient control strategies. A similar study on a SEIR model is presented in [27], where the Pontryagin maximum principle is used to determine the optimal control strategy. In [28], a qualitative analysis is conducted on a SIRS model considering two types of recovered nodes: those with total immunity and those that have recovered from infection but can be reinfected. In [29], a purely theoretical study proposes a SEIRS-V model (spreading of a computer worm in a WSN) that includes the immunized compartment (V) and explicitly calculates its solution using the homotopy perturbation method. In [30], a SEIQRV (Susceptible-Exposed-Infectious-Quarantined-Recovered-Vaccinated) compartmental model is proposed, considering the compartment of quarantined nodes (Q). A qualitative study of the solutions is given, examining the effect of node density and transmission radius on malware spread. In [31], a Susceptible-Unexposed-Infected-Isolated-Removed model is proposed, and its dynamics are described by means of a system of ordinary differential equations, whose qualitative analysis is also presented. In [32], a probabilistic model is proposed on complex networks where the dynamics are defined by a system of stochastic ordinary differential equations; in addition to susceptible nodes, two
types of infected nodes (with high battery level and low battery level) and "secured" nodes are also considered. In this work, a theoretical study of stability in probability is performed.

In [33], a qualitative study is carried out, proposing a SCIRS (Susceptible-Carrier-Infectious-Recovered-Susceptible) compartmental model that introduces the compartment of "carrier" nodes, similar to [34]. In [35], the authors propose a hybrid model based on cellular automata and differential equations to simulate the spatiotemporal spread of malware on a WSN. The continuous model is qualitatively studied by analyzing the stability of the equilibrium points obtained.

In [36], a theoretical study is conducted on a SIQPD (Susceptible-Infectious-Quarantined-Patched-Damaged) model, considering that the sensor nodes can move. This model is an improvement on those proposed by several of the same authors in [37,38]. In [39], an SIS model is constructed, considering specific characteristics of the network, such as limited energy use and node density, in the definition of the epidemiological coefficients. In [40], a stochastic SIRD model is designed (the dynamics are described by means of Markov chains) where both the spatial distribution of the nodes and their differences in vulnerability to malware are considered. Another stochastic SIS model has recently been proposed in [41], where a simple derivation of the exact Markov chain for the random propagation of the malicious code is presented.

The mathematical description of the spread of malware using fractional epidemiological models has also been proposed: in [42], a SEIVR (Susceptible-Exposed-Infectious-Vaccinated-Recovered) model on scale-free networks was introduced and analyzed, and in [43] the qualitative analysis of a SEIR model was presented and the optimal control strategy was also discussed. In [44], another fractional-order compartmental epidemic model was presented and analyzed: the population of devices is divided into susceptible, exposed, infected, recovered, and vaccinated devices, and both theoretical and numerical aspects are studied. Moreover, the optimal control problem for a fractional malware propagation model is studied in [45] for the case of underwater WSNs, and the control strategies are improved using machine learning techniques such as deep neural networks and random forests. Furthermore, propagation models based on differential equations have also been proposed to study the behavior of malware and develop the corresponding antivirus software [46]. Some characteristics of the life cycle of the nodes are taken into account in [47], where several compartments are considered: susceptible, susceptible and sleeping, infectious, infectious and sleeping, recovered, recovered and sleeping, and dead. The authors use a system of differential equations to represent the transitions between these states, in such a way that a decision-making problem between the system and the specimen of malicious code is stated as an optimal control problem. In [48], a model for malware propagation in underground and above-ground WSNs was introduced and analyzed. In this compartmental model, the devices are divided into susceptible, exposed, infectious, recovered, and low-energy devices, and each of these compartments is subdivided into underground and above-ground devices. Moreover, three basic features are captured by this model: cross-infection, the infection time, and low energy; and three hybrid control schemes are considered: the recovery scheme, quarantine, and pulse charging. A detailed study of the
conditions for optimal control is carried out from a classical point of view, and deep learning techniques are used.

Following this review of the scientific literature, it can be observed that the majority of works focus on the theoretical analysis of the proposed mathematical model, often overlooking the characteristics of malware propagation on WSNs, which are only tangentially considered within the mathematical description. While some models take some of these factors into account (see, for example, [48]), this is not the norm. Consequently, it seems opportune to design new families of models with the aim of providing the most detailed possible description of this phenomenon: considering new compartments of devices, incorporating characteristics of the data routing process in WSNs into the incidence term, etc.

On the other hand, in the field of individual-based models, the authors of [49] propose an individual-based, discrete, and stochastic SEIRS-F (Susceptible-Exposed-Infectious-Recovered-Susceptible-Failed) model aiming to analyze malware propagation on a WSN and to study the reliability of its components in this situation. In [50], a SITPS (Susceptible-Infected-Traced-Patched-Susceptible) compartmental model is studied, considering, in addition to the classic compartments of susceptible and infected nodes, the compartments of "traced" (T) and "patched" (P) nodes. This is a stochastic individual-based model (based on Markov chains), where the authors analyze the optimal epidemic control strategies. In [51], the authors propose a stochastic SI individual-based model to compute the probability that an industrial IoT device is compromised by an Advanced Persistent Threat (APT). In [52,53], two works based on similar theoretical techniques are presented to analyze a propagation model for false-data malware (false data injection attacks).

Node specifications

In this work, we will consider N deployed nodes, n_1, n_2, ..., n_N, such that p_i = (x_i, y_i) ∈ R² stands for the Cartesian coordinates of the position of the i-th node n_i, with 1 ≤ i ≤ N. Additionally, we will assume that R_i denotes the monitoring radius, so that B(p_i, R_i) = {q ∈ R² : ||q − p_i|| ≤ R_i} is the monitoring area of the i-th sensor node, and that r_i is the transmission radius, with B(p_i, r_i) = {q ∈ R² : ||q − p_i|| ≤ r_i} being the transmission area (see Figure 1), so that any node n_j such that p_j ∈ B(p_i, r_i) will be able to receive data transmitted by node n_i.

Node deployment

Node deployment in the monitoring region R ⊂ R² can be done either in a predetermined manner (placing nodes in predefined locations) or in a non-predetermined manner (placing these devices in locations distributed more or less randomly). When constructing the model, the monitoring region R can be considered as a continuum (see Figure 2(a)) or it can be "discretized" into equal square cells distributed homogeneously (see Figure 2(b)).

Once the nodes are deployed, the directed network defining the topology of contacts, G = (V_G, E_G), is created; it is called the local connectivity (nodal) network. Here, the vertices of the network represent the sensor nodes, V_G = {n_1, n_2, ..., n_N}, and there exists an edge from node n_i to node n_j, e_ij = (n_i, n_j) ∈ E_G, whenever node n_j is located within the transmission area of node n_i: p_j ∈ B(p_i, r_i). If all transmission radii are equal, r_i = r_j for 1 ≤ i < j ≤ N, the local connectivity network is defined by an undirected graph, since e_ij ∈ E_G implies e_ji ∈ E_G. This is the situation we will work with.
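As an illustration of the local connectivity network just defined, the following minimal Python sketch builds the edge set for the undirected case (all transmission radii equal); the coordinates and radius in the example are made up for demonstration.

```python
import math
from itertools import combinations

def connectivity_network(positions, r):
    """Build the undirected local connectivity network of a WSN.

    positions : list of (x, y) node coordinates p_i
    r         : common transmission radius (r_i = r for all i)

    Returns the edge set E_G: an edge {i, j} exists whenever node j
    lies within the transmission area B(p_i, r) of node i.
    """
    edges = set()
    for i, j in combinations(range(len(positions)), 2):
        (xi, yi), (xj, yj) = positions[i], positions[j]
        if math.hypot(xi - xj, yi - yj) <= r:
            edges.add((i, j))
    return edges

# Example: four nodes on a line, transmission radius 1.5
print(connectivity_network([(0, 0), (1, 0), (2, 0), (4, 0)], 1.5))
# contains (0, 1) and (1, 2); node 3 is isolated
```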
Node life cycle

Certain characteristics inherent to the sensor nodes constituting a WSN must be taken into account when designing epidemiological models. Among these, the most important is the management of energy consumption. During the periods when a node is not performing monitoring or communication (reception and transmission) tasks, it remains "dormant" or "asleep", keeping its energy consumption at the minimum possible level. On the other hand, when it is carrying out the aforementioned tasks, the node is in an "active" state, consuming only the energy necessary for the proper performance of these activities.

Although sensor devices are designed to have low energy consumption, if they have only one power source, it will eventually be depleted over time, and the node will cease its operation, becoming "inactive". Some nodes may additionally be equipped with a secondary energy supply source that provides them with energy obtained from the environment, which will have an impact on the lifespan of the device.

Consequently, the activity of a sensor node consists of: 1) monitoring the environment at regular time intervals; 2) sending the data obtained from monitoring to the sink node; and 3) routing the data packets received (collected and emitted by other sensor nodes) toward the sink node.

It can be assumed that activities 1) and 2) are performed sequentially, with little time elapsing between them, and at predefined time intervals. Activity 3) is carried out whenever the node receives a data packet "in transit" to the sink node (see Figure 4). As it is assumed that routing protocols should optimize the distance traveled, both self-transmissions (sending the collected data) and transmissions due to the routing of data packets received from other nodes should be directed toward the neighboring nodes nearest to the sink node.

Specifications of the propagation and infection processes

When designing any mathematical model to simulate the spread of a particular agent in a specific environment, it is crucial to clearly define the main characteristics of the propagation and infection processes. These specifications will determine the variables and coefficients involved in the model, as well as the design of the equations governing the dynamics of the phenomenon. Considerations should include the environment of the spread (a WSN, in this case), the properties of the agent being propagated (malware, in our case), and the actors that can host this malicious code (the sensor nodes of the network).

Propagation process

Regarding the propagation process, the following assumptions will be made:

1) At the onset of the outbreak, there will be a single infectious node capable of spreading the malware throughout the network (obviously, it will not be isolated).
2) The malicious code will propagate from one sensor node to another, utilizing legitimate communications established between them as a result of their activities (data transmission and data routing).This minimizes the chance of detection by potential security measures implemented in the WSN.It is noteworthy that, in this case, a "proper contact" can be defined as any transmission initiated by an infectious node, and whose recipient is a susceptible node, not vice versa.That is, a transmission initiated by a susceptible node toward an infectious node (even if the infectious node sends a data reception confirmation back to the susceptible node) will not be considered as a proper contact.It is assumed that the presence of malicious code embedded in these confirmation transmissions would be easily detected.3) Direct spread will always occur toward nodes within the transmission range of the infectious node.If, for example, n i is an infectious node and n j B (n i , r i ), then the probability of the malware being directly transmitted (not through intermediary nodes belonging to any path connecting them) from n i to n j is zero.However, during each time unit, the code initially transmitted from n i could reach nodes that are not within its transmission range, thanks to routing.4) Spread to nodes within the transmission range (neighbor nodes) could be of two types, depending on the specifications of the malicious code: (i) Unrestricted spread: The malware specimen lacks the ability to know the state of neighboring nodes (due to technical or other issues) and, consequently, attempts to spread indiscriminately.(ii) "Intelligent" spread: The malware specimen has the ability to fully or partially know the characteristics of neighboring nodes and can, consequently, decide to spread only to those sensor nodes of interest. 
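As a simple illustration of the two spread modes described in item 4, the following Python sketch (with hypothetical names) returns the neighbors a malware specimen would attempt to reach; the predicate used in the "intelligent" mode is a placeholder for whatever node characteristics the specimen is assumed to observe.

```python
def spread_targets(neighbors, mode, is_of_interest=None):
    """Return the neighboring nodes a malware specimen attempts to reach.

    neighbors      -- nodes inside the infectious node's transmission area
    mode           -- 'unrestricted' or 'intelligent'
    is_of_interest -- predicate used only in 'intelligent' mode (placeholder)
    """
    if mode == "unrestricted":
        # The specimen cannot inspect neighbor states: it tries every neighbor.
        return list(neighbors)
    if mode == "intelligent":
        # The specimen filters neighbors by some criterion it can observe.
        return [n for n in neighbors if is_of_interest(n)]
    raise ValueError(f"unknown spread mode: {mode}")
```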
Infection process Regarding the infection process, the following scheme will be followed: 1) When the malware specimen reaches a node, it has to "bypass" the security measures implemented in it so that if a intrusion attempt is detected, it is blocked, and the malware does not infect the sensor node (keeping it in the "susceptible" state).2) If security measures cannot detect the intrusion, then the malware infects the node, which becomes a "latent" state.The malware evaluates the utility of the host for its purpose.If the malicious code determines that the infected node is not of interest, it tries to spread to a neighboring node.During this period (where the malicious code attempts to infect another device), the host node will be in the "infectious-latent" state and will return to the susceptible state after the malware has successfully spread.If, on the other hand, the specimen of malware determines that the infected node is useful, it decides on the type of attack to perform: carrying out malicious activity without physical harm to the node (for example: malicious manipulation of collected data or data in transit), or physically damaging the node.In the first case, the node is to be in the "compromised" state, while in the second case, the node is in the "damaged" state.3) During the malicious activity, the malware may have the ability to spread (in which case the node is in the "compromised-infectious" state) or simply limit itself to carrying out malicious activity in the node itself (state "compromised").4) The security measures implemented in the network and in the node could detect the presence of malware, and in this case, they would evaluate if it is possible to eliminate the malicious code. If possible, the node would be "free" of malware, and its state would transition to "patchedsusceptible." Otherwise, (when countermeasures could not eliminate the malware), the node would be disconnected from the network, transitioning its state to "deactivated."5) Finally, if the malware activity in the infectious or compromised node is not detected, it will continue its operation until it ends.At this point, the node will return to the susceptible state.The duration of this period (infectious or compromise period) will be variable obviously, the greater the activity and the longer the period in which this malicious activity is carried out, the greater the probability of being detected). The state set As is previously discussed, the characteristics of the propagation and infection processes of the malware specimen, as well as the particularities of both the nodes and the WSN as a whole, define what is called the state set X.This is a finite set whose elements are all the possible states in which each node, at each step of time, can be found (susceptible, exposed, infected, compromised, immunized, etc.).Thus, in general, X = {x 1 , x 2 , . . ., x m }. 
In the case of designing global models, we can work with two types of variables depending on whether we consider the discretized monitoring region or the continuum monitoring region.Thus, in the first case, many variables will be defined as there are states and the number of tessellations in the region R: where s k (t) ∈ X is the state of the k-th sensor node at time t.In the second case, that is, if we consider the region R as a continuum, the variables will be defined as follows: Note that if we work with the discretized region, it is obviously possible to define "global" variables: In this work, it is assumed that the node population remains constant not only globally but also in each of the possible cells into which R is divided so that: ) ) Taking into account all these considerations, the possible states in which any node may be are the following: • Susceptible, S : The node is "free" of malware and either has never been infected before, or having been infected, such intrusion was not detected.• Patched susceptible, S P : The node is "free" of malware, having been infected at some previous time when security measures successfully detected and eliminated the malicious code.• Latent (non-infectious), L: This is an infected node in which the malware is determining what activity to perform based on the characteristics of the node.• Latent infectious, L I : The node will not be attacked by the malware but is being used as a transmission vector for its spread through the network (it is infectious).• Compromised (non-infectious), A: A node that is infected and is being attacked without physical damage.• Compromised infectious, A I : An infected node that is being attacked without physical damage and, at the same time, is serving as a transmission vector for the spread of malware to neighboring nodes.• Damaged, D: This is an infected node that has been attacked by malware, causing physical damage that prevents its operation.• Deactivated, Q: An infected node in which the malware has been successfully detected but could not be eliminated, so it has been disconnected from the rest of the network. Consequently, the state set is: (5.7) In Figure 5, the flow diagram representing all the transitions between states is shown. Temporal unit The correct definition of the temporal unit is a key factor in the model development as all epidemiological coefficients (and equations) depend on it.In our case, as is mentioned earlier, the milestones that determine the propagation process are the legitimate transmissions made by nodes, both of the data collected by themselves during the monitoring process and the data packets they receive from other nodes and have to route to reach the sink node.Therefore, we believe that the notion of a temporal unit should strongly depend on the number of such transmissions, specifically the number of own transmissions emitted by a sensor node.In this sense, given a number of monitorings (and subsequent transmissions) c, we define the temporal unit as the period of time during which c own transmissions of a sensor node occur (see Figure 6). 
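Before moving on, the state set and the notion of a temporal unit can be summarized in code. The sketch below (Python; the labels simply mirror the notation used in the text) encodes the eight states introduced above and the number c of own transmissions that defines one time unit.

```python
from enum import Enum

class NodeState(Enum):
    """The eight states of the model's state set X."""
    SUSCEPTIBLE = "S"               # never infected, or infection went undetected
    PATCHED_SUSCEPTIBLE = "S_P"     # malware detected and removed earlier
    LATENT = "L"                    # infected; malware still deciding what to do
    LATENT_INFECTIOUS = "L_I"       # not attacked, used only as a transmission vector
    COMPROMISED = "A"               # attacked without physical damage, not spreading
    COMPROMISED_INFECTIOUS = "A_I"  # attacked and spreading to neighbors
    DAMAGED = "D"                   # physically damaged, out of operation
    DEACTIVATED = "Q"               # detected but not cleanable, disconnected

STATE_SET = list(NodeState)  # X = {S, S_P, L, L_I, A, A_I, D, Q}

C_OWN_TRANSMISSIONS_PER_UNIT = 3  # example value of c, as in Figure 6
```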
Additional considerations In the novel global model that is proposed in the following section, some additional assumptions will be made, derived from what has already been previously established, and the nature of global models: • Transmissions carried out by each node (whether own or associated with the routing process) will be directed toward all nodes in its neighborhood.• The number (average) of own transmissions of each node per unit of time will be where ⟨k⟩ is the average degree of the network.• The number of routings performed by an arbitrary node per unit of time depends proportionally on the betweenness centrality of nodes and the average length of the shortest paths between nodes.• The path taken by a data packet during a unit of time has an average length l, which will be reduced by one unit after each monitoring.Consequently, during the period of time that lasts one unit of time, the data collected in the i-th (1 ≤ i ≤ c) monitoring carried out during that unit will travel a path of length l − (i − 1) (see Figure 7 ).6.The global model on the continuous monitoring region Transition from susceptible to Latent The mathematical determination of the incidence term is one of the most important (and decisive) tasks in designing a mathematical model to simulate the propagation of malware on a WSN. Considering the previous description of the propagation process, infection can occur when there exist an effective contact (communication between sensor nodes) between an infectious node (either in a latent state, L I , or attacked, A I ) and a susceptible node (whether it is an "original" susceptible or a patched susceptible).In this way, we have: where where k (N) stands for the average appropriate contacts of each node with the rest of the sensor nodes per unit of time, and 0 < q A ≤ q L ≤ 1, 0 < qA ≤ qL ≤ 1 are the probabilities that a suitable contact leads to infection when receiving a transmission from a node in infectious latent state L I or from a node in infectious attacked state A I , respectively.Additionally, it is assumed that qA ≤ q A and qL ≤ q L .Taking into account the above, we have: where K 0 = ⟨k⟩ • c is the average self-transmissions of the node, and K 1 (N) is the (average) number of routings each node manages in each unit of time.Specifically, in this work, we assume: where l − (δ − 1) < L ≤ l − δ and L is the average length of the shortest paths between any pair of nodes in the WSN.Note that C B (n i ) is the betweenness centrality associated to the i-th sensor node.In summary, we will have: .5) Transition from Latent to infected Considering the above, once the malicious code reaches a sensor node, it proceeds to evaluate it to decide what activity to develop: not attack the node and use it as a transmission vector or attack the sensor node with a higher or lower level of "aggressiveness".We can assume that during each unit of time, there is a fraction of nodes, 0 ≤ γ L ≤ 1, that are not attacked, another fraction of nodes, 0 ≤ γ D ≤ 1, are attacked and damaged, and another fraction of nodes, 0 ≤ γ A ≤ 1, which are attacked and used to carry out malicious activity without disabling them.Within these latter nodes, a fraction defined by 0 ≤ ν ≤ 1 will not be used as transmission vectors (infectious nodes).There will also be a fraction 0 ≤ ω ≤ 1 of latent nodes that are detected as infected by security countermeasures, of which another fraction 0 ≤ ρ ≤ 1 will be possible to eliminate the malicious code specimen.Finally, it will be assumed that there will be a small 
fraction (per unit of time), 0 ≤ η ≤ 1, of latent nodes that cannot be classified.Therefore, we will have γ L + γ D + γ A + η + ω + ρ = 1.Additionally, we can make the following assumptions about the numerical value of these epidemiological coefficients: • There will be fewer attacked and/or damaged nodes than non-attacked nodes: γ D + γ A ≤ γ L .Also, there will be many fewer damaged nodes than attacked nodes: γ D ≪ γ A .• The fraction of latent nodes detected by security countermeasures will be very small not only in comparison with these epidemiological coefficients: ω, ρ ≪ η, γ D , γ A , but also in relation to the detection and elimination rates that affect attacked and infectious nodes.• The fraction of nodes in which a decision cannot be made will be very low: As a consequence, we are assuming that: 0 ≤ ω, ρ ≪ η ≪ γ D ≪ γ A ≤ γ L . Transitions from infected to susceptible and/or disabled It is assumed that security countermeasures are constantly monitoring the WSN to detect (and eliminate, if possible) malware.Roughly speaking its detection will consist in searching for suspicious or unusual activities of the nodes.Remember that in infected nodes, the activities carried out by the malware (during monitoring and legitimate transmission periods -to try to go unnoticed-) are the following: • Development of malicious activity in a node without permanently damaging it. • Attempt to spread to other nodes. Depending on the state of the node, different activities will be carried out, and it is reasonable to assume that the more activities are performed, the more probability of detection there will be.In this sense, the minimum detection probability (per unit of time) can be assigned to latent state L nodes, as mentioned earlier, 0 ≤ ω ≤ 1.From here, we will assume the following: ω L I = cω, ω A = 2cω, ω A I = 3cω, where ω L I , ω A , and ω A I are the detection probabilities for latent infectious, attacked, and infectious attacked nodes, respectively. On the other hand, the rate of elimination of malicious code will be assumed to be the same regardless of the state of the considered node: 0 ≤ ρ ≤ 1.Finally, infected nodes where the malware has not been detected and has completed its malicious activity will become, again, susceptible in a fraction that will depend on the state of the node: 0 ≤ ϵ ≤ 1 for attacked nodes and 0 ≤ ζ ≤ 1 for latent infectious nodes.In Figure 8 the transition diagram of the described model is illustrated. The system of ordinary differential equations governing the model dynamics As indicated in Subsection 5.3, the variables used in this model are the following: Then, taking into account the specifications of the model given above, the system of ordinary differential equations that governs its dynamics is: ) where N = 8 i=1 X i (t) for all t.Also, the following initial conditions will be considered: .14) Note that the feasible region of this system is Γ = {(X 1 , . . ., X 8 ) ∈ (R + ) 8 such that X 1 + . . .+ X 8 ≤ N}, so only solutions living in this region will be of interest. Proposition 2. The system determined by equations (6.6)-(6.13)always has an infection-free equilibrium point P * 0 = (X * 1,0 , . . ., X * 8,0 ) defined by the following coordinates: Proof.The equilibrium points are solutions of the system namely: From Eqs (6.20)-( 6.22), it follows that: so the system (6.17)-(6.25) is reduced to the equation N , and the statement is proven.□ Proposition 3. 
If γ D = 0, then the system determined by Eqs (6.6)-( 6.13) has an endemic equilibrium point P * e = (X * 1,e , . . ., X * 8,e ) such that: 1) If ω = 0, it is given by: 2) If ρ = 1, it is given by: where: Proof.Considering Eq (6.23) and assuming that X 3 0, we have γ D = 0, so the system can be written as follows: From Eqs (6.35) and (6.36), it follows that: and from Eq (6.38), it is deduced that either ω = 0 or ρ = 1.Thus: 1) In the first case, when ω = 0, then X 2 = 0, and the system becomes: The first equation is a tautology, so only the last equation remains and the result stated in the proposition is obtained.2) In the second case, if ρ = 1 (with ω 0), then and the system becomes: The first equation is a tautology, and from the remaining equation, through a simple calculation, the result of the proposition is obtained. □ It can be assumed that a 1 α L + a 2 α A 0 and a 1 αL + a 2 αA 0 since α L and α A cannot be zero at the same time (for there to be incidence).Additionally, it can also be supposed that γ L 0 and γ A 0 because otherwise there would be no transition between latent and infective-latents and attacked nodes, respectively. Note that the endemic equilibrium point exists when several very special conditions are satisfied, namely: 1) γ D = 0, meaning that the malicious code specimen is not capable of permanently damaging sensor nodes and rendering them disabled.2) Either 2.1) ω = 0, meaning that security countermeasures are not capable of detecting the presence of the malicious code specimen.2.2) or ρ = 1, meaning that security countermeasures are capable of eliminating the malware specimen from all nodes where it has been detected. Meeting these conditions would simplify the model significantly by eliminating three transitions between compartments.Consequently, this case can be considered as marginal. Calculation and analysis of the basic reproductive number Theorem 4. The basic reproductive number associated with the previously described epidemiological model for malware propagation is given by: Proof.We will apply the next-generation method (see, for example, [54,55]) to compute in an explicit way its basic reproductive number.Thus, considering only the variables corresponding to compartments of infected nodes, the system (6.6)-(6.13)can be written as follows: with V i = V − i − V + i , where: ) F 6 = 0, (6.55) V − 6 = (ϵ + 3cω) X 6 , (6.56) such that F i represents the appearance of new nodes in compartment X i from infection, V + i indicates the number of nodes entering in state X i due to system dynamics, and V − i stands for the number of nodes disappearing from compartment X i due to model dynamics. 
A simple computation shows that: and so that Consequently we have: and the basic reproductive number will be the spectral radius of the matrix F V −1 thus finishing the proof.□ The most relevant epidemiological coefficients of the model are those that appear in the explicit expression of the basic reproductive number since this is a crucial threshold parameter that determines the behavior of the temporal evolution of infectious nodes.Considering the meaning of the concept of the basic reproductive number, the lower the numerical value of this coefficient, the easier it is to control the epidemic outbreak.Consequently, the fundamental goal is to try to reduce the value of R 0 below 1, since it is reasonable to establish the main prophylactic strategies as those aimed at reducing the value of R 0 .Note that, in our case, the basic reproductive number can be written as follows: which means that R 0 will decrease when: 1) The denominator ω + γ L + γ A + γ D increases. These are purely mathematical conditions, and some of them may not have "physical" or epidemiological significance or be impractical in practice.Analyzing them in some detail, we can draw the following conclusions: 1) The denominator ω + γ L + γ A + γ D roughly represents the rate of abandonment from the compartment of nodes in the latent state L. From a practical standpoint, it makes sense to increase ω (1 − ρ), which is the fraction of latent nodes that are detected and become deactivated, thus preventing them from becoming infectious in the future.However, it does not make much sense to increase ωρ or γ A ν, and, certainly, it makes no sense to increase γ L or γ A (1 − ν) since it would be increasing the compartment of susceptibles in the first case (potential future infectives) or the compartments of infectives, L I and A I , in the second case.2) Obviously, if we reduce the number of nodes susceptible to infection, the infectious outbreak will be contained.This could be achieved either by immunizing them (a process not considered in the current model) or by isolating them from the network (which would negatively impact its operation).3) In principle, it would not be possible to decrease the rates γ L or γ A since they correspond to characteristics of the malicious code specimen, and it is assumed that we would not have access to them.The same would apply to increasing the rates ν, ζ, and ϵ.However, it would be possible to influence the detection rate ω of latent nodes, although it would only be practically useful, as discussed in point (1), to increase ω (1 − ρ).4) The contagion probabilities q L , q A , qL , and qA decrease if we enhance the effectiveness of security measures implemented in the nodes.5) Decreasing k (N) implies reducing the number of contacts (direct transmissions and routing transmissions) of the nodes.This is not possible without affecting the proper functioning of the WSN. Numerical simulations We will illustrate the proposed model with some simulations considering different contact topologies: complete network, homogeneous grid network, random network, scale-free network, and small-world network. In Table 1, the values of the epidemiological coefficients considered in all simulations are presented.These are purely illustrative numerical values. 
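As a computational aside, the final step of the proof of Theorem 4 — taking the spectral radius of the next-generation matrix F V^{-1} — can be carried out numerically once the Jacobians of the new-infection terms F_i and the transition terms V_i have been assembled at the infection-free equilibrium. The sketch below (Python/NumPy) shows only this generic step; the 2×2 matrices in the example are placeholders, not the model's actual matrices.

```python
import numpy as np

def basic_reproduction_number(F, V):
    """Spectral radius of the next-generation matrix F V^{-1}.

    F -- Jacobian of the new-infection terms at the infection-free equilibrium
    V -- Jacobian of the transition terms V_i = V_i^- - V_i^+ at the same point
    """
    ngm = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(ngm)))

# Illustrative 2x2 example (not the model's actual matrices):
F = np.array([[0.4, 0.1],
              [0.0, 0.0]])
V = np.array([[0.5, 0.0],
              [-0.2, 0.3]])
print(basic_reproduction_number(F, V))  # ~0.93 for this toy example
```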
It will be assumed that at the initial time, we have the following compartment configuration: which means that all nodes at t = 0 are susceptible except for a single node in the latent and infectious state. In addition, as mentioned earlier, simulations will be performed on WSNs whose network topology is defined by five complex networks of different topologies. For the sake of simplicity, we assume N = 100 and consider the following contact topologies (note that for N > 100, the simulations obtained are similar if the same epidemiological coefficients, global structural indices, and initial conditions are considered): • Complete network, G 1 . • Homogeneous grid network, G 2 . • Random network (ER type with connection probability p = 0.5), G 3 . • Scale-free network (with parameter n = 3), G 4 . • Small-world network (WS type with p = 0.1), G 5 . In Table 2, some of the main structural coefficients associated with these networks are shown. In addition to the previously mentioned values of the coefficients, we will suppose that for each step of time there will be c = 3 monitorings/direct transmissions from each node, and that the length of the path traveled in the WSN by a data packet in each time unit is l = max{L_1, L_2, L_3, L_4, L_5} = 7. Additionally, Table 3 shows the values of the respective contact rates k(N) in the cases under consideration. If the system of differential equations governing the dynamics of the model is numerically solved using the above data (we use Mathematica software for these simulations), the solutions shown in Figures 13-17 are obtained. Note that in all cases the system evolves toward an infection-free equilibrium state. It can also be observed that in two cases (those using the complete network and the random network) there is an initial increase in the number of infected nodes before declining, while in simulations where the WSN has a homogeneous grid, scale-free, or small-world topological structure, the infectious outbreak disappears immediately without any impact on the network. This could have been foreseen simply by considering the data in Table 3: since all simulations were obtained from the same values of the epidemiological coefficients, it is precisely the impact of the network topology on the computation of k(N) that strongly determines the epidemic evolution. In Figure 18, the evolution of the latent compartment in WSNs with contact topologies defined by ER random networks with different connection probabilities p is shown. The corresponding values are presented in Table 4. It can be observed that, as the connection probability used in the random network construction algorithm decreases, the prevalence (number of infected nodes) becomes more "flattened", until a certain threshold value of p -when R_0 is less than 1- below which the number of nodes in the latent state decreases from the initial time. Finally, it should be noted that the definition given for incidence severely undervalues network topologies defined by homogeneous grid, scale-free, and small-world networks, even when the same numerical values are considered for the malware's epidemiological coefficients. Obviously, the effect of propagation cannot be the same as in a complete or random network (constructed using a high probability in the ER algorithm), but, in my opinion, it should not be so slight, especially when empirical evidence shows that the contagion probabilities, q_L and q_A, would have to be multiplied by 10^3 to achieve similar behaviors.
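For readers who wish to reproduce the structural indices reported in Tables 2 and 3, the following sketch (Python with the networkx library; generator parameters are illustrative) computes the average degree ⟨k⟩, the average shortest-path length L and the mean betweenness centrality for the five contact topologies, together with the own-transmission part K_0 = ⟨k⟩·c of the contact rate; the routing term K_1(N) must be added according to Eq. (6.4) and is not reproduced here.

```python
import networkx as nx

N, c = 100, 3  # number of nodes and own transmissions per time unit

topologies = {
    "complete (G1)":    nx.complete_graph(N),
    "grid (G2)":        nx.convert_node_labels_to_integers(nx.grid_2d_graph(10, 10)),
    "random ER (G3)":   nx.erdos_renyi_graph(N, 0.5, seed=1),
    "scale-free (G4)":  nx.barabasi_albert_graph(N, 3, seed=1),
    "small-world (G5)": nx.connected_watts_strogatz_graph(N, 4, 0.1, seed=1),
}

for name, G in topologies.items():
    avg_degree = sum(d for _, d in G.degree()) / N          # <k>
    avg_path = nx.average_shortest_path_length(G)           # L
    mean_btw = sum(nx.betweenness_centrality(G).values()) / N
    k0 = avg_degree * c  # own-transmission part of k(N); routing term K1(N) omitted
    print(f"{name:17s} <k>={avg_degree:6.2f}  L={avg_path:4.2f}  "
          f"C_B(mean)={mean_btw:6.4f}  K0={k0:7.1f}")
```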
Conclusions and future work In this work, following a review of the state of the art regarding mathematical models for simulating malware propagation in WSNs, a novel way of defining incidence has been proposed, which takes into account the average number of routings per unit of time.Based on this, a new propagation model has been designed.Through a detailed analysis of the phenomenon, this model considers more compartments than those employed in other existing models. Taking into account all these compartments (8 in total), the study of stability becomes overly complex, although it is possible to explicitly obtain the expression for the basic reproductive number.Subsequently, an analysis of the basic reproductive number can be performed to determine key containment measures. The proposed model is of a global nature, where the studied variables represent the size or density of the considered epidemiological compartments (for example, the number of infected devices at each step of time).Consequently, it does not take into account the specific characteristics of each of the devices within the WSN (both those related to the processes of propagation and infection, as well as the specific contact topologies).This is a potential limitation of the model that could be addressed by studying the development of individual-based models, which is left as future work.On the other hand, although certain specific aspects and characteristics of WSNs and malware propagation have been considered in this work, further exploration is needed in the ad hoc design of epidemiological coefficients for models simulating malware propagation.Furthermore, as future work, the simplification of the proposed model by using fewer compartments is suggested, thereby enabling an in-depth qualitative analysis. Use of AI tools declaration The author declares he has used Artificial Intelligence (AI) tools in the revision (and improvement) of the English of this article.Moreover, AI tools have not been used for other purposes. Figure 1 . Figure 1.Arrangement of nodes n i and n j along with their respective monitoring and transmission regions. (b)). Example 1 . Figure 3 illustrates the local connectivity network defined by the deployment of N = 100 nodes -all equipped with the same transmission radius -shown in Figure 2. Figure 3 . Figure 3. Local connectivity network associated with the node deployment illustrated in Figure 2. Figure 4 . Figure 4. Timing of the main activities of a sensor node. Figure 5 . Figure 5. Flow diagram representing the transition of states. Figure 6 . Figure 6.Structure of the temporal unit where it is considered c = 3. Figure 7 . Figure 7. Length of paths traveled by data sent during a unit of time. Figure 8 . Figure 8. Flow diagram representing the transition of states between compartments. Figure 13 . Figure 13.Temporal evolution of the compartments in the complete network G 1 . Figure 14 . Figure 14.Temporal evolution of the different compartments in a grid network G 2 . Figure 15 . Figure 15.Temporal evolution of the compartments in an ER random network G 3 with p = 0.5. Figure 16 . Figure 16.Temporal evolution of the compartments in a scale-free network with n = 3 G 4 . Figure 17 . Figure 17.Temporal evolution of a WS small-world network with p = 0.1 G 5 . Figure 18 . Figure 18.Temporal evolution of the latent compartment on different WSNs described by ER random networks. 
X_1(t) = number of susceptible nodes S at time t. X_2(t) = number of patched susceptible nodes S_P at time t. X_3(t) = number of latent nodes L at time t. X_4(t) = number of latent infectious nodes L_I at time t. X_5(t) = number of attacked nodes A at time t. X_6(t) = number of infectious attacked nodes A_I at time t. X_7(t) = number of damaged nodes D at time t. X_8(t) = number of deactivated nodes Q at time t.
Table 1. Numerical values of the epidemiological coefficients considered in the simulations.
Table 2. Global structural indices associated with the complex networks used in the simulations.
Table 3. Contact rates associated with the complex networks used in the simulations.
Table 4. Characteristics of the random networks used in the simulations.
11,975.6
2024-02-22T00:00:00.000
[ "Computer Science", "Mathematics" ]
Comparison of strength-based rock brittleness indices with the brittleness index measured via Yagiz’s approaches Rock brittleness is one of the most significant properties of rock having a major impact not only on the failure process of intact rock but also on the response of rock mass to rock excavation. In fact, the brittleness is a combination of rock properties including not only the uniaxial compressive strength (UCS) and Brazilian tensile strength (BTS) but also density and porosity of rocks. Due to that, the brittleness should be examined very carefully for any excavation projects, i.e., mechanized excavation, drilling and blasting. The aim of this paper is to compare the strength-based brittleness indices with both the rock brittleness index (BIo), directly obtained via punch penetration test (PPT) and also estimated via Yagiz’s approach (BI1) as a function of strengths and density of rocks. For the aim, database including more than 45 tunnel cases are used to compute common rock brittleness indices (BI2, BI3, BI4), different combination of UCS and BTS. Further, these indices are compared with both BIo and BI1 as well as each other. It is found that the BIo and BI1 have a significant relations (ranging of determination coefficients (r2) from 0.69 to 0.88 with strength-based brittleness indices commonly used in practice. Also, based on findings, several rock brittleness classifications are also revised herein. Introduction Brittleness is a key rock characteristic; it is pertinent to predicting rock fragmentation behavior, energy consumption in cutting rock, and the selection of proper cutting geometry considerations in mechanical excavation [1]. While brittleness is typically understood as a concept, there is no universally accepted measure for this rock characteristic; often a combination of rock properties is used define brittleness rather than a single test to make a direct measurement. Brittleness is a material property describing the material's loss of carrying capacity with a small deformation [2]. The brittleness is defined as a lack of ductility [3][4]. Ramsey [5] defined brittleness as the breakage of the internal cohesion of rocks. Obert and Duvall [6] defined brittleness as the fracture of materials at or only slightly beyond the yield stress. Hucka and Das [7] defined brittleness as follows: "with higher brittleness, the following facts are observed: low values of elongation, fracture failure, formation of fines, higher ratio of compressive to tensile strength, higher resilience, higher angle of internal friction, formation of cracks during indentation" [3][4][5][6]. Strength-based rock brittleness indices are commonly used as indirect tools to assess material brittleness. The rock brittleness indices can be quantified as the ration of the indirect Brazilian tensile [7][8][9]. More, the brittleness can be estimated using UCS, BTS and density of rock [10][11][12]. The brittleness indices based on the rock strength parameters are widely used due to the relatively easy acquisition of the uniaxial compressive strength and splitting tensile strength, and these indices have wide use in the grade estimation of rock burst [7,13,14]. On the other hand, rock fragment size distribution [15] and indentation tests have been very useful to examine the rock behavior under the cutters [10][11][12]. Altindag [9] proposed the use of rock strengths to relate brittleness to drillability. 
One of the tests that could be considered for the measurement of brittleness is an indentation test, also called the punch penetration test (PPT). The PPT, originally intended to provide a direct method for estimating the normal load on disc cutters, was developed in the late 1960s as a direct laboratory method for investigating rock behavior under the indenter [8]; however, the test has also been used to evaluate the hardness, drillability, and brittleness of rock, and can be considered for predicting the performance of a Tunnel Boring Machine (TBM) [10,17,18]. Szwedzicki [19] employed the test to measure rock hardness and stated that it could be used for predicting the cuttability of rocks. Ozdemir [20] stated that the test could be utilized for quantifying the brittleness and toughness of rock by using the test output. In this paper, the rock brittleness index measured using the PPT via Yagiz's method is compared with other well-known alternative strength-based indices used for rock engineering and excavation purposes in practice. Data development The data collected from 48 mechanized tunnelling projects are examined to update the data variables and to compare the rock brittleness indices, as summarized in Table 1. The dataset includes 35% sedimentary rocks, 32% metamorphic and 35% igneous. In the dataset, rock brittleness ranges from extremely low to very high based on the classification published by Yagiz [11,12]. During the tests conducted by Yagiz (2009), all intact rock specimens were inspected with the naked eye to make sure that the samples did not have any deformation related to discontinuities of the rock mass. Each test was conducted at least five times to obtain the averaged values reported in Table 1. Strength based rock brittleness and Yagiz's approaches The punch penetration test is one of the methods used for investigating various rock properties such as brittleness, toughness, hardness and drillability by means of different evaluation approaches [10,17-19]. The first test apparatus was designed and the testing procedure given in a paper [21]. Since then, the test has been used for different purposes rather than for one specific rock property; however, it is known that the brittleness obtained from the PPT is a very significant input variable for estimating the excavatability and drillability of rocks [2,10,17,20]. In the punch penetration test, a standard conical indenter is pressed into a rock sample that is cast in a confining steel ring [10]. The load-displacement measurements of the indenter are then acquired with a computer system and can be related to the mechanical cuttability (the energy required for efficient chipping). Yagiz [10] defined the measured brittleness index from the PPT record as BI_o = F_max / P_max and proposed an empirical relation, BI_1, expressed as a function of the UCS, BTS and unit weight of the rock, where BI_o is the measured brittleness index in kN/mm, F_max is the maximum applied force on a rock sample in kN, P_max is the corresponding penetration at maximum force in mm, and BI_1 is the predicted brittleness index in kN/mm. The UCS and BTS are in MPa, and D is the unit weight of the rock in kN/m^3. More information on Yagiz's method for the rock brittleness index can be found in the literature [11]. Since the mid-1960s, dozens of different brittleness indices have been proposed for various applications, especially in rock mechanics; however, the well-known strength-based rock brittleness indices are the subject examined herein for rock engineering and excavation practice, as follows.
Even though BI2 and BI3 are well known, these indices do not have an associated classification used in rock excavation and engineering practice. However, the indices given by Equations 2 and 5 have been examined and relatively well classified by the researchers who proposed them [9,11,12], as shown in Tables 2 and 3. Besides that, Dhal et al. [15] classified rock brittleness based on rock fragment size distribution, as shown in Table 4, which is included herein to draw attention to the rock brittleness classifications available in the current literature. Table 2. Rock brittleness classification for excavatability [11,12]. Data Analyses Several statistical correlations, including simple linear and non-linear regressions, are conducted to obtain the relations between the investigated rock brittleness indices. It should be mentioned that, since rock strength ratios are the main variables for all of these indices except Yagiz's approaches [11], high regression coefficients among some of them are to be expected. Rock brittleness is related not only to rock strength but to a combination of rock properties including density, porosity, mineralogy and quartz content; however, due to the restrictions of the dataset, only the density and strength of rock are considered herein. Moreover, rock density is very closely related to porosity, so either density or porosity should be taken into consideration when examining rock brittleness. After the data were examined and the brittleness indices computed using the dataset from the literature [11], simple regression analysis was used to estimate each rock brittleness index as a function of the others via linear and non-linear regressions, evaluated by the coefficient of determination (R^2), as shown in Table 5. As a result, every possible linear and non-linear regression was applied to the dataset, and the coefficients of determination (R^2) and equations were obtained, as given in Figures 1 and 2. As seen in Table 5, the coefficient of determination between BI2 and BI3 is high (R^2 = 0.97), since both of them are ratios of the UCS to the BTS values of rocks. Good correlations are also obtained between BI1 and BIo on the one hand and BI2, BI3 and BI4 on the other, with determination coefficients of more than approximately 0.70.
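To make the computations behind Table 5 reproducible in outline, the sketch below (Python/NumPy, with illustrative data) evaluates the measured index BI_o = F_max/P_max from the PPT output together with strength-based combinations of UCS and BTS — written here in their commonly cited forms, which should be checked against Equations 2-5 of this paper — and fits a simple linear regression between two of them.

```python
import numpy as np

def brittleness_indices(ucs, bts, f_max, p_max):
    """ucs, bts in MPa; f_max in kN; p_max in mm (punch penetration test output).
    The strength-based indices use combinations commonly reported in the
    literature; they may be numbered differently in a given paper."""
    bi_o = f_max / p_max               # measured PPT index, kN/mm
    ratio = ucs / bts                  # Hucka-Das-type strength ratio
    rel_diff = (ucs - bts) / (ucs + bts)
    altindag = ucs * bts / 2.0         # Altindag-type index
    return bi_o, ratio, rel_diff, altindag

# Illustrative (not measured) data: fit BI_o against the strength ratio.
bi_o = np.array([10.2, 14.8, 19.5, 25.1, 31.0])
ratio = np.array([8.5, 10.1, 12.3, 14.0, 16.2])
slope, intercept = np.polyfit(ratio, bi_o, 1)
r2 = np.corrcoef(ratio, bi_o)[0, 1] ** 2
print(f"BI_o ~ {slope:.2f} * (UCS/BTS) + {intercept:.2f}  (R^2 = {r2:.2f})")
```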
According to the findings, the rock brittleness classification and indices developed and estimated from the PPT are reasonable and acceptable tools for the pre-investigation stage of rock engineering projects to examine brittleness and excavatability.
2,307.2
2021-01-01T00:00:00.000
[ "Geology" ]
Energy-Efficient Power Allocation for Full-Duplex Device-to-Device Underlaying Cellular Networks with NOMA : Full-duplex (FD), Device-to-Device (D2D) and non-orthogonal multiple access (NOMA) are promising wireless communication techniques to improve the utilization of spectrum resources. Meanwhile, introducing FD, D2D and NOMA in cellular networks is very challenging due to the complex interference problem. To deal with the complex interference of FD D2D underlaying NOMA cellular networks, power allocation (PA) is extensively studied as an efficient interference management technique. However, most of the previous research works on PA to optimize energy efficiency only consider the system framework of partially joint combining techniques of FD, D2D and NOMA, and the constraints of optimization problem are very different. In this paper, in order to further improve the energy efficiency of a system, a dual-layer iteration power allocation algorithm is proposed to eliminate the complex interference. The outer-layer iteration is to solve the non-linear fractional objective function based on Dinkelbach, and the inner-layer iteration is to solve the non-convex optimization problem based on D.C. programming. Then, the non-convex and non-linear fractional objective function is transformed into a convex function to solve the optimal power allocation. In this approach, FD D2D users reuse the spectrum with downlink NOMA cellular users. Imperfect self-interference (SI) cancellation at the FD D2D users and the successive interference cancellation (SIC) at the strong NOMA user are considered in the system framework. The optimization problem is constructed to maximize the system’s energy efficiency with the constraints of successful SIC, QoS requirements, the maximum transmit power of BS and FD D2D users. Numerical results demonstrate that the proposed algorithm outperforms the traditional orthogonal multiple access (OMA) in terms of energy efficiency with a higher system sum rate. Introduction Recently, researchers have growing interest in Device-to-Device (D2D) communications underlaying cellular networks because D2D users can reuse the spectrum of cellular users, thus improving the spectral efficiency [1][2][3][4]. Similarly, non-orthogonal multiple access (NOMA) and full-duplex (FD) as significant candidate technologies in 5G can greatly improve the spectrum utilization and achieve a higher energy efficiency for next-generation wireless communication [5][6][7]. However, the combination of the above three will cause serious co-channel interference while improving spectral efficiency. Therefore, a reasonable resource allocation has become particularly important, which can effectively suppress interference while ensuring the system spectral efficiency. On the other hand, for massive machine-type communications (mMTC) scenario in 5G, users' energy is always limited [8,9]. How to improve energy efficiency while guaranteeing the quality of service (QoS) of equipments is also a key problem. D2D communication allows users to transmit directly while using the authorized frequency band of the base station (BS), thus reducing the burden of the BS and save communication resources of cellular BS. Therefore, D2D communications can improve the overall capacity of the mobile communication system. For this reason, D2D communication is considered in cellular networks while causing co-channel interference when D2D users multiplex the resources of cellular networks. 
Resource allocation for D2D communications underlaying cellular networks was researched in [10][11][12][13][14][15]. In [10], the spectral efficiency of D2D users was maximized. Four transmit power control strategies were proposed, ensuring that the interference to the BS should be lower than a threshold. In [11], the system throughput was maximized while ensuring that the co-channel interference remained minimized. The channel allocation, mode selection and power control were also studied. NOMA is a promising solution for reducing the co-channel interference using the successive interference cancellation (SIC) at the receivers in 5G. Nowadays, many researches have studied the resource allocation of D2D joint NOMA communications scenarios [16][17][18][19][20]. The maximization of the uplink energy efficiency and the achieved rate of the D2D communication based on the NOMA system were studied in [16]. The authors proposed a joint power control and sub-channel assignment algorithm to gain the optimal energy efficiency. In [17], the authors investigated the time scheduling and resource allocation algorithms in a NOMA-based D2D communication while maintaining energy efficiency among D2D users without affecting cellular users' energy efficiency. In [18], the authors optimized the power allocation, the resource assignment and the SIC decoding order for the NOMA users to maximize the obtained rate of D2D communication. FD communication allows users to transmit and receive signals using the same spectrum simultaneously, and thus theoretically, the capacity can be twice that of half-duplex (HD) communication with the development of self interference (SI) cancellation technology. The combination of FD and D2D communication joint NOMA has aroused researchers' great interest [21][22][23][24][25][26]. The NOMA vehicle-to-everything (V2X) system was considered in [21], in which the vehicle enabled the D2D transmission mode. As for the Roadside Unit (RSU) selection, a full-duplex communication mode was applied to further improve the performance. The ergodic rate of two vehicles in a specific group was compared. In [22], the cell-center D2D user acted as the relay for the cell-edge D2D user, while the cell-center D2D user can operate in HD or FD communication mode to communicate and form a group of NOMA users with multiple relay nodes. Three schemes supporting D2D-NOMA systems were proposed to derive the outage probabilities. Cooperative NOMA (C-NOMA) in cellular downlink systems was discussed in [23], in which users operated in FD mode and can assist the communication from the BS to poor-channel-quality users. D2D user grouping and power control were studied to improve multiplexing gain. In addition, some researchers focus on the machine learning [27][28][29][30] technique to deal with big data and resolve non-linear problems. In [28], a nature deep Q-Learning algorithm in deep reinforcement learning is proposed to solve the power allocation problem for achieving a higher data rate and energy efficiency of MIMO-NOMA wireless network. In [29], the authors aimed to maximize the total energy efficiency of D2D multicast clusters underlaying cellular networks. Since the optimization problem is non-convex, the authors transform it into a mixed-integer programming problem according to the Dinkelbach algorithm and employing the Q-Learning to solve it. 
In [30], the authors design a novel artificial intelligence (AI)-based framework for maximizing the secrecy energy efficiency (SEE) in FD cooperative relay underlaying cognitive radio NOMA systems. The non-convex SEE optimization problem is solved with ensemble learning to select the optimal relay and with a quantum particle swarm optimization-based technique to optimize power allocation. Although the above work has carried out a detailed analysis on the research of D2D, FD and NOMA, there is still little work on the comprehensive combination of D2D, FD and NOMA, which is of great value to promote the system's spectral efficiency and energy efficiency. Motivated by this, we research on the resource allocation of FD D2D underlaying cellular networks with downlink NOMA, where FD D2D users can operate at the same spectrum with downlink NOMA users. Considering the energy-limited nature of wireless terminal devices and the requirements of global green communication, we set the maximization of the system's energy efficiency as the ultimate goal. At the same time, we consider that the transmission rate is very important to users, because many services such as watching videos, playing online games and live broadcasting do need relatively high data rates. Therefore, we maximize the system's energy efficiency while ensuring users' QoS requirements. The contributions of this paper can be summarized as follows: (1) We investigate FD D2D underlaying cellular networks with downlink NOMA, where two downlink cellular users form a group of NOMA users and the FD D2D users reuse the spectrum for NOMA users. (2) The system's energy efficiency is maximized under the conditions of power constraints for all users, the successful SIC for NOMA users and the QoS requirements for FD D2D users and NOMA users. (3) The optimization problem in this paper is non-convex and non-linear fractional, which makes it difficult to obtain the optimal solution directly. Thus, a dual-layer iteration algorithm is proposed to deal with the problem. The outer-layer iteration algorithm is to solve a non-linear fractional problem based on Dinkelbach algorithm, and the inner-layer iteration algorithm, based on the difference of a concave or convex (D.C.) structure, is used to deal with the non-convex programming. (4) The performance of the proposed scheme is compared with the traditional orthogonal multiple access (OMA) scheme, which verifies the superiority of the proposed scheme in terms of energy efficiency. The rest of the paper is organized as follows. The system model and the optimization problem are introduced in Section 2. Then, in Section 3, we provide the proposed optimal solution. The simulation analysis and relevant discussion are displayed in Section 4. Lastly, a brief summary of this paper is presented in Section 5. System Model and Problem Formulation In this section, we present the system model first and, subsequently, discuss the construction of the optimization problem. System Model In this subsection, the system model of FD D2D communications underlaying cellular networks with downlink NOMA is presented, as shown in Figure 1. A downlink resource allocation scene is considered where two NOMA users (C 1 and C 2 ) share the downlink spectrum with a pair of D2D users (D 1 and D 2 ) in a single cell system. The same spectrum is accommodated to C 1 and C 2 , utilizing the power multiplexing of NOMA. Therefore, the BS and two NOMA users are both equipped with a single antenna. 
In particular, the D2D users operate in FD mode and we assume that both D2D users are equipped with separate transmit and receive antennas. We denote the channels of the BS- 1 and g d 2 ,c 2 , respectively. In addition, we assume that large-scale fading based on the distance path loss model and small-scale fading based on the Rayleigh fading model is the channel model [24]. We further assume that a dedicated control channel is applied to collect the channel state information (CSI) in our networks so that the perfect CSI is assumed. We denote that the transmission power from the BS to C 1 and C 2 are P c 1 and P c 2 . Accordingly, the transmit powers of the D 1 and D 2 are denoted by P d 1 and P d 2 , respectively. To ensure that the SIC process at the two NOMA downlink users are successful, the users are sorted based on their channel gains, such that g b,c 1 > g b,c 2 . According to this order, C 1 first carries out SIC technology to eliminate interference from other users and then decodes its own signal, while C 2 decodes its own signal by directly treating other signals as interference. The received signals of C 1 and C 2 are accordingly represented as where x c 1 , x c 2 , x d 1 and x d 2 are the transmission signals of users c 1 , c 2 , d 1 and d 2 , respectively, and n c 1 , n c 2 ∼ CN 0, σ 2 denote additive white Gaussian noise (AWGN) with zero mean and σ 2 variance. User c 1 first applies SIC technology to eliminate interference and then decodes its own signal, i.e.,γ whereγ c 2 and γ c 1 are the signal-to-interference and noise ratio (SINR) of user C 2 at user C 1 and user C 1 , respectively. To make the SIC process successful at the C 1 , the transmission power of two NOMA users should meet the following requirement [25,26]: where θ represents the minimum power gap, ensuring that the SIC process is successful between the two NOMA users. As for user C 2 , it decodes the received signal x c 2 directly while treating other received signals as interference. Therefore, the SINR γ c 2 of user C 2 is Similarly, the received signals of D2D users are denoted as where ηP d 1 and ηP d 2 are the residual SI of D 1 and D 2 , and η denotes the SI cancellation capability of the FD transmitter. n d 1 and n d 2 denote AWGN with zero mean and σ 2 variance. Thus, the received SINRs γ d 1 and γ d 2 at users D 1 and D 2 are, respectively, represented as Problem Formulation The energy efficiency optimization problem is formulated mathematically in this subsection while ensuring users' QoS requirements, power constraints and the minimum gap for successful SIC. Based on Equations (4), (5), (9) and (10), considering the unit bandwidth, the transmission rate corresponding to the user i can be expressed as The energy efficiency of the FD D2D communication underlaying cellular networks with downlink NOMA can be expressed as the ratio of the system sum rate to the total power consumption. The total power consumption consists of average circuit loss power P 0 and the transmit power of BS and D2D users. Therefore, the following linear model is applied to represent the power consumption where 3P 0 is the circuit power at the BS and users D 1 and D 2 . Maximizing the system's energy efficiency is our purpose while satisfying three kinds of constraints. Firstly, we ensure the minimum transmission rate of each user as a QoS requirement. Secondly, the transmit power of all users is constrained. In addition to the above two common constraints, we also guarantee the successful SIC constraint for NOMA user C 1 . 
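The SINR and energy-efficiency expressions described above can be written out as in the following sketch (Python; the channel-gain names are illustrative, and the expressions follow the standard forms implied by the description: C_1 removes x_c2 via SIC, C_2 treats all other signals as interference, and each FD D2D receiver sees the BS interference plus its residual self-interference ηP_d).

```python
import math

def system_energy_efficiency(P, g, eta, sigma2, P0):
    """Sum rate / total power for the two NOMA users and the FD D2D pair.

    P: transmit powers {'c1', 'c2', 'd1', 'd2'} (BS powers for C1/C2, D2D powers)
    g: channel gains, e.g. g['b_c1'], g['d1_c1'], g['d2_c1'], g['b_c2'], ...
    eta: residual SI coefficient; sigma2: noise power; P0: circuit power per device.
    """
    # C1 removes C2's signal via SIC, so only D2D interference remains.
    sinr_c1 = P["c1"] * g["b_c1"] / (P["d1"] * g["d1_c1"] + P["d2"] * g["d2_c1"] + sigma2)
    # C2 decodes directly, treating C1 and the D2D links as interference.
    sinr_c2 = P["c2"] * g["b_c2"] / (
        P["c1"] * g["b_c2"] + P["d1"] * g["d1_c2"] + P["d2"] * g["d2_c2"] + sigma2)
    # Each FD D2D receiver sees BS interference plus its own residual SI.
    sinr_d1 = P["d2"] * g["d2_d1"] / (
        (P["c1"] + P["c2"]) * g["b_d1"] + eta * P["d1"] + sigma2)
    sinr_d2 = P["d1"] * g["d1_d2"] / (
        (P["c1"] + P["c2"]) * g["b_d2"] + eta * P["d2"] + sigma2)

    sum_rate = sum(math.log2(1.0 + s) for s in (sinr_c1, sinr_c2, sinr_d1, sinr_d2))
    total_power = P["c1"] + P["c2"] + P["d1"] + P["d2"] + 3 * P0
    return sum_rate / total_power

# Illustrative usage with arbitrary powers (W) and channel gains:
P = {"c1": 0.05, "c2": 0.20, "d1": 0.08, "d2": 0.08}
g = {"b_c1": 1e-9, "b_c2": 2e-10, "d1_c1": 1e-11, "d2_c1": 1e-11,
     "d1_c2": 1e-11, "d2_c2": 1e-11, "d1_d2": 5e-9, "d2_d1": 5e-9,
     "b_d1": 1e-11, "b_d2": 1e-11}
print(system_energy_efficiency(P, g, eta=1e-9, sigma2=1e-13, P0=0.01))
```

An expression of this kind is the fractional objective that the outer Dinkelbach iteration linearizes and the inner D.C. iteration maximizes over the transmit powers in the algorithm proposed below.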
The energy efficiency optimization problem can be expressed as follows: where Equation (13a) is the objective function of the system's energy efficiency maximization. R th1 and R th2 in (13b) are the minimum transmission rates of C 1 and C 2 , respectively. Since the channel gain of C 2 is worse than that of C 1 , we set R th2 < R th1 . R thd in (13c) is the QoS requirements for D 1 and D 2 . Additionally, the constraint in (13d) ensures that the SIC process of C 1 is successful and k is the normalized coefficient, k = g b,c 1 P d 1 g d 1 ,c 1 +P d 2 g d 2 ,c 1 +σ 2 . In the constraints found in (13e) and (13f), P bmax and P dmax are the maximum transmission powers of the BS and the D2D users, respectively. Proposed Optimal Solution In this section, the power allocation of the optimization problem (13a) for FD D2D communications underlaying cellular networks with downlink NOMA is researched. Note that the optimization problem (13a) is a non-convex and non-linear fractional problem, which makes it quite difficult to obtain the optimal solution directly. Consequently, a duallayer iteration algorithm is proposed to deal with the problem. The outer-layer iteration algorithm is to solve the non-linear fractional objective function based on Dinkelbach algorithm. After that, the objective function is still a non-convex optimization because the substructure has the D.C. structure. Therefore, the inner-layer iteration algorithm is to solve the non-convex optimization problem based on the D.C. programming. Thus, the nonconvex and non-linear fractional objective function is transformed into a convex function and we can apply convex optimization to gain the optimal power allocation solution. There is no further improvement in the system's energy efficiency when the iterative process is convergent. Outer-Layer Iteration Algorithm For the purpose of solving the objective optimization problem, which is a non-convex and non-linear fractional programming, we mainly focus on transforming the form of the objective function in this subsection, and we assume that the optimal solution of Equation (13a) is λ * , which is given by where P * i is the optimal power allocation. The following proven Theorem 1 can help us transform the objective function [31,32]. , then, F(λ) is a monotonically decreasing function of λ, and the maximum energy efficiency λ * = max Hence, based on Theorem 1, we can transform the non-linear fractional programming of Equation (13a) to the problem of Equation (16). Thus, we can solve Equation (16) to obtain the corresponding optimal power allocation: where D is the set consisting of Equations (13b)-(13g). As for constraints (13b)-(13d), we can transform them into a linear form of power, respectively, in Equation (17). Therefore, the set D is a convex set: Based on the above analysis, we have solved the non-linear fractional objective optimization problem. But we cannot obtain the optimal solution by applying the outer-layer iteration algorithm because the substructure ∑ i∈U R i is still a non-convex structure. Therefore, in next subsection, we will introduce the transform of the non-convex structure and then present the complete iteration algorithm, including outer-layer iteration and inner-layer iteration. Inner-Layer Iteration Algorithm Although we have transformed the non-linear fractional function in Equation (13a), the subtraction structure ∑ i∈U R i in problem Equation (16) is still a non-convex structure that needs to be transformed into a convex structure. 
First, we rewrite the subtraction structure ∑ i∈U R i as where, and Obviously, f 1 (P) and f 2 (P) are both concave functions on P, and the second part λ(∑ i∈U P i + 3P 0 ) in Equation (16) is a linear function. Therefore, the objective function in (16) has the D.C. structure. In addition, D is a convex set. Thus, we can solve the problem (16) based on D.C. programming. According to [26], f 2 (P) can be expanded by the first-order Taylor function around P (k) : where x, y = x T y is the inner product of the vectors x and y. f 2 (P (k) ) is the gradient of f 2 (P) at P (k) . Therefore, the objective function in problem Equation (16) can be transformed to a concave function, which is expressed as: It is easy to solve the concave problem in Equation (22) by applying standard convex optimization techniques, such as the interior-point method [33]. The detailed dual-layer iteration algorithm to solve the optimization problem (22) is shown in Algorithm 1. First, we set the initial conditions of the optimization problem (22). Then, the outer-layer iteration updates λ constantly and the inner-layer iteration updates the power P * until the inner-layer iteration and outer-layer iteration meet the tolerances, respectively. Finally, the optimal solution can be obtained through Algorithm 1 after a finite number of iterations [31]. Simulation Results We provide the simulation analysis in this section, which illustrates the superiority of the system's energy efficiency for the proposed scheme. Firstly, we analyze the convergence of the iterative algorithm in Section 3. Next, we further analyze the impact of the QoS requirement of each user, the minimum gap for successful SIC and SI cancellation coefficient on the system's energy efficiency. For comparison, we present the simulation results with traditional OMA networks where each cellular user occupies half-spectrum resources and D2D users share the total spectra at the same time. Furthermore, the achievable rate of each user is still an important metric to the system performance. Therefore, we analyze the effect of SI cancellation coefficient and the distance between D2D users on the user's achievable rate, and compare FD D2D with HD D2D to observe the improvement of the achievable sum rate. The channel model between user i and user j is denoted as g i,j = d −α i,j |h i,j | 2 . Among them, d −α i,j is the distance path loss, in which d i,j represents the distance from the transmitter i to the receiver j in meters, α represents the path loss coefficient and we set α = 4. For the small-scale fading, all users experience independent Rayleigh fading with zero mean and unit variance. Particularly, all terminals are uniformly distributed in the cell with a radius of 400 m and the D2D users are located within a short distance of 40 m. Furthermore, we set R th1 = 3 bps, R th2 = 1 bps, R thd = 3 bps, P bmax = 24 dBm and P dmax = 21 dBm. Unless stated otherwise, Table 1 presents the simulation parameters applied in this paper. The convergence performance of the proposed dual-layer iteration algorithm to obtain the optimal power allocation was researched and is presented in Figure 2. It can be observed that the system's energy efficiency converges to a stable value after finite iterations (four times), which indicates that the iterative algorithm we proposed in Section 3 has a lightweight and less complex nature. In addition, the reduction in the maximum transmission power of BS and D2D users will lead to the reduction in system's energy efficiency. 
This is because the reduction in the maximum transmission power will lead to the reduction in the power that can be allocated during the transmission, which will reduce the reachable rate of the users, thus reducing the system's energy efficiency. We can also see that the reduction in the maximum transmission power of the BS has a greater influence on the energy efficiency than that of D2D users. This is because when the transmission power of the BS decreases, the BS still needs to allocate more power to maintain the minimum rate of the NOMA weak user C 2 , while allocating less power to NOMA strong user C 1 , hence reducing the system's energy efficiency. Then, the system's energy efficiency is investigated with variable circuit power and minimum gap for successful SIC in Figure 3. We can determine that when the circuit power increases, the system's energy efficiency decreases, and the proposed scheme always outperforms the OMA scheme, which proves the advantage of our proposed scheme and the feasibility of the solution. In addition, when θ increases, the system's energy efficiency of the proposed scheme shows a downward trend. This is because more power should be allocated to the poor channel gain user C 2 and accordingly less power to the strong channel gain user C 1 , which will lessen the system's energy efficiency. Moreover, because the energy efficiency of the OMA scheme is independent of the minimum gap for a successful SIC, we only show one curve of the OMA communication scheme. The effect of the QoS requirements of users on the system's energy efficiency is depicted in Figure 4, where η = −90 dB. We can see the proposed scheme also outperforms the OMA scheme. In addition, when the minimum transmission rate of user C 2 increases, the energy efficiency of all cases presents a downward trend. This is because when the minimum transmission rate of C 2 increases, the BS will allocate more power to C 2 to maintain the minimum requirement. Then, the power allocated to C 1 will inevitably decrease, which will reduce the system sum rate and then reduce the system's energy efficiency. Accordingly, when the minimum transmission rate of user C 1 increases, the system's energy efficiency also decreases. This is because the increase in the minimum transmission rate of C 1 will cause the BS to allocate more power to user C 1 , which will lead to the insufficient power allocated to user C 2 to maintain the normal communication, thus reducing the energy efficiency of the system. We present the effects of different minimum transmission rates of D2D users and the SI cancellation coefficient on the system's energy efficiency in Figure 5. We observe that as SI decreases, the energy efficiency of all cases increases. Furthermore, with the increase in R thd , the energy efficiency of the system decreases, no matter in the proposed scheme or the OMA scheme. This is because with the increase in the minimum transmission rate of D2D users, the transmission power of the D2D users also needs to increase. Therefore, the transmission power may not meet the requirements of the minimum transmission rate, resulting in the reduction in the system's energy efficiency. Although we aim to maximize the system's energy efficiency in this paper, the spectral efficiency of the users is still a key metric that affects communication quality between users. In Figure 6, we have illustrated the relationship between the user's achievable rate and the SI cancellation coefficient. 
We can see that when η varies, the achievable rate of each user meets the minimum transmission rate requirement. Particularly, C 2 always maintains the minimum threshold, while C 1 's rate decreases and D2D users' rate increases. This is because SI only exists in FD D2D users. Obviously, when η decreases, the achievable rate of D2D users increases. In the meantime, in order to optimize the overall energy efficiency, the D2D users must allocate more power, which means more co-channel interference to the cellular users. Hence, C 1 's rate gradually decreases and C 2 's rate consistently keeps to the minimum in order to maintain QoS requirement. Figure 7 discusses the effect of the distance between D2D users on the ergodic sum rate. In addition to comparing with the OMA scheme in Figure 7, we also change the FD D2D users to HD D2D users for comparison in order to discuss the influence of SI cancellation coefficient on the system sum rate, herein, namely the HD-D2D scheme. We can see the system sum rate of all cases decrease as the D2D distance increases, because the channel condition of the users becomes worse, reducing the system sum rate. From the figure, we also find out that our proposed scheme has advantages over the other two schemes with regard to the sum rate. Conclusions In this paper, we proposed a dual-layer iteration power allocation algorithm to maximize the energy efficiency of the system, where one FD D2D user pair uses the same spectrum resource with two cellular users using a power domain NOMA transmission. The outer-layer iteration is to solve a non-linear fractional problem for energy efficiency based on Dinkelbach and the inner-layer iteration is to solve the non-convex optimization for power allocation based on the D.C. programming. The simulation results show that the proposed algorithm can achieve better energy efficiency than the traditional OMA with the constraints of the successful SIC power level for NOMA users, the QoS requirements and maximal transmission power of D2D users and BS. Although an improved performance is achieved, there are still some challenges to be solved for more complex interference scenarios. In the near future, we will focus on extending our proposed algorithm to the scenario of multiple NOMA groups sharing a spectrum with multiple D2D user pairs by jointly considering channel assignment and power allocation to maximize the system's energy efficiency. Moreover, machine learning methods can be further introduced to solve the joint optimization problem. Conflicts of Interest: The authors have no relevant financial or non-financial interest to disclose.
6,174
2023-08-14T00:00:00.000
[ "Computer Science" ]
The Influence of Human Quality Index and Supporting Facilities through Electrical Supply in Humbang Hasundutan Regency Humbang Hasundutan Regency is one of the regencies in North Sumatera, has 10 (Ten) subdistricts, 153 villages 1 (one) Urban Village. Total Population 176,429 inhabitants, 40,783 household heads, average Electrical Energy Supply 14, 7021 MW, Human Development Index an average of 71,353 points. The growth of Human Quality Index is a needed supporting facility of HDI, enormous energy, as for problem that as follows: What is the effect of human quality index and supporting facilities trough electrical energy in Humbang Hasundutan Regency. What model is suitable to measure the availability of electrical energy in Humbang Hasundutan Regency is seen from the human quality index and supporting facilities ten years later. Supporting theories in this study are theories of human resources, especially theories relating to the index of human development, electrical energy. The effect of human resource quality index and supporting means on the availability of electrical energy in Humbang Hasunduan district with available data (10) ten years of regional autonomy can be used equation model = = a + and measuring the quantity of energy supply intake can use formula ((E) =f(t)= A . ebt. The results of this study showed an increase in Human Development Index of 0.62%, improvement of supporting facilities 4, 24% and increased availability of electrical energy 34%, correlation of human energy index with 94.41% energy availability and electricity supply availability for the ten years later 242,281MW the impact of human development index increase from 2004 to 2013. INTRODUCTION All districts increase that generates resources to promote life. At this time many countries are shaken by the energy and human resources crisis, the Indonesian government has not escaped. Based on Law No. 34 in 2004 states that local governments have broad authority to manage the resources in their area. Human Development is formulated as an extension of choice for the population (enlarning the choises of people) which can be seen as a process of effort towards expanding options (UNDP, 1990). Human resources are very vital organizational assets, therefore their roles and functions cannot be replaced by other resources, however modern of technology used, how much funds are prepared, but without professional human resources, everything becomes meaningless (Prihantin Lumban Raja 2013 , p .; 1). Supporting facilities that support human quality index such as hospital facilities; school, employment; housing and the environment, household spending and consumption are crucial in raising the human resource quality index. The addition and improvement of facility buildings in quantity must be considered, Electrical energy is an inevitable global problem, the economy and human activities will be totally paralyzed without energy therefore demands the energy supply and planning as early as possible. Electrical energy supply can be seen from how much the demand by customers and how much the quality index of human resources and supporting facilities. The problems of the Study Based on the description above the problems of this study are: a. What is the effect of human quality index and supporting facilities trough electrical energy in Humbang Hasundutan Regency b. 
What model is suitable to measure the availability of electrical energy in Humbang Hasundutan Regency is seen from the human quality index and supporting facilities ten years later. The Objectives of the Study In relation to the problems of the study, the objectives of this study are: a. To find out the effect of human quality index and supporting facilities of electricity energy that available in Humbang Hasundutan Regency. b. To find out how much the influence of human quality index and supporting facilities that available of electricity ten years later. The Significances of the Study The findings of the study are expected to be beneficial and give contribution theoretically and practically, as follows . a. For the Humbang hasundutan district government, they know how much the human quality index and supporting facilities increase that available of electricity and able to predict the availability of electricity in ten years later. b. Furthermore, the results of this study are useful to add academic value as a function of Tridarma Perguruan Tinggi and to add knowledge about the development of human resources and the availability of electricity. c. This research can be useful as a theoretical material to be developed in the area of knowledge. II. REVIEW OF LITERATURE 2.1 Human Resources Human resources one of the most vital sources. Humans are the creatures that have the highest nature have values that can manage resources, adaptability, make changes and are able to answer the challenges of every change. Human resources are very vital organizational assets, therefore their roles and functions cannot be replaced by other resources, however modern of technology used, how much funds are prepared, but without professional human resources, everything becomes meaningless (See Lumbanraja Hal; 1. MSDN 2013). The concept of human development by UNDP (1995: 12) defines human development is a process to expand choices for the population in the concept that the population is placed as the ultimate and development efforts are seen as a principal means to achieve goals. The quality index of human resources is the amount of value that is formed by the human development process. To ensure the achievement of human development objectives, there are 4 (four) main things that need to be considered, namely: a. Productivity b. Equalization Continuity c. Empowerment d. Productivity a. Productivity Productivity of the population must be empowered to participate fully in the process of income generation and employment. b. Equalization Residents must have the same opportunity to gain access to all economic and social resources. All obstacles that minimize the opportunity to gain access must be removed so that residents take advantage of the opportunities that exist and participate in productive activities that can improve the quality of life. c. Continuity To Access economic and social resources must be addressed not only for future generations. All physical, human and environmental resources must always be replenished. d. Empowerment Residents must participate fully in decisions and processes that will determine the shape or direction of their lives and participate in benefiting from the development process. Human Quality Index Human quality index is the value of the human development process that is formed Life Expectancy Index (eo), school average, and literacy and purchasing power. 
According to the Republic of Indonesia government regulation No.8 in 2008 concerning guidelines for evaluating the administration of local government, it is stated that Evaluation of the Implementation of Regional Autonomy (EKPOD) is a systematic process of collecting and analyzing data on the performance of regional autonomy covering aspects of community welfare, public services and power regional competitiveness. In this case the human development index is used to measure the end result of regional autonomy. The analogy of the availability of electricity rises exponentially, and then slowly decreases after consumption. International Journal of Advanced Engineering Research and Science (IJAERS) [ The value as a function of time follows a logistic curve, namely the curve S, and the reserve decreases with the end of the energy source. The consumption curve K on its principle follows the curve S but lags behind time compared to the curve A. (Abdul Kadir: Energi, 1989). Fig.2.1: Energy Supply and Consumption The use of electrical energy (E) for many years ago is recorded as a function rather than time, then generally seen the growth of E is greater than the function E is linear. Mathematically, the curve is expressed in the form of an exponential function formulated: E = f (t) in E =.A. e bt Fig.2 .3: Availability of Electric Energy with Linear Scale By using the magnitude of the energy in this logarithmic scale the curve is a straight line, and then the quantity "a" can be found in the early years, whereas "b" is the growth coefficient to determine the required function of the data as thoroughly as possible and the time period needs to be noted as parameter limitation. Research Design This study aims to measure the influence of human quality index and supporting facilities on the availability of electrical energy therefore the research method used by researchers is quantitative method , which is explanatory research. Because it is an estimate, it is certain that forecasting results can and usually always deviate from the actual numbers will be realized. The first step in predicting the value of an element to be a future being is to recognize as much as possible the element that influences it, especially its change. The second step is to measure the level and shape of each determinant's influence of the number that is forecasted. Regression analysis is a statistical technique that measures the level of dependence between one magnitude and one or several other quantities. Since it can determine the relationship between the quantities then this analysis can be used to estimate the value of the magnitudes and parameters contained in an equation or function such as a demand function. Population and Sample According Sudjana (2005: 6) suggests that the population is the totality of all possible values, the results of calculating or quantitative or qualitative measurements of certain characteristics of all collected data that are complete and clear source. According to Sugiono (2010: 72) the population is an area of generalization consisting of objects or subjects that have certain qualities and characteristics set by the researcher. In this case the data population is the index of human quality, supporting facilities, energy availability in Humbang Hasundutan regency. Research Sample is part of the number and characteristics possessed by the population. 
In this case the sample is taken from the human quality index, supporting facilities, energy availability for ten years later in Humbang Hasundutan District. An important step in educational research is to define the study population, researchers make research samples are data for ten years the human quality index and supporting facilities and electrical energy installed, the sample taken in the study is data in the form of numbers for ten years according to the title of research in the district Humbang Hasundutan.To obtain data regarding the presence or absence of the influence of the Index of Human Quality and Supporting Facilities on Electricity Supply, researchers conducted observations in several departments, offices, BPS offices, PLN Offices in the Humbang Hasundutan Area. Source of Data Data sources are obtained from Agencies, Health, Education, Economy, Housing and Environment, Bappeda, Central Statistics Agency, PLN Rayon Dolok Sanggul and Siborongborong and, as well as the Office of Natural Resources and Energy in Humbang Hasundutan District. Electrical Energy Supply Below will be described on the availability of electricity for 10 (ten) years in Humbang Hasundutan District. From the above provisional data, the average electricity supply availability for 10 years = 14, 7021. From the data above, it can be illustrated the 10-year the Energy is Growth based on graph in Humbang Hasundutan District from 2004-2013 Picture.4.3: Graph of Electrical Energy Supplies in Humbang District of ten years Source: Data BPS Humbang Hasundutan Hypothesis testing The Effect Analysis of Human Quality Index on Electrical Energy facilities . Hypothesis Testing by using the statistical formulas presented in the previous chapter the influence of Human Quality Index (X1) and Sup porting Facilities (X2) trough Electrical Energy Supply (Y) is depicted with equation = Y = a + X b . Number of Customers Power Installed Electrical Energy Production Electricity Sold (1) Relation of Human Quality Index with Electrical Energy Facilities The relationship between the Human Quality Index and Electrical Energy facilities can be described by the equation Y = a + 1 X b where a and b can be calculated using a simple linear regression formula Picture.4.4: Graphs of Human Quality Industry Relationships and Electrical Energy Supplies ten years in Humbang Hasundutan district
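Both fits described above reduce to ordinary least squares. The numbers in the sketch below are synthetic stand-ins, not the regency's actual ten-year series, and the variable names (`hdi`, `supply`) are choices of this illustration; the sketch only shows the mechanics of the linear model Y = a + bX and of the exponential model E(t) = A e^(bt), which becomes a straight line after taking logarithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the ten observation years 2004-2013 (illustrative only).
t = np.arange(10)                                     # years since 2004
hdi = 68.0 + 0.5 * t                                  # human quality index, points
supply = 10.0 * np.exp(0.12 * t) * (1 + 0.02 * rng.standard_normal(10))  # MW

# (1) Simple linear regression Y = a + b*X between the index and the supply.
b_lin, a_lin = np.polyfit(hdi, supply, 1)

# (2) Exponential model E(t) = A*exp(b*t): taking logarithms gives the straight
#     line ln E = ln A + b*t, so the same least-squares fit applies.
b_exp, ln_A = np.polyfit(t, np.log(supply), 1)
A = np.exp(ln_A)

# Forecast ten years beyond the last observation (t = 19, i.e. the year 2023).
forecast = A * np.exp(b_exp * 19)
print(f"Y = {a_lin:.2f} + {b_lin:.2f} X;  E(t) = {A:.2f} exp({b_exp:.3f} t); "
      f"ten-year-ahead supply ~ {forecast:.1f} MW")
```

The growth coefficient b is read off as the slope of the log-scale line and the coefficient A from its intercept, which is how the supply can be extrapolated to the following ten years.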
2,741.2
2018-07-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Sequence-based multiscale modeling for high-throughput chromosome conformation capture (Hi-C) data analysis In this paper, we introduce sequence-based multiscale modeling for biomolecular data analysis. We employ spectral clustering method in our modeling and reveal the difference between sequence-based global scale clustering and local scale clustering. Essentially, two types of distances, i.e., Euclidean (or spatial) distance and genomic (or sequential) distance, can be used in data clustering. Clusters from sequence-based global scale models optimize spatial distances, meaning spatially adjacent loci are more likely to be assigned into the same cluster. Sequence-based local scale models, on the other hand, result in clusters that optimize genomic distances. That is to say, in these models, sequentially adjoining loci tend to be cluster together. We propose two sequence-based multiscale models (SeqMMs) for the study of chromosome hierarchical structures, including genomic compartments and topological associated domains (TADs). We find that genomic compartments are determined only by global scale information in the Hi-C data. The removal of all the local interactions within a band region as large as 10 Mb in genomic distance has almost no significant influence on the final compartment results. Further, in TAD analysis, we find that when the sequential scale is small, a tiny variation of diagonal band region in a contact map will result in a great change in the predicted TAD boundaries. When the scale value is larger than a threshold value, the TAD boundaries become very consistent. This threshold value is highly related to TAD sizes. By the comparison of our results with those previously obtained using a spectral clustering model, we find that our method is more robust and reliable. Finally, we demonstrate that almost all TAD boundaries from both clustering methods are local minimum of a TAD summation function. Introduction The chromosome, the physical realization of genetic information, is one of the most complex and important cellular entities [1][2][3][4][5][6][7]. Over the past few decades, the significance of its threedimensional architecture for supporting essential biological functions, such as DNA that clusters from sequence-based global scale models optimize Euclidean distance relations, and these models can be used in genomic compartment analysis. In contrast, clusters from local scale models optimize genomic distance relations, and these models can be used in TAD analysis. Essentially, our SeqMMs provide a way to explore the hierarchical structures of chromosomes. Mathematically, genomic compartments are defined from principal component analysis [17], they are global structural features. The loci in the same genomic compartment are spatially close to each other. But their sequential distances can be very large. Based on global scale clustering, we design Type-1 SeqMM and use it for genomic compartment analysis. In contrast, TADs are local structural features. The loci in the same TAD are not only spatially close to each other, but also sequentially adjacent to each other. Their sequential distances are usually within a certain genomic distance. Based on local scale clustering, we introduce Type-2 SeqMM and use it in TAD analysis. Methods As a discrete representation of geometries, manifolds, high-dimensional structures, abstract relations and complicated subjects, point cloud data (PCD) are widely used in computer science, engineering, scientific computing and data science. 
Particularly, PCD and PCD based classification or clustering methods [37], including K-means, hierarchical clustering, spectral clustering, modularity, graph centrality, network approaches, etc, have been constantly used in biomolecular data analysis. However, as demonstrated in Fig 1, biomolecular structure data are essentially different from the general PCD, as they incorporate a unique sequential information. The simulated structure corresponds to chromosome 22 from Human ES Cell line and is generated by using software shRec3D [33]. To have an intuitive understanding of the sequential information in PCD analysis, we consider a DNA structure with PDB ID 1ZEW. Using atomic coordinates, a weight matrix is constructed. The weight values are defined by using the rigidity function [38], where r ij is the Euclidean distance between i-th and j-th atoms, N is the total number of atoms and η is the scale parameter that controls the influence range of each atom. In this case, we choose η = 8 Å. The weight matrix is illustrated in Fig 2(c 2 ). Two more matrices are constructed by dividing the weight matrix into a diagonal band region as in Fig 2(a 2 ) and the remaining off-diagonal regions as in Fig 2(b 2 ). Based on these three matrices, we can decompose the DNA structure into two parts using the spectral clustering method [37,39]. Results are illustrated in Fig 2(a 1 ), 2(b 1 ) and 2(c 1 ). It can be seen that, if we only consider relations between sequentially adjacent atoms, which are represented in the diagonal region, the DNA structure will be clustered into two complementary helix chains. However, if we use the whole matrix or only off-diagonal regions, the DNA structure will be divided in the middle region with two chains in each cluster. Generally speaking, Fig 2 demonstrates two types of sequence-based models, i.e., sequence-based local models and sequence-based global models. It can be seen that their properties in structure decomposition differ greatly. In the first type, atoms with shorter sequential distances are more likely to be grouped into the same cluster. In the second one, spatially close atoms, i.e., atoms with large weight values, are more likely to be assigned to the same cluster. Mathematically, the sequence-based local model optimizes sequential distances, while the global model optimizes spatial distances or Euclidian distances. All PCD based classification and clustering methods belong to the second type. Therefore, the direct application of these methods in biomolecular data analysis may have some limitations. In Hi-C data analysis, sequential information is usually highly relevant. Fig 3 demonstrates a potential problem for global scale clustering in TAD analysis. In this figure, genomic loci are represented by red pentagons. It can be seen that the spatial distance between the two loci in any red circle is much shorter than the one in green circles, while sequential distances are exactly the opposite. If we use the traditional PCD based clustering methods, two loci in the same red circle will always have priority to be clustered into the same TAD. Obviously, this will cause serious interpretation problems, as the sequential distance between the two loci can be much larger than the size of a TAD. Sequence-based multiscale modeling It should be noticed that two distance definitions, i.e., Euclidean distance and sequential distance, are greatly different and matter a lot in multiscale modeling. 
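To make the weight-matrix construction used in the DNA example above concrete, here is a minimal sketch. The exact rigidity function of [38] is not reproduced in the extracted text, so the Gaussian kernel exp(-(r_ij/η)^2) with η = 8 Å is assumed here purely for illustration and may differ from the paper's kernel; `band_split` builds the diagonal-band and off-diagonal matrices corresponding to Fig 2(a2) and 2(b2).

```python
import numpy as np

def rigidity_weights(coords, eta=8.0):
    """Pairwise weights from an (N, 3) array of atomic coordinates (angstroms).

    The kernel w_ij = exp(-(r_ij / eta)**2) is an assumed stand-in for the
    rigidity function of [38]; the paper's exact kernel may differ.
    """
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = np.exp(-(r / eta) ** 2)
    np.fill_diagonal(W, 0.0)
    return W

def band_split(W, N_b):
    """Split W into the diagonal band (|i - j| <= N_b) and the remaining
    off-diagonal part, mirroring Fig 2(a2) and 2(b2)."""
    n = W.shape[0]
    sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.where(sep <= N_b, W, 0.0), np.where(sep > N_b, W, 0.0)
```

Whether an atom pair falls inside or outside the band is decided by its sequential separation |i - j| rather than by how close the atoms are in space, which is precisely the distinction between the two distance definitions.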
The Euclidean distance is just the three dimensional distance between two elements. In Hi-C data, Euclidean distance between genomic loci is inversely related to their contact frequencies [35]. In contrast, the sequential distance is defined between two elements on chains or polymers. If sequential numbers are assigned to elements, a sequential distance is just the difference between these two integers and it is always an integer. In Hi-C data, sequential distance between two loci is their genomic distance. Even though graph and network based multiscale models are widely used in biomolecular structure and function analysis [40][41][42][43][44][45][46][47][48][49], the measurement defined in these models are in terms of the Euclidean distance. To be more specific, when we discuss atomic scale, residue scale, second structure scale, tertiary structure scale, etc, we are analyzing structural elements based on their sizes measured in Euclidean distances. In this section, the sequence-based multiscale modeling is proposed for biomolecular data analysis, particularly for Hi-C data analysis. In our multiscale models, a scale parameter N b is defined not from the Euclidean distance but from the sequential distance. The parameter N b can be viewed as a cut-off sequential distance. In Hi-C matrices, the parameter N b specifies the size of the diagonal band region. Further, two sequence-based multiscale models are proposed for analyzing chromosome genomic compartments and TADs. These two models, denoted as Type-1 SeqMM and Type-2 SeqMM, are derived from the perspective of local scale clustering and global scale clustering, respectively. Type-1 SeqMM. In Type-1 SeqMM, we remove sequentially short-range interactions by changing the value of scale parameter N b . More specifically, for a Hi-C matrix, a diagonal band A potential problem for global scale clustering in TAD analysis. Each locus is represented by a red pentagon. Global scale clustering considers only spatial relations, thus groups loci in each red dash circle as a cluster. Biologically, loci in each green circle are more favorable to be clustered into the same TAD, as their sequential distances are much shorter. The missing of sequential information in global scale clustering will cause problems in the TAD analysis. https://doi.org/10.1371/journal.pone.0191899.g003 Sequence-based multiscale modeling for Hi-C data analysis region with size N b is systematically removed from the model, resulting a new Hi-C matrix as following, Here M ij can be the original or normalized contact frequencies. It can seen that our Type-1 SeqMM is defined by taking away the local interactions from the model and is designed for global scale clustering. An example can be found in Fig 6(a) to 6(c). We suggest that it can be used in chromosome genomic compartment analysis. Type-2 SeqMM. In Type-1 SeqMM, when short-range interactions are systematically removed from the biomolecular data, long-range interactions are preserved. Type-2 SeqMM is designed in the exact opposite way, The scale parameter N b controls the size of the diagonal band region. Mathematically, our SeqMM matrix in Eq (3) is a weighted Laplacian matrix, which plays an important role in graph representation and spectral clustering [37,39,50]. The second smallest eigenvalue and its associated eigenvector from the Laplacian matrix, are known as the Fiedler value (or algebraic connectivity) and the Fiedler vector, respectively. 
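A compact sketch of the two multiscale matrices and of the Fiedler computation follows. Here `M` is assumed to be a symmetric (normalized) Hi-C contact matrix, and whether N_b denotes the full width or the half-width of the band is treated loosely; both functions are illustrative rather than the authors' code.

```python
import numpy as np

def seqmm_matrices(M, N_b):
    """Type-1 keeps only pairs with sequential separation greater than N_b
    (the band is removed); Type-2 keeps only pairs inside the band."""
    n = M.shape[0]
    sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))   # |i - j|
    return np.where(sep > N_b, M, 0.0), np.where(sep <= N_b, M, 0.0)

def fiedler(W):
    """Fiedler value and Fiedler vector of the weighted Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vals[1], vecs[:, 1]              # second-smallest pair
```

With these two pieces, the Type-1 matrix feeds the global (compartment) analysis and the Type-2 matrix feeds the local (TAD) analysis.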
The Fiedler value is an important measurement of the general topological connectivity of a graph. The Fiedler vector gives an optimized classification of a graph into two separated domains [39,50]. In our Type-2 SeqMM, the local interaction region can be systematically enlarged to model the different scales of interactions. Type-2 SeqMM is proposed for chromosome TAD analysis. After Hi-C data preprocessing, a weighted Laplacian matrix can be generated by using a suitable scale value N b . The TAD number in the data is estimated based on size and resolution of the Hi-C matrix. We assume the size of TAD to be around 2Mb, and TAD number N c can be roughly calculated by dividing the total length over 2Mb. The basic procedure is presented in Algorithm 1. It should be noticed that the final number of TADs is usually larger than N c . The Code is available at S1 File. Algorithm 1 Type-2 SeqMM for TAD analysis Pre-processing: Remove all rows and columns, that summed together equal to zero (or smaller than a pre-defined range); Transform the Hi-C contact frequencies to spatial distances (default function f(x) = log (1 + x)); Step 1: Choose a scale parameter N b to construct a weighted Laplacian matrix as in Eq 3; Step 2: Calculate the first N c eigenvectors. Here N c is the estimated number of TADs; Step 3: Employ the K-means algorithm on the N c eigenvectors to identify N c clusters; Step 4: Subdivide each cluster into several TADs until the loci in each TAD are sequentially contiguous. Genomic compartments The genomic compartment is defined from the principal component analysis of Hi-C data. Mathematically, the principal component captures the global shape of a structure. In Chen's spectral method [36], it shows that the genomic compartment results from the first principal component (FPC) are identical to the predictions made from the lowest-frequency eigenvector of weighted Laplacian matrices. More interesting, as proved in the elastic network model and normal mode analysis, these lowest-frequency eigenvectors are uniquely determined by the global geometric information of structures [51][52][53][54]. Since the FPC describes the global properties of a structure, we use the Type-1 SeqMM for our genomic compartment analysis. We consider the GM06990 chromosome 14 data with resolution 100Kb. This is a classic example used for genomic compartment analysis [17]. Before the principal component analysis (PCA), the chromosome 14 Hi-C matrix is processed. We remove all columns and rows with all zero values and normalize the matrix using the Toeplitz matrix [17]. After that, we construct a new matrix by removing the diagonal band region with N b = 60 from the normalized Hi-C matrix, and calculate its FPC. Further, we compare this new FPC with the original one. Results are shown in Fig 4. The blue line represents the FPC from the original Hi-C matrix and red line represents the FPC from the off-diagonal matrix. It can be seen that they are almost identical to each other. Actually, the Pearson correlation coefficient (PCC) between the two FPCs is as high as 0.991, meaning that the removal of the diagonal band region have almost no influence to the FPC. To have a more quantitative understanding of the FPC and Hi-C diagonal regions, we continuously change the value of the scale parameter N b to generate a series of Hi-C matrices at different scales. Then we systematically calculate their FPCs and measure the similarity between these FPCs with the original one by PCCs. Results are shown in Fig 5. 
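The compartment test just described amounts to comparing first principal components before and after band removal. A hedged sketch is given below; it assumes the zero rows and columns have already been dropped and the Toeplitz normalization already applied to `M`, and the sign ambiguity of a principal component is absorbed by taking the absolute correlation.

```python
import numpy as np

def first_pc(M):
    """First principal component of a (normalized) contact matrix."""
    C = M - M.mean(axis=0)                        # column-centering before PCA
    _, _, vt = np.linalg.svd(C, full_matrices=False)
    return vt[0]

def compartment_pcc(M, N_b):
    """Pearson correlation between the FPC of M and the FPC of M with the
    diagonal band |i - j| <= N_b removed."""
    n = M.shape[0]
    sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    M_off = np.where(sep > N_b, M, 0.0)
    return abs(np.corrcoef(first_pc(M), first_pc(M_off))[0, 1])
```

Evaluating `compartment_pcc(M, N_b)` over a range of band sizes gives correlation-versus-scale curves of the kind reported in Fig 5.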
It can be seen that PCCs changes very slowly when scale parameter is smaller than 100, which is 10 Mb in genomic distance. State differently, we can get almost the same genomic compartment even when we remove all the Hi-C data within the 10 Mb band region. It should be noticed that almost all the largest Hi-C value, i.e., contact frequencies, are located within this 10 Mb band. These values, however, are irrelevant to the chromosome genomic compartment! We further test our SeqMM on other GM06990 chromosomes. A very consistent pattern can be observed. Results of chromosomes 1, 5, 9 and 13 are illustrated in Fig 6. It can be seen that the shape decrease of PCCs is usually found at around 100 locus (10 Mb in genomic distance). This indicates a transition between local scales to global scales. Further studies are needed to explain its biological implications. Topological associated domain Another very important finding in Hi-C data analysis is the topological associated domain. TADs are megabase-sized local chromatin interaction domains. They have loop structures and are highly stable and conserved across various cell types and species. TAD boundaries are found to be enriched with the protein CTCF, housekeeping genes, transfer RNAs and short interspersed element (SINE) retrotransposons [18,23,24,26]. These components play important roles in establishing and supporting TADs and other architectural structures of the chromosome. Due to the structural and functional importance of TADs, various algorithms have been proposed for the identification of TADs as stated in the introduction part. However, the sequential information is not considered in any of these models. In this section, Type-2 SeqMM is used to study chromosome TADs. In our Type-2 SeqMM, the clustering is done by using K-means method on eigenvectors from spectral graph model. The basic procedure of the algorithm is illustrated in Algorithm 1. To explore the relation between the band size and TAD boundaries from the clustering, we consider a 100Kb resolution Hi-C matrix for chromosome 22 from IMR90 cell line. We systematically change the band size N b from 20, 80, 140 to 200. The corresponding TAD boundaries are illustrated in Fig 7. It can be seen that the TAD regions evaluated from different scales are not exactly the same and have some variations. Particularly when the band size N b change from 20 to 80, the calculated TAD regions are quite different. Further, when the band size is larger than 80, although the TAD boundaries are still not the same, they share more and more common values. To have a more quantitative understanding of this, we systematically change the scale parameter N b from 2 to 351 (the size of the normalized Hi-C matrix) and calculate the TAD boundaries. Results are shown in Fig 8. We can find that when the value of scale parameter N b is small, a tiny change of its value can result in huge variations of the predicted TAD boundaries. However, when the scale parameter is larger than a certain value, the fluctuations in the Sequence-based multiscale modeling for Hi-C data analysis predicted TAD boundaries are greatly reduced. The threshold value is roughly about 20, which is 2 Mb in genomic distance. We further apply the spectral approach used in Chen's method [36] on the multiscale Laplacian matrices in Eq (3). Results are shown in Fig 9. It can be seen that the variation of the predicted TAD boundaries by his method is much larger than that of our Type-2 SeqMM. 
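The TAD scans just described can be reproduced with a pipeline of the following shape, which compresses the steps of Algorithm 1. The log(1 + x) transform and the use of the first N_c eigenvectors with K-means come from the text; the scikit-learn K-means call and the simple split of clusters at label changes (Step 4) are implementation choices of this sketch, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def type2_seqmm_tads(M, N_b, N_c, seed=0):
    """Sketch of Algorithm 1 (Type-2 SeqMM): band-limited weighted Laplacian,
    first N_c eigenvectors, K-means, then a split into contiguous TADs."""
    n = M.shape[0]
    sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    W = np.where(sep <= N_b, np.log1p(M), 0.0)      # local band, f(x) = log(1 + x)
    L = np.diag(W.sum(axis=1)) - W                  # weighted Laplacian
    _, vecs = np.linalg.eigh(L)
    labels = KMeans(n_clusters=N_c, n_init=10,
                    random_state=seed).fit_predict(vecs[:, :N_c])
    # Step 4: cut wherever the cluster label changes, so that every reported
    # TAD is a sequentially contiguous run of loci.
    cuts = [0] + [i for i in range(1, n) if labels[i] != labels[i - 1]] + [n]
    return list(zip(cuts[:-1], cuts[1:]))           # (start, end) bins
```

Re-running such a pipeline over a range of band sizes N_b yields boundary maps like those in Figs 7 and 8, and, as noted above, the same scan performed with Chen's spectral approach shows a much larger boundary variation than Type-2 SeqMM.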
More interestingly, the amplitude of variation below the threshold (2 Mb) is much larger than the one after the threshold, which is the same as in our model. Biologically, the threshold value should be highly related to the TAD properties. This is because when the band sizes N b of our multiscale matrices are smaller than the size of TADs, local interactions within TADs are removed from our models, resulting in a much larger variation in predicted TAD boundaries. However, when the band size is larger than 2Mb, all TAD-related local interactions will be considered, thus a much consistent TAD boundaries should be expected. Stated differently, since TADs are mainly determined by local interactions within the 2Mb band region, the calculated TAD boundaries should always be the same for multiscale matrices with N b larger than 2Mb. In this sense, our Type-2 SeqMM is much more robust and reliable than Chen's method [36] as a much smaller variation is observed in our model when N b is larger than 2Mb. Mathematically, in Chen's spectral method, the global scale clustering is iteratively used to subdivide contact matrix or matrix region into two subregions until the algebraic connectivities within the submatrices are all smaller than certain threshold. Therefore, this method optimizes only spatial distances between different loci. Sequence-based multiscale modeling for Hi-C data analysis Further, even with the difference between the two models, both methods capture the local minimum of a TAD summation function. We consider the 100Kb resolution Hi-C matrix for chromosome 22 from IMR90 cell line. The band size N b is chosen as 60, which is amount to 6 Mb in genomic sequence. We summarize the contact matric values along the direction that is perpendicular to the matrix diagonal. Results are shown as the black lines in Fig 10(a) and 10(b). The TAD boundaries from Chen's method and our Type-2 SeqMM are illustrated by blue and red lines. It can be seen that nearly all of these lines are located at the local minima of the summation function. More interestingly, the two methods share many common TAD boundaries. This indicates that the situation illustrated in Fig 3 does not widely exist. This can also be confirmed from the behavior of off-diagonal values. Usually, the off-diagonal values decrease very quickly outside the TAD regions, meaning the distance between a locus from a TAD and a locus outside this TAD is usually very large. Conclusion In this paper, we discuss a sequence-based multiscale clustering model for biomolecular data analysis. Biomolecules and their complexes are hierarchical structures made from one or several polymer chains. With the sequential information embedded in these polymer chains, biomolecular data are fundamentally different from the general point cloud data. Traditional clustering methods derived from point cloud data, fall short when sequential information Sequence-based multiscale modeling for Hi-C data analysis matters. To overcome this problem, we propose a sequence-based multiscale model for biomolecular structure analysis. We generate a series of structural matrices by gradually and systematically removing the short-range or long-range interactions. These new matrices focus on different sequential scales and their clustering has different biological interpretations. Two SeqMMs have been applied to Hi-C data analysis. We find that the genomic compartments only relate to the global scale information. 
The removal of a diagonal band region as large as 10 Mb has very little influence on the final compartment results. Further, we study TADs with our local scale models. We find that when the sequential scale is small, a tiny variation of its value will result in great changes in the TAD boundaries. However, when the scale value is larger than a threshold value, the TAD boundaries become very consistent. This threshold value is highly related to the sizes of TADs. Interestingly, our method is much more robust than a previous spectral clustering method in the TAD analysis.
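As a footnote to the Fig 10 discussion, the summation profile used there (contact values summed along the direction perpendicular to the matrix diagonal, out to the chosen band size) can be computed roughly as follows. The window half-width w = 60 mirrors the 6 Mb band used in that figure, and the exact windowing convention is an assumption of this sketch.

```python
import numpy as np

def tad_summation_profile(M, w=60):
    """s[i] sums the contacts M[i-k, i+k] along the anti-diagonal through
    locus i, i.e. perpendicular to the main diagonal, out to half-width w."""
    n = M.shape[0]
    s = np.full(n, np.nan)
    for i in range(w, n - w):
        s[i] = sum(M[i - k, i + k] for k in range(1, w + 1))
    return s

def local_minima(s):
    """Indices where the profile dips below both neighbours (candidate boundaries)."""
    return [i for i in range(1, len(s) - 1) if s[i] < s[i - 1] and s[i] < s[i + 1]]
```

Plotting this profile and marking its local minima gives curves of the kind shown as the black lines in Fig 10, against which the TAD boundaries from both clustering methods can be checked.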
4,753
2017-07-12T00:00:00.000
[ "Computer Science" ]
Cross-talk between the HPA axis and addiction-related regions in stressful situations Addiction is a worldwide problem that has a negative impact on society by imposing significant costs on health care, public security, and the deactivation of the community economic cycle. Stress is an important risk factor in the development of addiction and relapse vulnerability. Here we review studies that have demonstrated the diverse roles of stress in addiction. Term searches were conducted manually in important reference journals as well as in the Google Scholar and PubMed databases, between 2010 and 2022. In each section of this narrative review, an effort has been made to use pertinent sources. First, we will provide an overview of changes in the Hypothalamus-Pituitary-Adrenal (HPA) axis component following stress, which impact reward-related regions including the ventral tegmental area (VTA) and nucleus accumbens (NAc). Then we will focus on internal factors altered by stress and their effects on drug addiction vulnerability. We conclude that alterations in neuro-inflammatory, neurotrophic, and neurotransmitter factors following stress pathways can impact related mechanisms on craving and relapse susceptibility. Table 1 Overview of neurotransmitters alteration following stress and addiction in the reward system. DA GABA Glu AEA NE 5-HT Orexin GABA VTA GABA neurons, in general, provide local inhibition of VTA DA neurons. VTA GABA neurons, as a mediator of reward and aversion, are also involved in addiction, depression, and other stress-related disorders [17]. For instance, opioids hyperpolarized the VTA GABA neurons and disinhibited the VTA DA neurons [18]. CRF enhances the firing rate of dopaminergic and GABAergic neurons in the VTA [19]. Jennings et al. have illustrated that the footshock or a footshock-associated cue attenuated the firing rate of GABAergic neurons in the VTA, which transmit to the GABA neurons of the BNST. GABA afferents of BNST can inhibit the VTA GABAergic neurons and disinhibit the VTA DA neurons [20], while in an acute footshock stress situation, the BNST glutamatergic neurons showed an increment in firing rate, excited the VTA GABA neurons, and suppressed the dopaminergic firing rate [17]. As shown in acute stress, which increases the firing of VTA GABA neurons almost immediately, its lasting effects are more complicated because the plasticity of GABAergic synapses onto dopaminergic neurons is affected over a longer period of time. The GABAergic plasticity is thus lost over the following days, which removes the inhibitory effect of GABA on VTA DA firing [17]. Chronic unpredictable mild stress (CUMS)-induced depression results in a decrease in inhibitory synapse outputs, excitability, and excitatory synaptic reception in the NAc. Because the NAc contains GABAergic neurons, all of these attenuated changes in the subcellular compartments of GABAergic neurons result in the dysfunction of the NAc during chronic stress, which leads to major depression [21]. The basal level of extracellular GABA in the NAc increases 3 weeks after discontinuing treatment with cocaine injections [22]. Additionally, there is evidence for reciprocal presynaptic modulation between GABA, DA, and glutamate in the NAc, implicating altered GABA transmission in mediating the cocaine-induced changes to DA and glutamate transmission [23] (Table .1). Based on the above-mentioned studies, we can conclude that stress decreases the level of GABA in both the VTA and NAc, which are reversed by addiction ( Fig. 
1A and B). Therefore, drug-seeking behavior may also be influenced by changes in GABA levels, that needs further investigation. Glutamate Previous research has looked at other stress-induced changes in VTA DA neurons after drug abuse, such as hyperexcitability [24,25], upregulation of AMPAR subunit glutamate receptor 1 (GluR1), and NMDAR1 subunit. Acute swim stress increases the AMPA/NMDA ratio of excitatory synapses as much as drug abuse, which has a similar impact on VTA DA neurons [26,27], which requires GRs, glutamate A1 receptor (GluRA1), and NMDARs activation [26,27]. Western blot data showed social stress-induced cocaine cross-sensitization via augmentation of the VTA GluRA1 content [28]. Injections of CRF-receptor-2 (CRFR2) antagonists, but not CRFR1 antagonists, in the VTA area reduced glutamate and DA release and mitigated drug relapse [29]. The dopaminergic system is believed to mediate compulsive drug use at the level of the NAc; in contrast, the glutamatergic system's activity at the level of the NAc is mostly responsible for relapse after drug extinction [30]. Following stress, synaptic changes in the NAc shell reflect an enhancement of AMPAR-mediated currents but not of NMDAR miniature EPSCs. Stress-induced reinstatement of drug-seeking behavior in animals and relapse in humans may be related to altered processing of information via the plasticity of excitatory inputs [31]. Another study found that stress-induced glutamate-dependent DA enhancements in the NAc core which leads to the vulnerability to drug abuse [32]. Enhancement of glutamate release induced by conditioned cues and drug exposure in the NAc engages synaptic plasticity responsible for drug-seeking behavior [33] (Table .1). Overall, it is recognized that stress increases the Fig. 1. A schematic diagram of the brain reward circuit involved in addiction. The hippocampus, PFC, NAc, and amygdala receive projections from the VTA. In all reward and stress regions, GRs are present. A) As a result of the interaction between stress and the reward system, the LH-orexin neurons modulate the VTA DA neurons. VTA receives projections from BNST. PFC glutamatergic neurons communicate with VTA, DA, and NAc neurons. NAc, the hippocampus, and the BNST are modulated by glutamatergic neurons in the amygdala. NAc receives glutamatergic projections from the hippocampus. VTA and NAc receive serotonergic and norepinephrine projections from DRN and LC. B) The alteration of the reward system following addiction. The serotonergic pathways from DRN/LC to VTA and NAc are increased. The NE is also increased in the NAc. The GABA pathway from NAc to VTA was increased, as was the LH-oexin pathway to VTA. The thick arrow depicts hyperactivity, while the thin arrow depicts hypoactivity. Created with BioRender.com. expression of GluR1 and AMPA receptors in the VTA and NAc, which increases drug craving (Fig. 1). The HPA axis and addiction Stress is a physiological reaction to a physical or physiological threat [1]. In a stressful situation, stress signals from different regions such as the PFC, amygdala, and hippocampus are projected to the parvocellular neurons in the paraventricular nucleus (PVN) of the hypothalamus [34]. CRF stimulates adrenocorticotrophin (ACTH) hormone from the anterior pituitary, which subsequently triggers the secretion of cortisol/CORT from the adrenal glands. In chronic stress, the HPA axis is highly activated, and GCs released abnormally from the adrenal glands [35]. 
Cross-talk between the stress system and addiction at the three levels of the HPA axis has been reported in previous studies [36]. The pituitary gland, ACTH and addiction The Pro-opiomelanocortin (POMC) system, ACTH, and β-endorphin modulate reward circuits in the brain and are involved in addictive behavior [37,38]. Smokers exposed to stress within four weeks of their relapse history had lower levels of ACTH, β-endorphin, and cortisol [31]. The β-endorphin level increased after alcohol consumption in the acute stage, but the negative feedback of cortisol on β-endorphine reduced the β-endorphine amount in the long term. ACTH is the product of β-endorphine cleavage. As a result, the reduction in ACTH was followed by a decrease in β-endorphins [38]. Chronic stress also increased the activity of the -melanocyte-stimulating hormone (-MSH) and melanocortin-4 receptor (MC4R), which resulted in a decrease in excitatory dopaminergic synapses on DA receptor type 1 (D1R)-expressing neurons in the NAc and the development of anhedonia [37]. The adrenal gland, glucocorticoids and addiction At the third level, GCs play an important role in arousal, cognition, mood, immunity, and inflammatory reactions [39]. Moreover, studies have shown that there is a common pathway between self-administration of drugs and the release of GCs in an acute stressful situation [35]. Preclinical studies have revealed that the GCs antagonists administration in the basolateral amygdala (BLA) prevents stress-induced drug relapse [40]. Glococorticoid receptors (GRs) are expressed in the hippocampal formation, PFC, amygdala, and NAc, which are related to neural plasticity in drug abusers [41][42][43]. GRs in the NAc region play an important role in drug relapse in stressful situations by reducing DA clearance in this region [43]. In mice with knocked-out GRs genes, a reduction in firing of the VTA DA neuron and cocaine self-administration were observed [44]. The GRs of the hippocampus have a role in spatial memory formation that is induced by the Ca 2+ /calmodulin-dependent protein kinase II (CaMKII) and brain-derived neurotrophic factor (BDNF)-cAMP response element-binding protein (CREB) pathway [45]. Both drug abuse and GCs could have been inducing LTP in the ventral hippocampus, which relates to stress and emotional processing [46,47]. Additionally, Koenig and Olive demonstrated that GCs synthesis were blocked prior to ethanol consumption, which prevented further consumption [48]. It is clear from the aforementioned lines that stress is linked to increased HPA activity and GCs release. Therefore, GCs have a role in drug-related learning and memory [49] and also have a reciprocal relationship with drug susceptibility in drug consumers [50,51]. Below, we have discussed how certain types of drug addiction and the HPA axis are related. HPA axis and alcohol addiction It has been demonstrated that stress increases the probability of alcohol seeking in both healthy and addicted people [52]. Acute alcohol exposure whether supplied voluntarily or by an experimenter stimulated the production of CORT and ACTH, while prolonged treatment sufficient to cause dependency resulted in a dampened neuroendocrine state [53]. In people with severe alcohol use problems, protracted withdrawal and high levels of arousal can lead to HPA axis, GCs, and PFC dysfunction (AUDs). According to research, the HPA axis may be dysfunctional with binge/heavy drinking, and this is linked to non-dependent people's drive to drink [54]. 
Maternal separation stress (MS) mice showed a notable predilection for ethanol even at low dosages (0.1-1%), according to research on ethanol drinking behavior [55]. HPA axis and nicotin addiction Stress makes it more difficult to resist the urge to use or seek out nicotine and intensifies the satisfaction that follows consumption [9]. Additionally, it has been discovered that human participants frequently attribute their prolonged nicotine abuse to stress [56]. During nicotine withdrawal, anxiety-like behavior and dysphoria (a negative and unpleasant affective state) are also connected to CRF [57]. Following long-term nicotine usage, abrupt withdrawal led to the HPA axis' hyperactivity [58]. HPA axis and opioid addiction The group with maternal deprivation (MD) exhibited a greater level of CORT. In all of the brain regions, including the hippocampus and the NAc, MD rats' expression of the BDNF and GR genes was shown to have significantly decreased. Additionally, MD rats showed a substantial increase in the expression of the opioid receptor in all of the brain regions. According to previous findings, MD causes changes in the way the HPA axis works, the amount of BDNF in the body, and the opioid receptor system that make people more susceptible to morphine as adults [59]. Research on lab animals has linked the development of opiate self-administration behaviors and the progression to opiate dependency to the deregulation of numerous brain stress-responsive systems and the HPA axis [60,61]. Following the expression of conditioned place aversion (CPA), CORT levels rose. Prior to naloxone, pre-treatment with the selective CRFR1 antagonist CP-154,526 reduced the extinction duration and inhibited morphine-withdrawal-induced unpleasant memory consolidation [62]. VTA: stress and addiction Dopaminergic neurons, which make up 65% of the VTA, are part of the reward prediction [17]. Recent studies revealed that CRFR1 and CRFR2 are important in the VTA response to social stress [63], and later cocaine "binge" self-administration was facilitated by these receptors' existence [64]. Intermittent stressors have a lot of effects on different dimensions of substance consumption, which are mediated via VTA DA system activation, like psychomotor stimulant sensitization, conditioned place preference (CPP) enhancement, augmentation of cocaine self-administration, amphetamine, heroin, and cocaine seeking relapse [65]. The activity of VTA DA neurons participates in cognition, motivation, and locomotor activity, and it is widely implicated in rewardseeking such as drug abuse, and brain self-stimulation [66]. Morphine and ethanol have a reinforcing effect on VTA DA neurons by activating μ-opioid receptors [67,68]. It has been shown that VTA DA neuronal activity decreases in withdrawn rats after chronic morphine exposure [69] or ethanol administration [70]. NAc: stress and addiction VTA sends the DA projections to NAc [71], which is another critical region of the reward circuit. The NAc as a major part of the ventral striatum is composed of GABAergic medium spiny neurons (MSNs) [72] which play an important role in responding to rewarding or aversive stimuli [73]. MSNs in the medial NAc shell inhibit the medial VTA DA neuron, whereas the lateral NAc shell neurons inhibit the VTA GABAergic interneurons and disinhibit the DA neurons. Hence, in a reverse direction, the DA neurons return to lateral NAc shell neurons and activate D1Rs [74]. D1R-MSNs in NAc convey reward signals, while D2R-MSNs encode aversion [75]. 
Drug consumption for a long time or exposure to chronic social defeat stress play an important role in the plasticity of MSN neurons in NAc, which is mediated via upregulating the D1Rs on MSN neurons and downregulating the D2Rs of these neurons in resilient animals [76]. Changes in NAc caused by stress can stimulate drug-seeking behavior as well as synaptic changes in the level of VTA projection and the lateral NAc shell [77]. Prenatal stress increased D2Rs expression in adult offspring's NAc; however, D2Rs were reduced in rats given nicotine, indicating that stress increased vulnerability to nicotine addiction [78]. In addition, restraint stress increased the response to drugs by increasing DA release in the NAc core [32]. Collectively, changing the reward system, especially in response to stress, may result in drug-seeking behaviors (Fig. 1). Endocannabinoid system The endocannabinoid system (ECS) as a neuromodulatory system has a role in CNS development, synaptic plasticity, and the response to endogenous and exogenous threatening factors [79]. ECS presence in corticolimbic structures like the PFC, amygdala, and hippocampus has an important role in stress and anxiety-like behavior in adults and exerts this effect by regulating the HPA axis [80]. In a normal state, the HPA axis activity is constrained by the basal level of anandamide (AEA) in the BLA, but during stressful situations, CRF signaling coordinates a breakdown of tonic AEA activity to encourage a state of anxiety. When the effect of ECS has vanished, the HPA axis escapes constraint and GCs concentration increases [81]. AEA is an endogenous ECS that is synthesized and released by postsynaptic terminals, activates cannabinoid receptor type 1 (CB1Rs) in presynaptic terminals, and suppresses presynaptic neurotransmitter release [82]. Stress has different effects on the ECS depending on the brain region and duration of exposure. Both acute and chronic stress generally cause a reduction in the content of AEA in the brain tissue [83]. Maternal separation disrupts emotional memory formation and increases cannabinoid sensitivity by altering CB1R signaling [84]. CUMS could induce ECS signaling deficits in the NAc by impairing CB1Rs function [85]. Dysregulated neural plasticity, increased stress reactivity, unpleasant affective states, and cravings that fuel addiction are all caused by impaired ECS signaling [86]. Stress-induced alcohol consumption is influenced by ECS signaling, but not stress-induced relapse after quitting. Similar inconsistencies with cocaine have been discovered, where CB1R antagonistism does not prevent footshock-induced resumption of cocaine-seeking behavior under stress [87], but it has been discovered to stop the resumption of cocaine-seeking behavior brought on by swim stress [88] or by administering CRF directly [87]. Additionally, CB1R activation speeds up the VTA DA neurons' rate of firing, which thus increases drug seeking and reinforces marijuana's addictive properties [89]. Since ECS signaling was previously mentioned, it is plausible that the stress modality itself could have varied impacts on this situation's ECS signaling, with ECS signaling potentially being critical for some forms of stress-induced cocaine reinstatement. Nicotine self-administration reduced baseline VTA dialysate oleoyl ethanolamide (OEA) levels and increased AEA release during nicotine intake [90]. Chronic ethanol exposure and traumatic stress have an impact on cannabinoid components in limbic regions [91]. 
It is likely that CB1R antagonists have antagonizing effects on alcohol and nicotine reward due to a diminished ability to increase NAc DA release. Blockade of CB1Rs, specifically in the VTA and NAc, reduces alcohol consumption [86] (Table .1). Norepinephrine Locus coeruleus (LC) is the region containing the norepinephrine (NE) neurons. In confronting the stressor, LC receives the projections from the paragigantocellularis nucleus of the brainstem, which have previously been enriched via CRF fibers from PVN, and BNST which led to the anxiety-like behavior [92][93][94]. BNST also receives projections from NE neurons of the LC [95]. Multiple kinds of adrenergic receptors are expressed on VTA DA neurons, which might mediate the interaction between VTA DA neurons and LC-NE inputs. Activation of VTA α1 and β3 adrenergic receptors reversed social avoidance behavior in previously identified susceptible mice and normalized the pathological hyperactivity of VTA→NAc DA neurons. Based on this phenomenon, it appears that the LC-NE system and the VTA→NAc reward circuit might be synaptically relayed by the adrenergic receptors α1 and β3 [96]. It was discovered that resilient mice released more NE from their LC neurons that project towards the VTA, suggesting that NE plays a role in mood regulation in the VTA [97]. Findings revealed that α2 adrenergic receptor agonists prevent stress-induced craving in human cocaine-dependent subjects. Additionally, it has been shown that β2 adrenergic receptor activation of a CRF-releasing projection from the BNST to the VTA is necessary for stress-induced cocaine use. In this regard, the findings show that β adrenergic receptor blockade in the BNST prevents footshock-induced reinstatement in rats [98]. Social isolation is associated with an increased response of DA and NE in the NAc and increased sensitivity to ethanol-mediated stimulation of NAc DA and NE release [99]. However, chronic stress can also attenuate the LC firing rate [100] (Table .1). Chronic nicotine self-administration lowered PVN NE release produced by footshock stress in rats, which could modify CRF neurons in this region and lead to stress hypersensitivity during chronic nicotine use [101]. Acute nicotine increased stress-induced increases in plasma corticosterone and epinephrine, implying that the stress-relieving effects of cigarette smoking are not mediated by a decrease in peripheral sympathetic nervous system activation [102]. CRFR1 and α2 noradrenergic receptor may have a role in the heightened anxiety-like behavior observed following acute heroin withdrawal in rats, potentially through increasing the release of NE in stress-related brain areas [103]. Psychological stress may increase cocaine-seeking behavior in chronically addicted individuals or animals, and medications that reduce NE transmission may prevent stress-induced reinstatement of usage [104]. As a result, NE alters during chronic stress, intensifying the effects of stress and contributing to increased craving (Fig. 1). Serotonin Serotonin (5-HT), which is released by the dorsal raphe nucleus (DRN), has an important role in psychiatric disorders [105]. The previous consumption of psychostimulants or opioids accompanied by chemical or physical stressors elevates the sensitivity of 5-HT neurons in DRN by increasing the GABA A receptors on these neurons [106]. In stressful circumstances, the CRF influences the DRN-5-HT system by modulating the serotonergic and GABAergic neuron dendrites [107]. 
Because the CRF impression on GABAergic neurons is more frequent, the electrophysiological studies indicated that the 5-HT neurons are inhibited indirectly by CRF and CRFR1 activity [105,106]. There is a complex regulation of DA release within reward circuitry that can be attributed to the multiplication of 5-HT receptors and their location on different neurons. 5-HT receptor subtypes exert excitatory effects on DA release in the NAc as well as discharge activity in VTA DA neurons that project to the NAc in substance abuse [105]. Serotonergic hypofunction, which is associated with dysphoric mood, can alter the susceptibility to stress-induced drug reinstatement [108]. Similarly, the CRF of the amygdala, by decreasing the 5-HT release in addicted rats, may be involved in aversion that is accompanied by drug withdrawal [105]. Stress reduces the reward responses of DRN 5-HT neurons and VTA DA neurons, resulting in anhedonia, a key symptom of depression. The negative effect of stress on reward responses suggests that some reduction in the reward responses of both 5-HT and dopaminergic neurons might underlie stress-induced anhedonia [109]. By decreasing 5-HT1B receptor stimulation, stress decreases 5-HT tone in the NAc of male mice to promote aversion and potentiate cocaine preference [110]. CRFR1 activation reduces 5-HT release and activates coping behavior in rats exposed to acute swimming stress [105]. However, when exposed to repeated swim stress, CRF aggregation impresses on CRFR2 and excites the 5-HT neurons, revealing passive coping behaviors such as avoidance, denial, self-blame, and immobility [105,111] (Table .1). As a result, low levels of 5-HT contribute to substance abuse initiation and relapse in acute stress conditions, whereas CRF increases 5-HT release from DRN neurons in chronic stress conditions [105]. Because 5-HT levels in reward circuits decrease with stress and increase with addiction, one reason for the proclivity to become addicted could be to compensate for the decrease in 5-HT (Fig. 1). Orexin Orexin (OX) neurons in the lateral hypothalamus (LH) secrete two neuropeptides, including OXA and OXB [112]. Evidence suggests that OXA signaling is more involved in reward seeking, whereas OXB signaling is associated with arousal and stress responses [113][114][115]. In some cases, such as chronic stress, compulsive drug craving, and chronic relapse, orexin neurons send projections to the hypothalamic stress systems, which activate CRF-containing neurons [112,116,117]. CRF stimulates orexin release in the LH, which is a factor involved in the activation and maintenance of arousal that is associated with the stress response and activates in response to cocaine, morphine, and their cues [118,119]. The LH is very important in motivating cocaine consumption by acting on the orexin factor [120]. Hyperarousal and withdrawal physical reactions observed after drug consumption increase orexin expression in LH neurons [121]. CRF terminals make direct synaptic contacts with orexin neurons, which express the CRF-R1/2. CRF and orexin have well-defined roles in drug-seeking behavior by regulating PFC-mediated addiction-like behaviors [122]. The induction of a stress-like state by orexin results in the reinstatement of cocaine seeking [123]. 
OXA projections into the paraventricular thalamus (PVT) induce cocaine-seeking behavior, which is perhaps promoted by CRF function in mediating stress and anxiety-like responses, which are well-known factors implicated in cocaine seeking in abstinent individuals [124]. Furthermore, the OXA antagonist reduces cocaine self-administration and stress-induced relapse [125]. The role of orexins in reward is partly explained by activating dopaminergic neurons in the VTA. Additionally, intra-VTA injections of orexin increase the levels of extracellular DA in NAc [126]. Cocaine exposure potentiates the synaptic plasticity via OXAR1 on VTA DA neurons and increases the firing rate of these neurons [127]. Orexin neurons project abundantly to the VTA and NAc and are activated by cues indicating food and drug rewards. OXA injections induce drug-seeking behaviors, suggesting that they are also involved in the reinstatement of reward-seeking behaviors [126]. In the NAc, chronic drug exposure has been found to result in long-term up-regulation of OXBR [115] (Table .1). Based on previous studies, chronic stress often reduces the number of orexin-induced action potentials in DRN-5-HT neurons [128]. Stress reduces orexin, which, in turn, reduces 5-HT activation and leads to stress-induced drug craving. Chronic stress also leads to decreased prepro-orexin mRNA levels, reducing the number of orexin neurons in the VTA [129]. Orexin neurons also send many projections to the DRN [130]. However, recently, it has been shown that, following the stress condition, the neuropeptide S (NPS) of the brainstem also activates the LH orexinergic neurons [131]. Therefore, previous findings show that stress alters orexin secretion, which can mediate drug craving (Fig. 1). The BDNF BDNF is distributed in the cerebral cortex, hippocampus, amygdala, and hypothalamus [132], and it is contributing to neurogenesis and synaptic formation, neuronal differentiation, migration, and survival [133]. Early life separation stress increased the CRF and tyrosine hydroxylase (TH) in the hippocampus, NAc, and PVN, resulting in a reduction in the BDNF content in these regions in the offspring of mice [134]. By regulating the HPA axis, BDNF has an anxiety-like effect and is involved in neuroendocrine homeostasis [135]. There has been considerable interest in the effect of BDNF on reward-related neuronal circuitry. BDNF has been implicated in mediating synaptic plasticity associated with cocaine abuse as well as cocaine-induced behaviors, namely CPP, behavioral sensitization, and cocaine self-administration [136]. Cocaine exposure increases BDNF gene expression in the VTA, which has been linked to increased drug seeking [137]. Following intermittent social defeat stress and opiate consumption, μ-opioid receptors expression increased BDNF expression [138,139]. Episodic social defeat in rats with a cocaine abuse background increases the BDNF content of the VTA region [140,141]. While in continuously stressed rats, DA and BDNF content showed a reduction in the VTA that has a suppressive effect on cocaine seeking [142]. These conflicting findings may depend on the time of drug consumption, as studies showed that in the VTA, BDNF was significantly elevated after 10-15 days of withdrawal but not on withdrawal day 1 [143]. In the PFC, significant increases in BDNF expression were observed on withdrawal days 8 and 14, but not on withdrawal days 1 or 3 [144]. 
Therefore, cocaine administration regulates BDNF levels in a complex manner that varies depending on the addiction phase (e.g., acquisition/maintenance; early/late withdrawal). It has been reported that maternal separation, via decreased BDNF exon IV levels in the PFC area, leads to increased cocaine exposure in adolescent mice [145]. These mice showed a susceptibility to alcohol consumption in stressful situations in adulthood [134]. Chronic ethanol treatment reduced BDNF mRNA levels in the mPFC, while forced swim stress (FSS) increased ethanol consumption [146]. Chronic stress inhibits BDNF signaling in the PFC, which is activated by cocaine and plays an important role in the development of cocaine side effects [147]. As a result, chronic stress and long-term drug consumption reduce BDNF, while BDNF increases following acute stress. Previous studies found that cocaine self-administration in rats induced a transient increase in BDNF in the NAc. Endogenous BDNF enhancement in the NAc subsequently reduced cocaine self-administration and attenuated relapse [148]. Thus, following stress or drug abuse, BDNF may undergo region-dependent alterations that are necessary to mediate or prevent cocaine-seeking behavior. The neuro-inflammatory factors There is substantial evidence that chronic stress is linked to inflammatory processes, especially those governing regulation of the inflammatory cascade [149]. Stress and GCs sensitize the innate immune responses to proinflammatory cytokines [150]. Furthermore, stress can activate microglia and act as an enhancer of microglial and glial mediators, contributing to the development of drug abuse [151,152]. Glial mediators include tumor necrosis factor-alpha (TNF-α), interleukin 1 beta (IL-1β), interleukin 6 (IL-6) and chemokines [153]. IL-1β and IL-6 stimulate the HPA axis and raise ACTH and CORT levels [154,155]. TNF-α release from striatal microglia has been able to downregulate the AMPAR subunits in the striatum, a critical region for the motor and reward systems that receives glutamatergic and dopaminergic inputs [156]. It has been reported that the expression of the anti-inflammatory cytokine IL-10 in the NAc area reduces morphine-induced glial activation and prevents morphine CPP relapse in adulthood [157]. Furthermore, Ibudilast (an anti-inflammatory drug), used as an antagonist of glial pro-inflammatory signaling and of TLR4 (an initiator of inflammatory cytokine production), reduced morphine-induced pro-inflammatory cytokine release, morphine withdrawal behavior, and morphine-induced DA efflux in the NAc [158,159]. From the above lines of evidence, it can be concluded that anti-inflammatory factors or antagonists of inflammatory factors during stress can reduce the desire for drug abuse by affecting the reward pathway (Table 2). Conclusion There is considerable evidence from preclinical and clinical studies that indicates an association between highly stressful situations and addiction vulnerability [164,165]. Stressor exposure causes long-lasting neuroendocrine and physiological changes in brain regions involved in learning, motivation, and stress-related adaptive behaviors, which are briefly summarized below as concluding points. 1) Stress increases the HPA axis's activity and GCs release. It also enhances drug-related learning and memory and increases the susceptibility to drug use. 2) VTA DA neuronal activity has been reduced after chronic morphine withdrawal.
However, this effect has been reversed by restraint stress, which increased the DA release in the NAc core. 3) Stress seems to change the GABA content in both the VTA and NAc regions. GABA levels have decreased in the NAc and increased in the VTA following stress, which is reversed by addiction. 4) The ECS plays a similar role in addiction, depending on the type of stress, but it has an inhibitory effect on the HPA axis. 5) Following stress, an increase in the content of glutamate and its receptors (GluR1 and AMPARs) in the NAc, as well as an increase in the VTA DA level, can lead to drug cravings. 6) The reduction in 5-HT levels in reward circuits has been seen in stress conditions, while it has increased in addiction. 7) Chronic stress has increased the NE, which has a role in stress-related drug reinstatement. 8) CRF and orexin increments in stressful conditions increased drug-seeking behavior by influencing the PFC to regulate addiction-like behaviors. 9) Endogenous BDNF elevation in the NAc subsequently reduced cocaine self-administration and attenuated relapse. 10) The anti-inflammatory cytokine IL-10 expression in the NAc area attenuated the morphine-induced glial activation and prevented the morphine CPP relapse. The interaction of the HPA axis with other factors or neurotransmitters may open up new avenues for neuroscientists or neuropsychiatrists to discover therapeutics that can be used in addicted subjects. Furthermore, we suggest that more investigations are required to clarify the exact mechanism of stress and its interaction with each factor to affect addiction. Ethics approval All procedures performed in studies were in accordance with the ethical standards of the ethical committee of Kerman University of Medical Sciences (Ethical approval number: IR.KMU.REC.1397.610, Reg. No. 97001069). Author contribution statement All authors listed have significantly contributed to the development and the writing of this article. Data availability statement Data will be made available on request. Declaration of interest's statement The authors declare no conflict of interest.
6,953.8
2023-04-01T00:00:00.000
[ "Biology" ]
Automated Design Space Exploration with Aspen . Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach. Introduction The design of next generation Exascale computer architectures as well as their future applications is complex, uncertain, and intertwined.Not surprisingly, modeling and simulation play an important role during these early design stages as neither the architectures nor the applications yet exist in any substantive form.Consequently, relevant performance models need to describe a complex, multidimensional design space of algorithms, application parameters, and architectural characteristics.Traditional performance modeling tools made this process difficult and resulted in a tendency to use simpler, less accurate models. In our earlier work, we designed Aspen (Abstract Scalable Performance Engineering Notation) [1], a domain specific language for structured analytical performance modeling, to allow scientists to construct, evaluate, verify, compose, and share models of their applications.Aspen specifies a formal language and methodology that allows modelers to quickly generate representations of their applications as well as abstract machine models.In addition, Aspen includes a suite of analysis tools that consume these models to produce a variety of estimates for computation, communication, data structure sizes, algorithm characteristics, and bounds on expected runtime.Aspen can generate all of these estimates without application source code or low-level architectural information like Register Transfer Level (RTL).This ability to cope with high levels of uncertainty distinguishes Aspen from simulators, emulators, and other trace-driven approaches. 
In fact, Aspen (and analytical modeling in general) is particularly useful at an early time horizon in the codesign process where the space of possible application parameters, algorithms, and architectures is too large to search with computationally intensive methods (e.g., cycle-accurate simulation) [2]. With this much uncertainty, application developers tend to identify important ranges of application parameters, rather than discrete values. Similarly, hardware architects may have identified a range of possible computational capabilities, but the machine characteristics have not been finalized. For example, feasible clock ranges may be dictated by the feature size and known well in advance of fabrication. Finding optima within these ranges transforms a typical performance modeling projection into an optimization problem. Key Contributions. To address this challenge of design space exploration, we have extended our Aspen language and environment with expressive semantics for characterizing flexible design spaces rather than single models. Specifically, we add three new language constructs to Aspen: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. The solver uses these ranges (along with other constraints) to evaluate the Aspen performance models and evaluate a user-defined objective function for each point in the design space. As we will show, this automated process can allow thousands of model evaluations quickly and with little sensitivity to the complexity of the performance model. The key contributions of this paper are as follows: (1) a description of Aspen's syntax and semantics for specifying resources, parameter ranges, and costs in the abstract machine model; (2) a formal problem description for four types of optimization problems derived from Aspen models; (3) a description of new Aspen analysis tools which consume Aspen models and explore the design space with a standard nonlinear optimization solver; (4) a demonstration of these new capabilities on existing Aspen models for 3DFFT, CoMD, and the Streaming Sensor Challenge Problem [3]. Related Work. In the space of analytical models, Aspen's approach to the abstract machine model is conceptually in between pure analytical models and semiempirical power-performance models based on direct measurement. Examples of the former include BSP [4] and Log variants [5,6] that focus strictly on algorithmic bounds. Examples of the latter include models based on performance counters or measurements [7][8][9][10][11][12] including proposed counters such as the leading loads counter [13]. Aspen is distinguished from these works in that it is capable of modeling machines and applications in more detail than the pure analytical models while obviating the requirement of the semiempirical approaches for an instrumented execution environment. Other related approaches are trace-driven and use linear programming for power-performance exploration, especially for searching the configuration space of dynamic voltage and frequency scaling [14,15] or making decisions under explicit hardware power bounds [16]. On the application side, our goals for the use of Aspen and the 3DFFT model are directly related to the Exascale feasibility and projection studies of Gahvari and Gropp [17], Bhatele et al. [18], and Czechowski et al. [19].
In terms of design space exploration itself, an automated approach is a well-studied topic.Hardware-focused studies are also common, although they typically focus on reconfigurable architectures [20][21][22], particularly in well-constrained compiler-based planning or system on a chip (SoC) designs [23][24][25][26]. Several works focus on the theoretical aspects of exploring design spaces.Peixoto and Jacome examine metrics for the high-level design of such systems [27].There are also works focusing on the abstractions [28] and algorithms for the search [29], environments where source code is available and modifiable [30], and specialized approaches for multilevel memory hierarchies [30].In general, these works have similar goals and overall function to DSE in Aspen, but they consider very different machine models (usually with much more certainty and detail than the Aspen AMM). Aspen Overview While a more detailed description of Aspen has been published elsewhere [1], we briefly provide an overview and illustrate its use on an example model for a 1D Fast Fourier Transform (FFT).Aspen's domain specific language (DSL) approach to analytical performance modeling provides several advantages.For instance, Aspen's control construct helps to fully capture control flow and preserves more algorithmic information than traditional frameworks like BSP [4] and Log variants [5,6].Similarly, the abstract machine model is more expressive than frameworks that reduce machine specifications to a small set of parameters. The formal language specification forces scientists to construct models that can be syntactically checked and consumed by analysis tools; this formal specification also facilitates collaboration between domain experts and computer scientists.Aspen has also been defined to include the concept of modularity, so that it is easy to compose, reuse, and extend performance models. Furthermore, this specification allows scientists to include application specific parameters in their model definitions, which would otherwise be difficult to infer.With this feature, Aspen can help answering application-specific questions such as how does parallelism vary with the number of atoms?And, this type of approach also allows inverse questions to be asked, such as, given a machine, what application problem can be solved within the system constraints? Aspen is complementary to other performance prediction techniques including simulation [31,32], emulation, or measurement on early hardware prototypes.Compared to these techniques, Aspen's analytical model is machineindependent, has fewer prerequisites (e.g., machine descriptions, source code), and decreased computational requirements.This positions Aspen as an especially useful tool during the early phases in the modeling lifecycle, with continuing use as a high-level tool to guide detailed studies with simulators.Hence, the primary goal of Aspen is to facilitate algorithmic and architectural exploration early and often. Example: FFT. The FFT is a common scientific kernel and plays an important role in the image formation phase of SSCP [3], explored further in Section 5. 
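To make the flop and cache-miss bounds discussed in the following paragraphs concrete, the short Python sketch below evaluates both estimates for a single 1D FFT. It is only an illustration of the arithmetic: the cache geometry and the calibration constant a are assumed values, not figures from the paper, and in an Aspen workflow these quantities would come from the kernel clauses in Listing 1 rather than hand-written functions:

    import math

    def fft_flops(n):
        # Upper bound on floating point operations for an n-element
        # Cooley-Tukey 1D FFT: 5 * n * log2(n).
        return 5.0 * n * math.log2(n)

    def fft_cache_misses(n, Z, L, a=1.0):
        # Explicit-count form of the I/O bound: a * (n / L) * max(log_Z(n), 1),
        # with the cache capacity Z and line size L expressed in words.
        return a * (n / L) * max(math.log(n, Z), 1.0)

    n = 2 ** 20              # one-million-point transform
    Z = 32 * 1024 // 8       # 32 KiB cache in 8-byte words (assumed)
    L = 64 // 8              # 64-byte line in 8-byte words (assumed)
    print(fft_flops(n), fft_cache_misses(n, Z, L))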
Fortunately, FFT is also a well-studied algorithm, and tight bounds on the number of operations in an FFT are known. For an n-element Cooley-Tukey style 1D FFT [33], the required number of floating point operations is bounded by O(5n log2 n), with some implementations requiring only 80% of this upper bound [34]. The number of cache misses, Q, has also been bounded for any FFT in the I/O complexity literature (on any two-level memory hierarchy which meets the tall cache assumption [35]) as Q = O((n/L) max(log_Z n, 1)), where L is the cache line size in words and Z is the cache capacity in words. For sufficiently large n, the number of cache misses approaches Q = a (n/L) max(log_Z n, 1), where a is a constant [19,35] which translates the upper bound to an explicit count. Using the same variable names, these bounds roughly translate to two Aspen kernel clauses, as shown in Listing 1. The listing also highlights the use of Aspen traits to add semantic information to specialize the flops, indicating that they are double precision, complex, and amenable to execution on SIMD FP units. The trait on the second clause specifies that the memory traffic in this kernel is from the fftVolume data structure. The other variable, a, is a constant that arises from the nature of characterizing requirements by asymptotic bounds (e.g., big-O notation) [35]. Due to the complexity in modeling the memory hierarchy (e.g., from multilevel cache hierarchies, replacement policies) this type of constant is frequently measured using performance counters on an existing implementation of the algorithm to calibrate the model. It is a particularly common approach for characterizing memory traffic, even in the case of much simpler kernels, like matrix multiplication [36]. Modeling Methodology In order to facilitate the evaluation of optimization problems, Aspen has been extended with three new language constructs to increase expressiveness. User-Defined Resources. Prior work [1] with Aspen constrained modelers to a small set of predefined quantities of interest: flops, loads, stores, and messages. Since then, requests for modeling more exotic resources like system calls, allocation/deallocation, and more detailed modeling of system data paths (PCIe, QPI) have necessitated a more flexible system. The first addition to Aspen is the ability for custom resources to be defined at arbitrary points in the abstract machine model (AMM) hierarchy. For instance, integer operations can be defined at the core level and access to a centerwide, shared filesystem could be defined at the machine level. Resources may also define custom traits with optional arguments. All new definitions, however, must provide an expression for how the resource maps to time and how the traits commutatively modify or replace the base expression (the mapping when no traits are present). An example of the new syntax is shown in Listing 3. Note that the new conflict statement describes the sets of resources that cannot overlap. Furthermore, the AMM's assumptions of a completely connected socket topology and linear contention [1] are unchanged and apply equally to user-defined resources. Ranges. The next construct is the range, illustrated in Listing 2. The range or interval is a familiar concept to programmers, has implementations in most modern languages, and is fairly easy to express and reason about. More precisely, a range in Aspen is a closed, inclusive, connected, and optimal set of real numbers, S. A range that is closed and inclusive indicates that the interval contains lower and upper bounds l and u such that l ≤ x ≤ u for all x ∈ S, with l, u ∈ S and S a subset of the reals.
Optimal, in this case, means that the range should be as narrow as possible. Aspen also allows for the specification of an explicit default value. This default value provides a convenient way for modelers to encode the "common case." When left unspecified, the lower bound is used (by convention) in single analyses which do not consider ranges. Including Costs in the Abstract Machine Model. The second extension to Aspen includes the incorporation of several new types of costs into the abstract machine model: rack space, die area, static power, dynamic power, and component price. Each type of cost has rules for which components of the AMM hierarchy are applicable. However, all of these costs are optional. The only required cost is the specification of the time it takes to process a given resource. Available rack space, the simplest cost, is specified at the machine level and associated costs are defined per node in standard units. Total available die area is provided at the socket level and area costs are listed explicitly for all core, cache, and memory components. This allows, for instance, exploration of the tradeoff between die area spent on cache and the number of cores. Static power costs are specified by providing each component of the AMM hierarchy with an idle wattage. Dynamic power is similarly specified at each point in the hierarchy, but it is also split by resource. That is, for a given component, performing different operations may result in different dynamic power requirements. A trivial example of this difference is an AMM where the cost of a floating point operation exceeds the cost of an integer operation. Consider the example shown in Listing 3, where an AMM model for an Intel Sandy Bridge processor distinguishes between the power costs of a standard integer operation and the execution of the new advanced encryption instruction set. While this example may seem somewhat contrived with existing hardware, its inclusion as a feature is important in future-proofing Aspen against the general trend towards more specialized instructions and fixed-function units that may vary widely in energy consumption. These power costs also allow specifying constraints for maximum instantaneous power draw (i.e., highest wattage) and total energy consumption. Maximum power draw for an application is computed as the sum of all AMM component static costs and the largest of the sums of dynamic costs for each kernel: P_max = sum over m in M of idle_m + max over k in K of (sum over r in R_k of dyn_r), where M is the set of all components in the AMM, idle_m is the idle power draw of component m, K is the set of all kernels in the application model, R_k is the set of all resources required by kernel k, and dyn_r is the dynamic power cost of resource r. In the absence of an application model, the maximum power draw is given by an upper bound as the sum of static costs and the dynamic costs of all nonconflicting resources. Similar to Aspen's other assumptions, these power calculations represent a simplified model which neglects several physical factors including cooling costs and transitions between component idle/peak states. The Aspen tools already include the capability to produce bounds on predicted runtime by kernel clause [1], and the total energy cost of an application model is hence computed by the following: E = idle * t_total + sum over k in K of (calls_k * sum over c in C_k of t_c * dyn_c), where idle is the total system idle power, t_total is the total runtime, K is the set of all application kernels, calls_k indicates the number of calls to kernel k, C_k is the set of all clauses in kernel k, t_c is the runtime bound on clause c, and dyn_c is the dynamic power cost of the resource associated with clause c.
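As a rough illustration of how the two cost expressions above could be evaluated outside of Aspen, the following Python sketch computes peak power and total energy for a toy model. All component names, wattages, call counts, and runtimes are invented for the example, and each kernel is treated as a single clause:

    static_power = {"socket": 30.0, "dram": 8.0, "nic": 5.0}   # idle watts per component
    kernel_dynamic = {                                          # dynamic watts per resource
        "fft":  {"flops": 120.0, "loads": 40.0},
        "comm": {"messages": 15.0},
    }
    kernel_runtime = {"fft": 2.5, "comm": 0.5}                  # seconds per call (bounds)
    kernel_calls = {"fft": 100, "comm": 100}

    idle = sum(static_power.values())

    # Maximum instantaneous draw: all static costs plus the most expensive kernel.
    p_max = idle + max(sum(r.values()) for r in kernel_dynamic.values())

    # Total energy: idle power over the whole run plus per-kernel dynamic energy.
    t_total = sum(kernel_calls[k] * kernel_runtime[k] for k in kernel_runtime)
    energy = idle * t_total + sum(
        kernel_calls[k] * kernel_runtime[k] * sum(kernel_dynamic[k].values())
        for k in kernel_dynamic
    )
    print(p_max, "W peak,", energy / 1e3, "kJ")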
Nonlinear Optimization Solver Using these new ranges and costs, a variety of optimization problems can be derived from Aspen models. These optimization problems have the following form. (i) f(x) is an objective function which must be maximized or minimized, such as runtime, energy consumed, or problem size. (ii) x = (x_1, x_2, ..., x_n) is a vector of decision variables with upper and lower bounds, sometimes called free variables. These bounds are typically derived from a range construct. Some examples include the number of nodes, problem sizes, and clock frequencies. The number of decision variables, n, is known as the dimensionality of the problem. (iii) h_i(x) = 0, i ∈ 1, ..., p, is a set of equality constraints, which are arbitrary functions of the decision variables that must be equal to zero. (iv) g_j(x) ≤ 0, j ∈ 1, ..., m, is a set of inequality constraints, which are functions on the decision variables that must be less than or equal to zero. The difficulty of these optimization problems depends on several factors. In the best case, the constraint functions and the objective function are linear, and all of the decision variables are reals. This results in a traditional linear programming problem which can be trivially solved given the relatively low number of decision variables derived from an Aspen model. If, however, some decision variables are integers, the problem is a mixed integer-linear program and is NP-complete. Similarly, difficulty is increased if the objective function or any of the constraint functions is nonlinear (i.e., nonlinear programming). And, if the objective function is not differentiable, a large class of efficient gradient-based methods cannot be used. The current set of Aspen optimization tools relaxes all integer variables such that the typically generated optimization problem is a completely bounded, pure nonlinear program where the objective function may not be differentiable. An example of a relaxed integer variable might be the number of nodes (which, in practice, is easy to round to the nearest integer after optimization). Since the objective or constraints may be complex, derived expressions (e.g., projected runtime, energy costs, and operation counts), these functions may be nonlinear and nondifferentiable. Hence, all optimization problems are solved using a gradient-free improved stochastic ranking evolution strategy (ISRES) [37] algorithm from the NLopt package [38]. Because no feasible point may be known a priori, these are considered global (as opposed to local) optimization problems. Establishing the criteria for termination is not always straightforward. However, due to the relatively low dimensionality (ISRES scales to thousands of variables) of Aspen-generated problems, we select NLopt's time-based stopping criterion with a threshold of a few seconds. An interesting facet of this approach is that a user can constrain any combination of the parameters, leaving the objective function to include the remaining parameters. For example, in the Machine Planner scenario, the user defines the application model and constraints, general parameters of time to solution or power, and they use the design space exploration to search for the best combination of machine parameters. In another example, the Problem Size Planner, the user defines the machine parameters, constrains the same general parameters of time to solution or power, and then maximizes the application input problem that can be solved with that configuration.
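This formulation can also be reproduced directly against NLopt's Python bindings; the paper's tools emit C++ instead, and the two-variable model and power budget below are made-up stand-ins for an Aspen-derived objective and constraint rather than anything from the paper:

    import nlopt

    # Toy stand-in for an Aspen-derived problem: minimize projected runtime
    # over a (relaxed) node count and a clock frequency under a power budget.
    def runtime(x, grad):
        nodes, clock_ghz = x
        return 1.0e12 / (nodes * clock_ghz * 1.0e9)      # work / aggregate rate

    def power_budget(x, grad):                            # g(x) <= 0  ->  power <= 18 kW
        nodes, clock_ghz = x
        return nodes * (50.0 + 30.0 * clock_ghz) - 18000.0

    opt = nlopt.opt(nlopt.GN_ISRES, 2)                    # gradient-free, global
    opt.set_lower_bounds([1.0, 0.5])
    opt.set_upper_bounds([512.0, 3.5])
    opt.set_min_objective(runtime)
    opt.add_inequality_constraint(power_budget, 1e-6)
    opt.set_maxtime(5.0)                                  # time-based stopping criterion
    x_opt = opt.optimize([64.0, 2.0])
    print(x_opt, opt.last_optimum_value())                # round the node count afterwards

As in the Aspen tools, the relaxed integer variable (the node count) is simply rounded to the nearest integer after the optimizer terminates.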
Design Space Exploration Combined with the existing analysis tools, the new range and cost constructs enable the formulation of a vast number of optimization problems for design space exploration.Combinations of the number and type of Aspen models involved, the portions of those models that are fixed or free variables, the goal (maximization or minimization), objective function, and additional constraints rapidly grow out of control.To constrain this otherwise unwieldy variety, the tool interface for design space exploration is centered on four common scenarios, summarized in Table 1. Implementation Overview. The implementation of the tools, however, enables roughly the same workflow for each of the four scenario types, as depicted in the process diagram in Figure 1.This workflow has two main phases, problem formulation and optimization. First, depending on the scenario, one of the Aspen optimization tools is run.This tool consumes one or more Aspen model files as input and collects the relevant ranges from the model into the vector of decision variables, ⃗ x.Additional constraints such as time, energy, space, capacity, or price are specified via command line option.Also specified via the command line are nonstandard objective functions, which may include one or more parameters, derived capabilities, or weighted combinations of parameters and capabilities. Based on these inputs, the Aspen optimization tools generate a single C++ code file that drives NLopt's standard API.This generated code preserves the semantics of the original Aspen models such that variable names are consistent and the code is amenable to inspection and modification for special use cases. In the optimization phase, the generated C++ code is compiled and run.This code prints the value of the objective function at the optimum as well as the values of all of the decision variables.Or, in the case of unfeasible problems, it indicates that no optimum was found.It optionally generates a trace file that contains all the values of ⃗ x and ( ⃗ x) for each evaluation of the objective function for postprocessing and visualization. DSE Scenarios. In the following sections, we provide an overview of each scenario (and Aspen tool) in more detail and provide some pertinent example analyses.Note that, for these examples, we use relatively straightforward objective functions and only a handful of decision variables, but Aspen can handle problems of arbitrary complexity and dimensionality (given a reasonable solution timeframe). Parameter Tuner. The first optimization tool addresses application models with tunable parameters that have a significant impact on performance.While this is generally applicable to application-specific parameters, our motivating use case is a tiling factor.This type of factor (equivalent to blocking and chunking factors for our purposes) is quite common due to data-parallel decomposition and cacheblocking techniques. As a motivating example, we consider the DARPA UHPC Streaming Sensor Challenge Problem (SSCP) [3].In this challenge problem, dynamic sensor data are converted to an image and pushed through a multistep, data-parallel analysis pipeline.The image is split into tiles according to a tiling factor, tf, which specifies how many tiles to use in each dimension.The two primary phases of the pipeline are digital spotlighting and backprojection. 
The tf factor has a particularly interesting effect on total floating point operation count.Digital spotlighting kernels tend to require less work with smaller tiling factors (largely due to a requirement for fewer FFTs) while backprojection is more efficient at larger tiling factors.Choosing poor tf results in a potential for substantial unnecessary work (and, consequently, poor performance and low energy efficiency). In order to characterize this tradeoff with the Paramater Tuner, the Aspen model for SSCP encodes the tiling factor as a range: param tf = 32 in 16 .. 64 Combined with a command line argument for the resource of interest (e.g., flops, memory capacity), the Parameter Tuner generates a minimization problem with one bounded decision variable (tf) and an objective function that computes the total number of that resource required by the kernels in SSCP. Prior to this work, Aspen had the capability to plot resource requirements in terms of one or two variables [1]. Figure 2 depicts a standard resource plot annotated with a tick for the first 250 points where the objective function was evaluated, with the minimum found at 7.009e + 13 total flops at a tf of 34. We note two observations concerning Figure 2. First, each objective function evaluation is consistent with the analytically computed total flop count, indicating consistency across different Aspen tools.Second, the linear relaxation of tf (an integer) introduces some minor inefficiency, as the objective function is evaluated multiple times for equivalent values. Problem Size Planner. The second optimization tool is focused on the exploration of what problems are feasible to solve on a machine given a set of constraints.These constraints can consist of time, power, energy, and/or capacity limits.In addition to traditional runtime and allocation planning, searching this design space can help provide an application-specific perspective on the benefits of obtaining new hardware by comparing results across different machine models. To motivate this tool, we consider a model for a 3DFFT [1] and want to answer the question of what is the largest 3DFFT we can solve such that (i) the fftVolume data structure fits into the aggregate memory of the GPUs on the NSF Keeneland system [39]; (ii) it has an estimated runtime of less than ten seconds; (iii) it has an estimated total energy consumption of no more than five megajoules. Our optimization problem, then, is a maximization problem of dimensionality one where the single decision variable (and objective function) is n, the dimension of the 3DFFT volume.Furthermore, each of the three requirement statements above corresponds to a single inequality constraint.Figures 3, 4, and 5 show how the requirements for the 3DFFT scale with n. This energy calculation is based on a simple power model where the dynamic power requirement of the GPU is the manufacturer's stated thermal design point (250 W) when performing floating point operations or memory transfers, and the static/idle power is that measured using the NVIDIA system management interface (30 W).Transitions between states are assumed to be instantaneous and without cost.While simple, this model approximates the race-to-idle behavior.In future work, this model could be improved by measuring power draw for each resource using a synthetic benchmark (e.g., only flops, only loads/stores, and only MPI messages). Machine Planner. 
The third interface for formulating optimization problems with Aspen models is the Machine Planner.In contrast to the first two tools, the Machine Planner fixes the application model and focuses on identifying application-specific targets for machine capabilities.In other words, it explores what minimum level of performance the machine must attain to complete a workload within a set amount of time, energy, and/or other constraints.This scenario is typically a minimization problem over parameters in the abstract machine model.As an illustrative example, we consider a model for the CoMD molecular dynamics proxy application and the Keeneland AMM.Specifically, we want to find the minimum clock frequencies for a Fermi GPU's cores and memory that are required to complete a thousand iterations of CoMD's embedded atom method (EAM) force kernel for just over a million atoms (1048576) in one second.The parameter hierarchy in Listing 4 shows how the effective memory bandwidth is computed as a derived parameter from the clock rate that incorporates aspects of the GDDR5 architecture including the interface width and the measured overheads associated with using ECC.GDDR5's quad pumping, transferring a word on the rising and falling edge of two clocks, is accounted for within the gddr5Clock parameter, although this could be broken out into a separate parameter.Furthermore, eccPenalty accounts for overheads and sustained is based on measurements from the SHOC benchmark suite [40] that accounts for the difference between maximum sustained and peak bandwidth. Figure 6 shows the feasible range for both clocks and provides two insights.First, the EAM kernel is strongly memory-bound and is feasible at the lowest point in the core clock range.And second, the increased concentration of evaluation points toward the computed optimum (1e + 08, 1.27e + 09) shows NLopt converging on the solution. AMM Architect. The fourth tool is the AMM Architect which focuses on application-independent analyses.It primarily facilitates solving two types of problems-capacity planning under constraints and optimizing within a bounded projection for future performance targets (similar to the projections from the Echelon project [41] and DARPA Exascale Study [42]).These scenarios are typically maximization problems, where the objective function is some machine capability like peak flops, bandwidth, or capacity. As an example calculation, we consider a sample problem which maximizes the floating-point capability for a Keeneland-like architecture under the following constraints: (1) space and power budget of 42 U (one rack) and 18 KW, respectively, (2) minimum double precision FP capability of 50TF, (3) minimum aggregate FP capability to memory bandwidth ratio of 3 : 1 flops : bytes. After running the AMM Architect, we discover that this problem has no feasible solution.This problem was chosen to highlight one of the limitations of the optimizationbased approach: when there is no feasible solution for a multiconstraint problem, determining why the solution is not feasible or how "close" to feasibility the best point is requires nontrivial postprocessing.In practice, however, this can usually be overcome by iteratively relaxing the constraints. 
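Returning to the Machine Planner example, the derived-parameter chain from memory clock to sustained bandwidth can be sketched as follows. The bus width, ECC penalty, and sustained fraction used here are illustrative assumptions, not the calibrated Keeneland values, and quad pumping is folded into the clock argument as described above:

    def effective_bandwidth(gddr5_clock_hz,   # effective transfer rate, quad pumping included
                            bus_width_bits=384,
                            ecc_penalty=0.875,
                            sustained=0.80):
        # bytes/s, derated for ECC overhead and for the sustained/peak gap
        # measured with a benchmark such as SHOC.
        peak = gddr5_clock_hz * (bus_width_bits / 8.0)
        return peak * ecc_penalty * sustained

    # Sweep candidate memory clocks when looking for the slowest setting
    # that still meets a one-second runtime target.
    for clock in (1.0e9, 1.27e9, 1.85e9):
        print(clock, "->", effective_bandwidth(clock) / 1e9, "GB/s")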
Conclusions Most scientists that use performance modeling are seeking to understand systems or optimize specific configurations, rather than generating a single forward performance projection. Likewise, many of the performance modeling scenarios facilitated by Aspen are concerned with the exploration of a multidimensional design space. The addition of user-defined resources, parameter ranges, and AMM costs substantially increases Aspen's flexibility and helps facilitate more complex modeling workflows. The ability to specify static and dynamic energy costs is especially important for models that describe extreme-scale or energy-constrained environments. With these new costs, the vast array of potential optimization problems can be unwieldy. Aspen attempts to streamline problem formulation by constraining the interface to four specific scenarios. While these tools do not address all potential problems of interest (and we anticipate that expert users will modify these tools and generate their own scenarios), they do automate the process for common performance modeling tasks. Future Work. In the course of this work, we have identified two major challenges that require further study. First, complex models, especially those with high dimensionality, will require additional techniques to effectively visualize the design space. While some visualizations geared towards multidimensional data exist (e.g., parallel coordinates), visualizing ten or more dimensions is a common problem in scientific visualization. The current optimization tools write out a data file that contains each evaluation of the objective function, and the search space can be visualized a few dimensions per plot. Another challenge for generating optimization problems involves specifying weights for complex objective functions. Directly adding weights to Aspen parameter definitions proved cumbersome and failed to address objective functions with nonparameter, derived quantities. Instead, the current tools require explicit command-line options for these weights. Figure 1: Process diagram of the design exploration workflow. CLI indicates inputs specified via command line options to the four problem formulation tools. f(x) refers to the objective function and x is the vector of decision variables. Figure 2: A characterization of the total number of flops required for SSCP image formation by kernel. Each black tick indicates a single evaluation of the objective function by the nonlinear optimizer as it is executed. Figure 3: This chart shows the growth in the fftVolume data structure as a function of n relative to the memory capacity constraint (GPU physical memory on Keeneland). Figure 5: This chart shows the relationship between n and overall energy consumption, relative to the constraint of five megajoules. In addition to overall and per-kernel dynamic consumption, static costs for Keeneland are also shown. Figure 6: This figure shows the feasible core and memory clock ranges for computing one thousand iterations of the CoMD EAM force kernel on a Fermi GPU within one second. Green markers indicate evaluation points that satisfy the constraints and red markers indicate infeasible clock settings. Listing 3: Aspen core model with static and dynamic costs. Table 1: Comparison of Aspen scenarios for design space exploration.
7,051.8
2015-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
The X-Rotor Offshore Wind Turbine Concept The following paper provides an overview of a novel wind turbine concept known as the X-Rotor Offshore Wind Turbine. The X-Rotor is a new wind turbine concept that aims to reduce the cost of energy from offshore wind. Cost reductions are achieved through reduced capital costs and reduced maintenance costs. The following paper includes results from an early feasibility study completed on the concept. In the feasibility study exemplary designs were created and structural analyses were carried out. Turbine capital costs and maintenance cost of the X-Rotor concept were then roughly estimated. X-Rotor turbine costs and O&M costs were compared to four existing wind turbine types to investigate potential cost savings from the X-Rotor concept. Results show that the X-Rotor has potential to reduce O&M costs by up to 55% and capital costs by up to 32%. The combination of the capital cost and O&M cost savings show potential to reduce the CoE by up to 26%. Introduction Vertical axis wind turbine (VAWT) designs have as yet been unsuccessful as far as large scale commercial generation of electricity is concerned. The only commercial manufacturer of consequence, Flowind Inc., supplied more than 100 MW (over 500 wind turbines) of the Darrieus "egg-beater" type VAWT to Californian wind farms in the 1980's before becoming bankrupt in 1997. VAWT Ltd. in the UK (a subsidiary of Sir Robert McAlpine and Sons) built a number of prototypes up to 500kW rating of the H-rotor type. The 500kW VAWT 850 prototype ran for a period on a test site at Carmarthen Bay. High torque levels on the main transmission shaft caused a number of failures and the designs were never commercialised. Compared to state of the art horizontal axis wind turbine (HAWT) designs, there are two fundamental challenges for VAWT design. First, the aerodynamic efficiency is intrinsically lower and a VAWT must be 15% to 20% larger than a HAWT in order to produce the same power. Second, the optimum rotor speed is less than half that of comparable HAWTs and the drive train has more than double the rated torque for any given power rating, therefore, tends to be at least twice as heavy and expensive. In respect of these key issues, the Flowind design avoided high torque by running at higher than optimum speed but these designs produced about half the energy output of a similarly rated HAWT. In a VAWT with a V-rotor, the drive train components can be situated at ground level with associated maintenance and assembly benefits. The V-VAWT requires the least rotor material of all VAWT configurations for the required swept area and the low centre of thrust has advantages for offshore siting. Unfortunately, it suffers from extreme overturning moments on the main bearing. In a novel concept proposed by Sharpe [1], large transverse aerodynamic surfaces towards the ends of the blades are added to the rotor to cause some cancellation of the overturning loads. However, the large rotor and resulting intrinsically low rotational speeds drive up the weight and cost of the dive-train. The X-rotor concept is a radical rethink of the VAWT that directly addresses its disadvantages, see Figure 1. (These images are provided to give a rough idea of how the concept will look. They are not design drawings.) It is essentially a heavily modified V-VAWT, thereby, retaining some of the advantages of that concept. 
Similarly to the Sharpe concept, transverse blade elements are added to the main rotor, the upper half of the X, to reduce overturning moments on the main bearing, but in a simplified manner as the lower half of the X, keeping down the size and weight of the rotor. The rotor has relatively conventional blades angled both up and down from the ends of a relatively short, stiff cross-arm. Attached to the ends of the lower half of the X-rotor are two secondary horizontal axis rotors, see Figure 1. The role of the lower half of the X-rotor is, thus, to reduce the overturning moment and support the secondary rotors, but it more than pays for itself by also increasing the energy capture of the turbine. Figure 1. Design representation of the X-Rotor Concept The secondary rotors' primary role is to provide power take-off. Because the rotor speed is very low and the rated torque is very high, a fundamental issue for large scale VAWTs is in providing power take-off. Irrespective of the technology used, whether direct or indirect drive, conventional systems for power take-off from the main rotor shaft have a weight and cost which is directly proportional to torque rating. Being able to dispense with the drive-train and replace it by direct power take-off from the secondary rotors potentially has a substantial cost benefit. As described above, the X-Rotor concept is a hybrid of vertical and horizontal axis wind turbines. Unlike conventional vertical axis wind turbines, the purpose of the vertical axis rotor is not for power take-off. Rather, it is to considerably increase the wind speed impacting on the secondary horizontal axis rotors. The size of the secondary rotors is consequently much reduced and their rotor speed much increased, sufficiently to enable the power take-off to be by direct drive using a conventional generator, i.e. no gearbox or bespoke generators are needed. Ignoring generator losses, the efficiency of the secondary rotor power take-off is P/(T Ω R), where P is the aerodynamic power, T is the thrust on the rotor, Ω is the rotational speed of the X-rotor and R is the radius at which the secondary rotor is mounted, so that Ω R is the speed at which the secondary rotor is carried through the air. To keep this efficiency high, the secondary rotor is designed to have a lower optimal tip speed ratio, approximately 4, and a lower aerodynamic efficiency than normal. However, since the wind speed relative to the secondary rotor varies sinusoidally as the X-rotor rotates, the average efficiency over a rotation is about 5% higher than that with a constant wind speed. Consequently, the overall efficiency of the secondary rotor power take-off system approaches 90%. Since the optimal tip speed ratio for the X-rotor is also approximately 4, the overall tip speed ratio for the combined X-rotor and secondary rotors is approximately 16. Hence, the maximum tip speed for the secondary rotors is 160 m/s to 180 m/s. (Since the X-rotor concept is for offshore deployment, noise is not considered to be an issue.) Consequently, the characteristics of the horizontal axis secondary rotors are substantially different from those of existing horizontal axis wind turbines, being designed for low tip speed ratios and low aerodynamic efficiency. A further feature of the secondary rotor power take-off system is that, because the relative wind speed on the rotors is induced by the near constant rotation of the X-rotor, the torque acting on it varies much less than the torque acting on a conventional power take-off on a VAWT. The secondary rotors have the additional role of controlling the rotor speed about the vertical axis in below rated wind speeds.
This is achieved by varying the rotor speed of the secondary rotors to adjust their thrust. A fairly conventional below rated control strategy is proposed whereby in below rated wind speeds the rotor speed about the vertical axis is varied to maximise the aerodynamic efficiency of the X-rotor and in above rated wind speeds the rotor speed about the vertical axis is kept constant. The secondary rotors are designed to ensure that they are at optimal tip speed ratio when the X-rotor is at optimal tip-speed ratio. The variable speed operation of the secondary rotors is achieved by independently varying the frequency of the AC power generated at each rotor. The power electronics to provide this frequency variation are housed at the hub of the X-rotor. It is proposed to use a rotary transformer to transfer power from the rotating hub to the stationary support structure, i.e. an electrical machine for which the frequencies of the stator and rotors differ by the rotational speed of the hub so that their magnetic fields rotate at the same speed. The frequency of power delivered to the rotary transformer is varied in line with the rotational speed of the hub by the power electronics. [2] To assist with shut-down and to provide over-speed protection in the event of faults, it is proposed to pitch the blades on the upper half of the X-rotor. In below rated operation, the capability to pitch the blades could be exploited to increase the aerodynamic efficiency of the X-rotor by cyclical pitching. Doing so has the potential to increase the efficiency by 10%. In above rated operation, the pitching capability is exploited to control the rotational speed of the X-rotor about the vertical axis. Because of the simplicity of the power take-off systems in the X-rotor concept, its light weight and closeness to the surface of the sea, 20-25m, the O&M costs are expected to be much reduced compared to alternative wind turbine concepts. For example, almost all repair and maintenance could be undertaken without recourse to heavy lift vessels. Given that the total weight of the power take-off system is under 10 tonnes, it could be made replaceable so that maintenance and repair can be done onshore. Some of the potential advantages of the X-Rotor concept are summarised below. Not all are investigated in this paper, but are listed below for overview/introductory purposes: 1. Cost of energy reduction in comparison to similar wind energy technologies due to: − Lower capital costs (No gearbox, no requirement for bespoke multi-pole generator, no twist in blades) − Lower operation & maintenance costs (Greater reliability and the need for heavy lift vessels almost eliminated) 2. Floating platform potential due to: − Lower centre of gravity than current technologies − Lower centre of thrust and reduced overturning moments allowing for a smaller/cheaper floating platform 3. Up-scaling potential due to: − The ability to add additional secondary rotors − Up-scaling issues for horizontal axis wind turbines (HAWTs), such as size and mass of the drive-train at hub height, are avoided, and some, such as gravitational blade loads, are reduced. The remainder of this paper consists of 5 sections. Those sections include an overview of an exemplary X-Rotor concept design, a detailed structural analysis on the X-Rotor main structure, an O&M cost analysis, a short section on the CoE of the X-Rotor with some simple comparisons to existing offshore wind turbines and lastly a conclusion and further work section.
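Before turning to the exemplary configuration, the power take-off efficiency argument above can be illustrated numerically. The sketch below uses the secondary-rotor coefficients and combined area quoted in the next section; the incident wind speed W is an assumed value, and treating W as set purely by the X-rotor's own rotation (W = ΩR) is a simplification that neglects the free-stream wind:

    rho = 1.225            # air density, kg/m^3
    A2 = 138.8             # combined secondary rotor area, m^2
    Cp2, Ct2 = 0.27, 0.3375
    W = 58.9               # relative wind speed at the secondary rotors, m/s (assumed)

    P = 0.5 * rho * Cp2 * A2 * W ** 3     # aerodynamic power captured by the secondary rotors
    T = 0.5 * rho * Ct2 * A2 * W ** 2     # thrust opposing the X-rotor's rotation
    eta = P / (T * W)                     # take-off efficiency, P/(T*Omega*R) with W = Omega*R
    print(eta)                            # 0.80 = Cp2/Ct2; about 0.84 once the sinusoidal
                                          # variation of W over a rotation is included

Because the dependence on W cancels, the steady efficiency reduces to the ratio Cp2/Ct2 of the secondary rotor, which is why designing the secondary rotors for a favourable Cp/Ct rather than for maximum Cp keeps the take-off efficiency high.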
Exemplary X-Rotor Configuration In terms of primary and secondary rotor design, the optimised X-Rotor concept configuration is yet to be determined. However, the design of the turbine is subject to a number of constraints. Aerodynamic considerations impose a maximum value on the tip speed of the blades of the second rotors. In a system in which each secondary rotor directly drives a generator without the need for a gearbox, generator considerations impose a minimum value on the rotational speed of the second rotors. Taking these constraints into account, the mechanical power extracted by the first rotor is proportional to the fifth power of the tip speed of the blades of the second rotors, and inversely proportional to the cube of the tip speed ratio of the blades of the second rotors and to the square of the rotational speed of the second rotors. In an exemplary arrangement that meets all the requirements, the rotor comprises an upper part with two blades in the form of a V and a lower part with two blades in the form of an inverted V. A secondary rotor is attached to the tip of each lower blade. Each secondary rotor directly drives a generator with 4 pole pairs and a nominal frequency of 25 Hz. The primary rotor has maximum aerodynamic efficiency at a tip speed ratio of 4.65. The secondary rotors operate at a tip speed ratio of 3.13 with an aerodynamic power coefficient of 0.27 and an aerodynamic thrust coefficient of 0.3375. (The combined tip speed ratio, the product of the tip speed ratios for the first and second rotors, is 14.57.) The power extracted from the wind by the primary rotor, P1, is P1 = C_P,1 ρ A1 V³/2, where C_P,1 is the aerodynamic power coefficient of the primary rotor, ρ is the air density, A1 is the swept area of the primary rotor and V is the wind speed. When the rotational speed of the secondary rotors is 39.21 rad/s and the rated wind speed is 12.66 m/s, the tip speed of the secondary rotors is 184.3 m/s and the mechanical power extracted from the wind by the primary rotor is 5.99 MW. The secondary rotors deliver 5.02 MW of mechanical power to the generators in a 12.66 m/s wind speed, 84% of the power extracted from the wind by the primary rotor, increasing to 5.50 MW in a 20 m/s wind speed, 92% of the power delivered by the primary rotor. The secondary rotors have a combined area of 138.8 m². The primary rotor has a maximum aerodynamic power coefficient of 0.39 and an area of 12,351 m². Structural analysis of primary rotor blades for two-blade design The following sub-sections provide an overview of the structural analysis carried out on an exemplary two-blade X-Rotor concept, as shown in Figure 1. Primary rotor blade design The primary rotor blades consist of symmetric aerofoil profiles, which are strengthened by two spar caps that take most of the bending loads. The spar caps are connected by two parallel shear webs. The blade shell is also reinforced at the leading and trailing edges. In Figure 2 the layout of the blade internals is presented. The dimensions of the primary rotor blades can be seen in Section 3.2.1. According to the wind turbine design guidelines [3], [4], the wind turbine blade structure should be able to pass the following checks: • Ultimate strength • Buckling • Fatigue • Deflection The deflection check refers to the clearance between the tip of the blade and the tower for horizontal axis turbines. Due to the innovative configuration of the X-Rotor, this check is irrelevant here, but the deflection values should not be too high, since unfavourable aerodynamic damping might otherwise occur. The X-Rotor's loads are obtained based on the design configurations for the turbine's operational and extreme wind conditions.
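Returning briefly to the exemplary configuration above, its headline figures can be cross-checked from the quoted quantities. The sketch below assumes a standard air density of 1.225 kg/m³ and infers each secondary rotor's radius from the quoted combined area; both assumptions are ours and are not stated in the text.

```python
import math

# Quantities quoted in the exemplary configuration
Cp1 = 0.39            # primary rotor maximum power coefficient
A1 = 12351.0          # primary rotor area, m^2
V_rated = 12.66       # rated wind speed, m/s
omega2 = 39.21        # secondary rotor rotational speed, rad/s
A2_total = 138.8      # combined secondary rotor area, m^2

rho = 1.225           # assumed air density, kg/m^3 (not stated in the text)

# Primary rotor power: P1 = 0.5 * rho * Cp1 * A1 * V^3
P1 = 0.5 * rho * Cp1 * A1 * V_rated**3
print(f"primary rotor power at rated wind: {P1 / 1e6:.2f} MW")   # ~5.99 MW

# Secondary rotor tip speed, taking each rotor's radius from its share of the area
R2 = math.sqrt((A2_total / 2.0) / math.pi)
tip_speed2 = omega2 * R2
print(f"secondary rotor radius:    {R2:.2f} m")
print(f"secondary rotor tip speed: {tip_speed2:.1f} m/s")         # ~184 m/s
```

Both values come out at the quoted figures (5.99 MW and 184.3 m/s), which suggests the standard power equation with an air density close to 1.225 kg/m³ underlies the stated numbers.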
The operational and extreme loads are required for the turbine's power production adjustment, the fatigue analysis and the design of the blade section internals. The total power extracted from the wind at 12.5 m/s is 6.47 MW, but the electrical power is inevitably lower due to losses. Figure 3 depicts the correspondence between rotational speed and wind speed together with an adjusted power curve of the X-Rotor (with an assumed efficiency of 90%). Extreme loads Extreme loads correspond to the parked position of the turbine, where the 50-year extreme mean wind speed of 52 m/s is acting and a simultaneous loss of grid connection occurs (DLC 6.2 of IEC 61400-1 [3]). Thus, there is the possibility of the turbine being caught in a locked position with the wind blowing parallel to the rotor plane. To simulate this situation, Ansys CFX is used. The pre-dimensioning of the cross sections of the blades is performed by requiring that the stress in the material does not exceed the allowable value when the structure is subjected to the extreme loads. A partial safety factor of 1.35 is chosen for loads. For the short-term verification, the partial safety factor for the material is set to 2.2 [3], and to 1.85 for buckling [5]. The blade is divided into 17 elements having 18 sections, starting with NACA 0025 at the root and ending with NACA 0008 at the tip. Pre-dimensioning is performed for each section where the profile changes. The bending and edge moments are computed in each section from the Ansys CFX analysis results. Afterwards, for each section a finite element (FE) model of the blade section with a constant cross section is developed, using shell elements. According to [5], a structure whose load-bearing laminate consists of unidirectional carbon-fibre reinforcement layers may qualify, with regard to short-term and fatigue strength, for a simplified strain verification, provided a high laminate quality can be verified. The simplified strain verification states that the strain along the fibre directions shall remain below the following design values: tensile strain ε_t ≤ 0.24% and compressive strain |ε_c| ≤ 0.18%. Dimensions are adopted such that the above strain constraints are fulfilled together with the stress criteria. Therefore, there is no need for a detailed fatigue check under these circumstances and the rigidity condition may be fulfilled from the beginning. For all sections the position of the internals relative to the leading edge is: the leading edge reinforcement ends at 1% of the chord; the spar caps start at 15% and end at 47% of the chord; the shear webs are equally spaced inside the spar caps; the trailing edge reinforcement begins at 99% of the chord. The locations of the internals are determined by trial. The spar caps are positioned far from the symmetry axis so as to achieve maximum bending resistance, while keeping the mass as low as possible. Based on the above masses, the two-bladed system shown in Figure 1 has a primary rotor mass of 127,768 kg. This mass is in line with the masses of existing wind turbine rotors of similar rated power, which is relevant for the CoE comparison in Section 5. Fatigue evaluation The tensile and compressive strains under operational loads at the highest mean wind speed, namely 25 m/s, are lower than those under extreme loads. In such a case, according to existing standards, a detailed fatigue check is unnecessary.
Nevertheless, it is recognised that a detailed fatigue check is required and will be undertaken during the next phase of the X-Rotor concept's feasibility analysis, once a full aeroelastic model has been developed. Buckling verification DNV-GL recommends for the stability analysis that the partial safety factor for materials be applied to the mean values of the material stiffness in order to determine the design values of the component resistances. Based on those guidelines a safety factor of 1.85 is obtained [5]. For the upper and lower blades, FE models are constructed using beam elements. For each of the 17 elements on each blade, an average of the initial and ending sections is used for modelling. The extreme load is applied as pressures on each element. For the lower blade a concentrated mass is placed at the tip to simulate the nacelle. For the initial control, a buckling eigenvalue analysis, taking into account the first ten modes, is performed. The resulting buckling load factors of the upper and lower blades are computed to be 2.67 and 9.85, respectively. These values are high enough that the blades will not become unstable under the extreme loads and, as such, a nonlinear buckling analysis is not considered necessary. Modal analysis Blade modal analysis is performed in order to check the blades' natural frequencies against the Campbell plot. For each lower blade a concentrated mass without rotational inertia is added at the blade tip to simulate the nacelle of the secondary rotor. The smallest natural frequencies of the upper and lower blades are 0.59 and 0.52 Hz, respectively. Therefore, resonance of the upper and lower blades due to the harmonic rotational peaks of the rotor within the operational wind speeds is unlikely. O&M Cost Analysis The O&M costs of existing offshore wind farms make up approximately 30% of the levelised cost of energy for a given offshore wind farm [6]. The X-Rotor offshore wind turbine has the potential to reduce the O&M costs for a number of reasons. The primary reason is that the X-Rotor does not contain either of the two components that contribute most to offshore wind turbine downtime in existing wind turbines, namely the gearbox and/or a multi-pole generator. Secondly, the X-Rotor does not require a jack-up vessel for drive train failures, substantially reducing O&M vessel costs. The following subsections outline the methodology and results obtained from comparing the X-Rotor O&M costs to the O&M costs for existing offshore wind turbines. O&M Cost Modelling Method In order for X-Rotor O&M costs to be compared to existing offshore wind turbines, the O&M costs for the X-Rotor must first be calculated. A comparison to O&M costs for existing wind turbine types can then be carried out. To ensure a like-for-like comparison, a hypothetical site was used for each wind turbine type in the comparison. The hypothetical site is located 50 km from shore and has the same environmental (wind speed and sea state) conditions for each turbine type. To calculate the O&M costs for the X-Rotor, the same methodology and O&M cost model from [6] were used. The O&M model chosen for this analysis was the one reported in [6,7]. The model is a time-based simulation of the lifetime operations of an offshore wind farm. Failure behaviour is implemented using a Monte Carlo Markov Chain, and maintenance and repair operations are simulated based on available resources and site conditions.
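To convey the flavour of such a time-based simulation, a heavily simplified sketch is given below. It is not the model of [6,7]: the component list, failure rates, repair times and costs are illustrative placeholders rather than the field data used in this work, and weather windows, vessel logistics and technician constraints are ignored.

```python
import numpy as np

# Toy time-domain O&M cost simulation in the spirit described above (this is
# not the model of [6,7]).  Component failures are drawn as Poisson counts
# over the turbine lifetime; each failure incurs downtime and a repair cost.
# All rates, times and costs are illustrative placeholders, not field data.
COMPONENTS = {
    # name: (failures per turbine-year, repair time in hours, repair cost in EUR)
    "blades":         (0.2, 48.0, 20_000),
    "pitch system":   (0.5, 12.0,  5_000),
    "generator":      (0.3, 36.0, 30_000),
    "power take-off": (0.1, 24.0, 10_000),
}
LIFETIME_YEARS = 20
HOURS = LIFETIME_YEARS * 8760

def simulate(rng: np.random.Generator) -> tuple[float, float]:
    """One Monte Carlo realisation: (lifetime repair cost, availability)."""
    cost = downtime = 0.0
    for rate, repair_h, repair_cost in COMPONENTS.values():
        n_failures = rng.poisson(rate * LIFETIME_YEARS)
        cost += n_failures * repair_cost
        downtime += n_failures * repair_h
    return cost, 1.0 - downtime / HOURS

rng = np.random.default_rng(1)
runs = [simulate(rng) for _ in range(1000)]
mean_cost = np.mean([c for c, _ in runs])
mean_avail = np.mean([a for _, a in runs])
print(f"mean lifetime repair cost: {mean_cost:,.0f} EUR; availability: {mean_avail:.4f}")
```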
The O&M model determines accessibility, downtime, maintenance resource utilisation, and power production of the simulated wind farms. The outputs of the model are used to derive the O&M costs for this work. The model inputs were as follows. a) The site wind speed and sea state data were taken from references [6,9]. The data included wind speed, wave height and wave period data from the FINO platform in the North Sea. b, c, d, e) The wind turbine component failure rates, repair times, repair costs and "number of technicians required for repair" inputs came from field data. They were obtained from an empirical analysis of existing operational wind turbines as detailed in [6,9,10]. Inputs b, c, d and e were assumed to be the same for the wind turbine components that are common to the X-Rotor and existing wind turbines, for example the tower, blades, pitch system etc. Examples of X-Rotor O&M model inputs that differ from existing wind turbines are the gearbox failure rate, repair times and repair costs. Those inputs were removed, while the generator failure rate, repair times and repair costs were doubled (as a conservative estimate to represent the two lower rated generators). Additionally, the failure rate, repair times and repair costs of the X-Rotor power take-off were added and assumed to be the same as those of a wind turbine transformer. f) Vessel costs were assumed to be the same as in [6]. Once all model inputs were adjusted to represent the X-Rotor, the O&M cost model was populated and utilised to determine the X-Rotor O&M costs. The results were then compared to the O&M costs for 4 different existing wind turbine types at the same hypothetical wind farms. The four existing wind turbine types used for comparison are detailed in Figure 5. Results of the comparison are shown in subsection 4.2. Figure 6 shows the O&M cost comparison between the X-Rotor and the 4 existing wind turbine types described in the previous section. When compared to the average O&M cost of the existing wind turbine types, the X-Rotor has ~43% lower O&M costs. It can be seen that the X-Rotor has ~55% lower O&M costs than the worst performing existing wind turbine type (3 Stage DFIG) and ~25% lower O&M costs than the best performing existing wind turbine type (DD PMG). It is expected that the X-Rotor O&M costs will be further reduced once future work is carried out on the modelling methodology to capture additional X-Rotor inputs. At present, conservative assumptions were made for all uncertainties in the model inputs, for example around failure rates, repair times and repair costs. This conservative approach was adopted to ensure X-Rotor O&M cost benefits were not overestimated. CoE Comparison In order to complete an early stage CoE comparison between the X-Rotor and existing offshore wind turbines, inputs to the CoE equation for existing turbines were compared with the same inputs for the X-Rotor. The CoE inputs compared in this paper are O&M costs and turbine costs; all other costs are assumed to be the same for the purpose of this comparison. -Cost of energy savings from O&M As seen in section 4.2, the X-Rotor has lower O&M costs than each of the four existing wind turbine types: 55% lower O&M costs than the 3 Stage DFIG configuration, 49% lower than the 3 stage PMG, 44% lower than the 2 stage PMG and 25% lower than the DD PMG configuration.
Based on O&M costs making up 30% of the overall cost of energy [6], the previously mentioned O&M cost savings equate to X-Rotor CoE savings of 17% when compared to the 3 stage DFIG configuration, 15% when compared to the 3 stage PMG configuration, 13% when compared to the 2 stage PMG configuration and 7% when compared to the DD PMG configuration. On average across all 4 existing wind turbine types, the X-Rotor achieves a 13% CoE saving through O&M cost reductions. It should be noted that all O&M cost savings presented in this section are based on the assumption that the failure rates, failure costs, number of technicians required for repair and downtimes of components that are used in both the X-Rotor concept and existing turbines are the same. -Cost of energy savings from turbine costs Turbine capital costs and turbine component costs for the 4 existing turbines shown in Figure 5 are provided in [11]. For the X-Rotor turbine cost comparison it is assumed that, outside of the gearbox and generator, all other turbine costs are the same as for the existing turbines. This assumption is based on the fact that, outside of the gearbox and direct drive generator, the main difference between the X-Rotor and existing wind turbines is the X-Rotor's novel X-shaped rotor. As shown in Section 3, the mass of the X-Rotor has been determined to be in the same region as that of existing direct drive turbines. Based on this mass similarity, it has been assumed that costs will be similar. In reality, the assumption may be conservative because no twist is required in the X-Rotor blades, simplifying the production process. Assuming the cost difference is driven by the removal of the requirement for a gearbox and multi-pole generator, the capital cost savings of the X-Rotor compared to the cost of the 4 existing turbines range from 5% for the 3 stage DFIG to 32% for the DD PMG, with an average of 17% across all 4 drive trains. Based on [12], which indicates that turbine costs make up 30% of the overall cost of energy, the X-Rotor turbine cost saving compared to the average capital cost of existing turbines corresponds to a CoE saving of ~5%. -Total cost of energy savings When comparing the 4 existing turbines with the X-Rotor it was found that, among the existing turbine types, the one with the lowest capital cost has the highest O&M cost and vice versa. For example, the 3 stage DFIG configuration has the highest O&M cost but the lowest turbine cost. Consequently, to calculate the total CoE saving of the X-Rotor compared to the average of the 4 other drive train configurations, the addition of the average cost of energy saving from O&M and the average cost of energy saving from turbine costs does not provide a true indication of the overall CoE saving. Instead, each of the four drive train types must have its O&M cost saving and capital cost saving added together individually before an average can be taken. When the O&M and turbine cost savings are added for each of the four turbine types, the CoE saving of the X-Rotor compared to the other drive train types ranges from 22% to 26%, with an average of 24%. It should be noted that all CoE savings presented in this section are based on the assumption that, outside of the differences outlined earlier in Sections 4 and 5, all costs remain equal between the turbine types compared. Conclusion This paper describes a novel offshore wind turbine concept known as the X-Rotor. Possible design configurations are outlined and structural and O&M analyses are completed.
Four existing wind turbine types are compared to the X-Rotor in terms of O&M costs, turbine costs and overall CoE. This work found that the X-Rotor can provide: -O&M cost savings of up to 55% compared to existing wind turbines -turbine cost savings of up to 32% compared to existing wind turbines -CoE savings of up to 26% compared to existing wind turbines Further work in this area will focus on obtaining greater certainty in the modelling carried out to date and optimising the design of the X-Rotor concept in terms of dimensions, angles, number of blades and number of secondary rotors.
6,609.4
2019-10-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Beauty is more attractive: particle production and moduli trapping with higher dimensional interaction We study quantum effects on moduli dynamics arising from particle production near the enhanced symmetry point (ESP). We focus on non-renormalizable couplings between the moduli field and the field that becomes light at the ESP. Considering higher dimensional interaction, we find that particle production is significant in a large area, which is even larger than the area expected from a renormalizable interaction. This possibility can be anticipated from a simple adiabatic condition; however, the quantitative estimation of particle production and of the trapping of the moving field is far from trivial. In this paper we study particle production and trapping in detail, using both analytical and numerical calculations, to find a clear and intuitive result that supports trapping in a vast variety of theories. Our study shows that trapping driven by a non-renormalizable interaction is possible. This possibility has not been considered in previous works. Some phenomenological models of particle physics will be mentioned to complement the discussion. Introduction Supersymmetric (SUSY) models are among the most promising candidates for the theory beyond the standard model (SM). One of the characteristic features of SUSY models is that they have a number of light scalar fields called flat directions or moduli, which describe deformations of the effective system. Therefore, these models may have many quasi-degenerate vacua, which could be stable or metastable. Our Universe may arise from one of those vacua. Since the vacuum expectation value (VEV) of the moduli determines the low-energy effective theory, it is important to understand how these moduli find their vacuum when the moduli are not static. In the very early stage of the cosmological evolution, these fields might have a large kinetic energy compared to their potential energy. A decade ago, Kofman et al. suggested in their paper "Beauty is attractive" [1] that a vacuum could be chosen because it is attractive. Their critical observation is that since new degrees of freedom become light when the modulus passes through the enhanced symmetry point (ESP), these species could be produced by quantum effects and could alter the dynamics in such a way as to drive the moduli towards the ESP. They called this mechanism "trapping". Their basic argument was based on an effective theory that contains an interaction between two scalar fields of the form g²|φ|²χ². (1.1) The interaction may arise in a system of moving branes, in which a gauge symmetry of the effective action could be enhanced when the branes pass each other. On the other hand, we know that in some cases moduli in the effective action may not be coupled to light fields through renormalizable couplings, in the sense that they may have interactions suppressed by the cutoff scale. A typical example could be the neutrino mass in the see-saw mechanism, which couples to the Higgs scalar field through higher dimensional interaction in the low energy effective theory. Note that in this paper we are considering a more general situation, which has not been considered in ref. [1]. We are extending the original scenario to include higher dimensional interaction, to show that such interaction is indeed responsible for particle production and trapping.
We consider an interaction of the form (g²/Λ^(2n−2))|φ|^(2n)χ², which is sometimes called a "higher dimensional" interaction, since the mass dimension of |φ|^(2n)χ² is higher than four. One might think that higher dimensional interaction is not important because it is suppressed by the cutoff scale; however, a simple adiabatic condition (|ω̇_k/ω_k²| > 1, where ω_k will be defined later in this paper) suggests that particle production is possible in a wider area than in conventional trapping. Although the qualitative argument based on the adiabatic condition suggests that higher dimensional interaction could be important, it is not obvious whether such interaction is responsible for trapping. Obviously, the quantitative estimation of quantum particle production is far from trivial. In this paper, we study the production of the particles when higher dimensional interaction is responsible for the mechanism. The amount of particle production will be calculated by extending the method developed in ref. [5]. Our conclusion is that particle production via higher dimensional interaction is significant in a wide area around the ESP. Our result is consistent with the naive estimation based on the adiabatic condition. Moreover, using the numerical calculation, we demonstrate that trapping is indeed possible via higher dimensional interaction. Interestingly, as is expected from the adiabatic condition, we confirmed that the area of trapping becomes wider than in conventional trapping. Our conclusion is that the mechanism can play an important role in selecting the vacuum in the early Universe. Note that even if one is considering the conventional ESP, one might have to consider higher dimensional interaction, since in reality there could be multiple fields that are responsible for the symmetry breaking. In that case quantum particle production must be calculated when the original symmetry is already broken by the other fields (but higher dimensional interaction could remain). This possibility has never been considered before. We found that trapping is indeed possible for such ESPs, where the gauge symmetry cannot be recovered but higher dimensional interaction may arise. 2 Basic mechanism of quantum particle production and trapping at an ESP First we review the basics of particle production and trapping of ref. [1]. Consider the Lagrangian of eq. (2.1), which consists of two scalar fields φ (complex) and χ (real) coupled through a renormalizable interaction of the form g²|φ|²χ². Here, φ is a field in classical motion, while χ is the field that is produced by the quantum effect. For simplicity, we assume that φ is homogeneous in space, so that we can write φ = φ(t). When χ ≃ 0 and the back-reaction is negligible, the equation of motion gives the straight-line trajectory φ(t) = vt + iµ, (2.2) where v is the velocity of φ and µ is called the impact parameter. This solution is valid as far as the adiabatic condition (|ω̇_k/ω_k²| < 1) is satisfied for ω_k. Here, we defined ω_k² = k² + g²|φ(t)|², where k is the momentum of χ. For a low momentum mode (k ∼ 0) and a small impact parameter (µ² ≪ v), the adiabatic condition leads to |φ| ≳ (v/g)^(1/2). Consider the trajectory shown in figure 1, in which φ comes into the "non-adiabatic" area |φ| ≲ (v/g)^(1/2), where the adiabatic condition is violated and the quantum back-reaction is not negligible. The observation in ref. [1] is that in that area the mass of χ becomes light and the kinetic energy of φ can be translated into χ-particle production. The number density n_χ of the field χ has been evaluated as n_χ = (gv)^(3/2)/(2π)³ · exp(−πgµ²/v), (2.5) and this result shows that particle production is indeed significant in the area |φ| ≲ (v/g)^(1/2).
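The widening of the non-adiabatic region with n, anticipated above, can be illustrated with a small numerical estimate. The sketch below is ours, not the paper's calculation: it assumes an effective mass of the form m_χ = g|φ|ⁿ/Λ^(n−1), negligible momentum and impact parameter, and the parameter values used later in the numerical examples; it gives only the naive estimate, not the detailed calculation of the following sections.

```python
# Illustrative estimate of the half-width of the non-adiabatic region for the
# coupling (g^2 / Lambda^(2n-2)) |phi|^(2n) chi^2.  Assumptions (ours): the chi
# effective mass is m_chi = g |phi|^n / Lambda^(n-1), the impact parameter and
# momentum are negligible (mu ~ 0, k ~ 0), and phi ~ v t.
g, Lam = 1.0, 1.0
v = 0.1 * Lam**2          # velocity used in the paper's numerical examples

def nonadiabatic_halfwidth(n: int) -> float:
    """|phi| below which |d(omega)/dt| / omega^2 > 1 for k ~ 0."""
    # With omega ~ m_chi = g phi^n / Lam^(n-1), the condition reduces to
    # phi^(n+1) < n v Lam^(n-1) / g.
    return (n * v * Lam**(n - 1) / g) ** (1.0 / (n + 1))

for n in (1, 2, 3):
    print(f"n = {n}: |phi| <~ {nonadiabatic_halfwidth(n):.2f} Lambda")
```

With g = 1 and v = 0.1Λ², the estimated half-width grows from about 0.32Λ for n = 1 to about 0.74Λ for n = 3, in line with the claim that the production region widens for larger n.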
The result is consistent with the naive estimation from the adiabatic condition. After the quantum excitation of χ, φ moves away from the ESP. However, the interaction causes back-reaction, since the motion increases the mass of χ particle (m χ = g|φ|). As a result, the energy density of χ increases as φ moves away from the ESP. One will find ρ χ ∼ m χ n χ = g|φ|n χ . The energy density ρ χ changes the potential of φ. The induced potential is a linear potential measuring the distance from the ESP. See also figure 2. After that, φ goes up the potential until the initial kinetic energy is comparable to the potential energy, where φ makes a turn and goes back to the ESP again. Then, there will be another production of χ, giving additional back-reaction. Finally, φ is "trapped" around the ESP. Numerical calculation of the trajectory is shown in figure 3. JHEP01(2014)141 3 Analytical calculation with higher dimensional interaction In the previous section we reviewed quantum particle production when φ passes through the non-adiabatic area around the ESP. Trapping has already been confirmed [1] for a renormalizable interaction given in eq. (2.1). In this section, we consider higher dimensional interaction instead of renormalizable one. We start with the Lagrangian: where n = 1 reproduces a renormalizable interaction. One can check the consistency of our calculation for n = 1. First consider the non-adiabatic condition. Fromω k /ω 2 k > ∼ 1, we obtain whereφ = v and |φ + φ * | ∼ 2|φ| are considered. The above (naive) condition shows that the area may become even larger for larger n. A similar speculation could arise from the effective mass near the ESP, which is given by One can see that m χ is lighter for larger n. Since the kinematically allowed mass (m χ ) is bounded from above by the initial kinetic energy of φ, we can expect that the area of particle production could be wider for larger n. On the other hand, the interaction seems to be weaker for larger n. Therefore, there could be a tension between these effects, which cannot be solved without looking into more details of the mechanism. Equations and trajectory without particle production Before solving eq. (3.1), we need to include δV for the normalization: where δV is defined by and ω k is a frequency of the quantum field χ defined by δV is needed to subtract a divergence in the Coleman-Weinberg potential. Note that we are considering the effective action that has an explicit cutoff scale. Next, we define the wave function u k for χ. Then the creation/annihilation operators a k , a † k are defined as where a k , a † k satisfy the commutation relations (3.8) and u k satisfies the norm conditionu * We obtain the equations of motion: Here, we assumed that φ(t) is homogeneous in space. Since ω k is the homogeneous function of φ(t), the wave function u k = u k = u −k is also homogeneous in space. Let us examine the initial conditions. For φ(t), we can define initial quantities φ 0 andφ 0 at t = −∞. Using the WKB approximation for u k (t), we consider the initial wave function This WKB solution is valid as far as the adiabatic condition is satisfied: Substituting (3.12) into (3.10), we obtain the trivial solution which reproduces the classical motion (2.2). The above solution is valid when quantum production and the back-reaction are negligible. Quantum particle production and the back-reaction As φ approaches toward the ESP, the adiabatic condition (3.13) will be violated. In that case the WKB approximation (3.12) is no longer valid. 
We consider [5,6] The equation of motion (3.11) giveṡ Table 1. where the initial conditions are α k (−∞) = 1 and β k (−∞) = 0. We can evaluate the number density of the light field from β k (∞), using the relation Considering φ(t) = vt + iµ as the 0-th order approximation, one can find the exact solution when n = 1. (See the review in section 2.) The situation changes when n ≥ 2, since we do not have the obvious solution that can be used for n χ . Therefore, we need another calculational method that is valid for higher dimensional interaction. An obvious way is to find n χ using the numerical calculation, which will be demonstrated later in this paper. The numerical calculation supports our analytical calculation. Because of the complexity of the method, the details of the analytical calculation will be described in appendix A. We obtained where C . (3.20) Considering n = 1, one can easily check that eq. (3.19) is consistent with eq. (2.5). Note that the leading order (the first term in eq. (3.19)) is independent of the impact parameter µ. (Note however that the condition (3.20) is crucial.) Moreover, if we consider a large-n limit, the area of particle production becomes larger. Significantly, the area of quantum particle production can become larger than the conventional scenario, although the number density (n χ ) has a suppression factor v gΛ 2 The above result is consistent with the naive argument based on the adiabatic condition. Of course, we need to examine the back-reaction from particle production in more detail; otherwise trapping is not obvious. The back-reaction is caused by the effective potential generated by the energy density ρ χ = m χ n χ : Note that the Hamiltonian gives JHEP01(2014)141 where H is the VEV of the Hamiltonian H. Here, we can see that trapping force is weak near the ESP, but it becomes stronger when φ is away from the ESP. Except for a low-velocity limit (v ≪ gΛ 2 ), trapping force is significant when φ approaches φ ∼ Λ. In the next section, we will examine the trapping mechanism using the numerical calculation. Numerical results In this section we present our numerical calculation and compare it with the analytical estimation, so that one can easily examine the consistency between them. Then, we can examine the trapping mechanism using the trajectory φ(t). Our numerical results are obtained by solving the equations of motion (3.10) and (3.11). The initial condition is defined using the WKB approximation, since φ is initially away from the non-adiabatic area. In this paper we consider φ(t 0 ) = 5Λ+iµ, where the initial velocitẏ is obtained by taking the differential of eq. (3.12), which giveṡ Number density after quantum particle production The number density after quantum particle production is obtained from u k andu k [7]: We show our results for v = −0.1Λ 2 and v = −0.01Λ 2 in figure 4. Considering the initial energy density (for v 2 = 0.01Λ 4 and 10 −4 Λ 4 ), we can examine the ratio of the induced potential energy at |φ| = Λ to the initial kinetic energy. Here, the vertical axis in figure 4 gives the potential energy at |φ| = Λ. From the figure, we can see that the ratio is roughly 1/100 after the first event of quantum particle production. The figure shows that our analytical estimation is in good agreement with the numerical calculation. We can also confirm that the area for quantum particle production is expanded for larger n. Time evolution and dynamics In section 3.2, the number density is calculated just for the first impact. 
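A minimal single-mode version of this kind of numerical calculation is sketched below. It is not the production code used for the figures: it assumes an effective mass of the form m_χ = g|φ|ⁿ/Λ^(n−1), keeps the trajectory fixed at φ(t) = vt + iµ (i.e. neglects back-reaction), integrates the mode equation ü_k + ω_k²(t)u_k = 0 from WKB initial data, and reads the occupation number from the usual Bogoliubov combination n_k = (|u̇_k|² + ω_k²|u_k|²)/(2ω_k) − 1/2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-mode sketch of the numerical procedure referred to above (ours, not
# the authors' code).  Units: Lambda = 1.  Back-reaction on phi is ignored.
g, Lam, v, mu, n, k = 1.0, 1.0, 0.1, 0.1, 2, 0.0

def omega(t: float) -> float:
    phi_abs = np.hypot(v * t, mu)               # |phi(t)| for phi = v t + i mu
    m_chi = g * phi_abs**n / Lam**(n - 1)       # assumed effective mass
    return np.sqrt(k**2 + m_chi**2)

def rhs(t, y):
    # y = [Re u, Im u, Re u', Im u']; mode equation u'' + omega(t)^2 u = 0
    w2 = omega(t) ** 2
    return [y[2], y[3], -w2 * y[0], -w2 * y[1]]

t0, t1 = -50.0, 50.0                            # |phi| starts at about 5 Lambda
w0 = omega(t0)
u0 = 1.0 / np.sqrt(2.0 * w0)                    # WKB (positive-frequency) initial data
y0 = [u0, 0.0, 0.0, -w0 * u0]                   # u(t0) real, u'(t0) = -i omega u(t0)

sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-6, atol=1e-9)

u = sol.y[0, -1] + 1j * sol.y[1, -1]
du = sol.y[2, -1] + 1j * sol.y[3, -1]
w1 = omega(t1)
n_k = (abs(du) ** 2 + w1**2 * abs(u) ** 2) / (2.0 * w1) - 0.5
print(f"occupation number n_k for k = {k}: {n_k:.4f}")
```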
In reality, φ is pulled back toward the ESP, and the production occurs many times. Since the analytical calculation is not suitable for that purpose, we solved eq. (3.10) and (3.11) numerically and examined the dynamics of trapping. We calculate the time evolution of |φ| and n χ for g = 1, v = 0.1Λ 2 , µ = 0.1Λ, and n = 1, 2, 3. The results are shown in figure 5, in which we can see that trapping is quick for n = 3. Although higher dimensional interaction becomes significant (i.e, the effective potential becomes steep) for |φ| > Λ, in reality the effective theory could not be valid for |φ| > Λ. On the left-hand side, the vertical axis is |φ|/Λ, while on the right-hand side, the vertical axis is n χ Λ 3 . The time evolution toward trapping is shown on the left-hand side. The produced number density n χ /Λ 3 for g = 1, v = 0.1Λ 2 , and µ = 0.1Λ is shown on the right-hand side. We can see that trapping is quick for n = 3, if it is compared with conventional trapping (n = 1). For n = 1, the typical time scale of trapping is beyond the scale of the figure. JHEP01(2014)141 In that case, the back-reaction has to be explained using the original action (or by another effective action which has another cutoff). In any case the calculation becomes highly model dependent and is not suitable for our purpose. Here, we have considered a modest JHEP01(2014)141 assumption that higher dimensional interaction is valid for |φ| < 10Λ. Alternatively, one can avoid the problem by increasing the total number of χ species to O(100). In that case, all the trapping processes are done within |φ| < Λ. As will be seen in the next section, the ESP in a realistic model of anomalous U(1) GUT can have the number of χ species more than O(100). Our numerical calculation shows that trapping within |φ| < Λ is possible after several oscillations. This is much less than O(100), which is naively expected from the first event of particle production. Efficient particle production after the first production is due to the statistical property of the bosonic field. For instance, one can estimate that the production rate of the bosonic field is proportional to (1 + f B ), while for the fermionic field the rate is proportional to (1 − f F ). Here, f B and f F are the phase space densities for the bosonic field and the fermionic field, respectively. Our numerical calculation shows that f B grows as f B = 2. , which is enough to explain the enhancement of particle production. Application to anomalous U(1) GUT In this section we consider the possibility of particle production and trapping in a phenomenologically viable model, which is called anomalous U(1) grand unified theory (GUT) [8][9][10][11][12][13][14][15][16][17]. The benefit of this scenario is that one can solve the doublet-triplet splitting problem as well as obtaining realistic quark (lepton) masses and their mixings under the natural assumption that all the interactions allowed by the symmetry have O(1) coefficients. However, since the theory will have an indefinite number of higher dimensional interactions, there could be an uncountable number of vacua in the low-energy effective action in general [9,14]. In this section we will discuss the selection of the vacuum in the light of trapping. The essential point is that the physically viable vacuum is near the origin, where all fields have vanishing VEVs and almost all fields become massless, while the other unphysical vacua are far away from the origin. 
Toy model First, we consider a toy model in which we have the anomalous U(1) gauge symmetry and the Fayet-Iliopoulos (FI) term. The superfields Z i (i = 1, · · · , n) have the anomalous U(1) charges z i , which are integer. The superpotential W (Z i ) contains all the possible interactions including higher dimensional interaction. We assume that coefficients of these interactions are always O(1). Here we suppose that they do not have explicit mass terms at the tree level. 4 The supersymmetric vacuum is determined by the F and D term conditions: 2) F and D term conditions will give the minimum of the potential at V 0 = 0. The number of the independent F-term conditions is n − 1, since the gauge invariance of the superpotential gives an additional relation ∂W ∂Z i δZ i = 0, where δZ i = iz i Z i . Then, n − 1 of the F -term conditions (complex) and one D-term condition (real) will determine the vacuum of n complex Z i , except for one degree of gauge transformation (real). Since the number of the interactions is indefinitely large, there could be a huge amount of SUSY vacua that can satisfy Z i = O(Λ). However, if the number of the fields with a positive charge (Z + i , i = 1, 2, · · · , n + ) is smaller than the number of the fields with a negative charge (Z − i , i = 1, 2, · · · , n − ; n + < n − ), there could be a vacuum with Z + i = 0 for all Z + i . In that case, since all the F -term conditions of Z − j become trivial, the F -term conditions of Z + i and the D-term condition will determine the VEVs of Z − j . In that case the VEVs of Z − j are much smaller than Λ, since we have the relation ξ 2 − |z i ||Z i | 2 = 0 and ξ 2 ≪ Λ 2 . This solution meets the requirement for the physically interesting vacuum. Especially when n + = n − − 1, the VEVs can be determined by their charges as where λ ≡ ξ/Λ. This vacuum is quite close to the origin where all fields have vanishing VEVs because Z i ≪ Λ. The above solution is interesting, in the sense that the coefficients of the interactions in the effective theory are determined by their U(1) charges. For example, consider a mass term that is generated from λ x+y ΛXY , where x and y are the anomalous U(1) charges of X and Y , respectively. Here x + y ≥ 0 is assumed. Note that the mass in the above scenario is generated from the VEVs of the negatively charged fields Z − j . When x + y < 0, such mass term is forbidden, because Z + j = 0. This is called the SUSY zero mechanism. In the same way, even for higher dimensional interactions the effective coefficients can be determined by their charges. This feature plays an important role in obtaining the realistic quark and lepton masses and mixings and in solving the doublet-triplet splitting problem in the realistic models. Note that in this model the effective mass of the low-energy effective action is induced by higher dimensional interaction, which explains the hierarchy of the mass scales. Obviously, the most beautiful and therefore the most attractive point would be the origin, where the VEVs of all (moduli/Higgs) fields Z i vanish and a lot of χ species Z i become massless. We call this special point the most enhanced symmetry point (MESP). Note that Z i fields behave not only as the moduli fields but also as the χ fields in this model. together at the MESP at the same time, they can be trapped at the MESP. 5 Since it seems unlikely to happen, a more plausible process for trapping is that the fields are trapped by a step-by-step process. 
We have a lot of attractive hypersurface where one or two χ fields become massless. Once moduli fields pass through the attractive hypersurface, the χ particles are produced. Because of the produced effective potential, the moduli fields can be trapped on the hypersurface. In that way, all moduli fields may be trapped one-by-one. Note that the supersymmetry is broken at the MESP, where V 0 = g 2 A 2 ξ 4 . After trapping, as the number densities of the particles are diluted by the expansion of the Universe, these moduli fields must move toward the SUSY vacuum of eq. (5.3), which is the nearest SUSY vacuum from the MESP. The other unphysical vacua with Z i ∼ O(Λ) are far from the MESP. As a result, the vacuum in eq. (5.3), which is important to obtain physically viable GUT, is selected by the trapping mechanism. Of course, it is not clear whether such trapping process truly happens or not. Even if trapping at the MESP does not happen, the vacuum of eq. (5.3) has much more advantage than the other unphysical vacua with Z i ∼ O(Λ) because of the effective potential induced by the produced χ particles. Realistic models There are several realistic SUSY GUT models with anomalous U(1) gauge symmetry. For example, in SO(10) models, we introduce three 16 and one 10 in the matter sector and two 45, two pairs of 16 and 16, two 10, and several singlets in the Higgs sector, and one 45 for the vector multiplets [8,10]. Therefore, more than 1000 real species will be massless at the MESP, including their superpartners. In the simplest E 6 model, we introduce three 27 in the matter sector, two 78, two pairs of 27 and 27, and several singlets in the Higgs sector, and one 78 for vector multiplets [13]. There are some E 6 models with an additional pair of 27 and 27 [9,11]. They have nearly 2000 massless fields at the MESP. Therefore, there are plenty of χ fields that may cause trapping. Since in these models the mass terms in the lowenergy effective theory are generated by higher dimensional interaction generically, we are expecting that higher dimensional interaction is really responsible for trapping and vacuum selection. As we mentioned in the previous subsection, it seems unlikely that all moduli fields meet together accidentally at the MESP even once. We are expecting that fields will be trapped one after another and finally all moduli fields are trapped around the MESP. One might be anxious about the decay of χ fields [18,19], which has not been mentioned in our analysis. Consider the situation in which several moduli fields are already trapped on certain attractive hypersurface. If those moduli fields are responsible for the mass of some χ fields (χ l ), basically χ l are light. When another moduli field comes across other attractive hypersurface, the χ h particles produced by the moduli field can become heavier than χ l at least when the moduli field moves away from the attractive hypersurface. In that case the heavy field χ h could decay into χ l , if the lifetime of χ h is shorter than the time scale of the oscillation. If χ h decays into χ l , the number density of χ h becomes smaller than that without decay, and therefore, attractive force to the attractive hypersurface is JHEP01(2014)141 weakened. Namely, χ h fields with longer lifetime produce more attractive force to the attractive hypersurface. 
It is reasonable to expect that the χ particles with masses from higher dimensional interaction have a longer lifetime, because they are lighter and expected to have higher dimensional interaction even for the decay. Therefore, such χ particles may have some advantages for trapping in this situation. Let us consider another situation in which two moduli fields A and B oscillate around different hypersurfaces. When the modulus A (B) passes through each attractive hypersurface, the particles χ A (χ B ) are produced. The masses of these χ particles are dependent on time because the distance from the hypersurface changes. When χ A becomes heavier than χ B , χ A can decay into χ B , and when χ B becomes heavier than χ A , χ B can decay to χ A . Therefore, when the moduli field passes through the attractive hypersurfaces, the χ particles are produced not only by breaking the adiabatic conditions but also by decaying from other heavy particles. The situation becomes highly complex. Again, the χ particles with longer lifetime have an advantage for trapping. It is unclear whether the vacuum is trapped at the MESP in the last or not, but in this subsection, we assume that the vacuum is trapped at least near the MESP. The true trapped point may not be the MESP if the masses of fields χ which dominate the energy density are from higher dimensional interaction with n ≥ 3. This is because their contributions to the effective potential cannot dominate V 0 around the MESP. This may be important to apply the GUT models because monopoles do not appear in this situation. (Note that the gauge symmetry is not recovered by trapping.) Note also that in the above scenario SUSY is broken at the MESP, and therefore, it is not the true vacuum. Moreover, because of the SUSY breaking, the scalar component fields have masses which are proportional to the FI parameter ξ 2 and the anomalous U(1) charges. The negatively charged fields like GUT Higgs fields have negative mass square, and therefore they are unstable without the support by the produced χ particles. Therefore, the GUT Higgs fields trapped around the MESP are supposed to move away from the MESP to find the true vacuum after dilution of the produced χ fields. Basically, the vacuum which satisfies the relation (5.3) is selected in the last because the other vacua with the VEVs O(Λ) are far from the MESP as discussed in the previous subsection. 6 However, in the realistic GUT models, we have several vacua which satisfy the relations (5.3). One of these vacua gives the realistic low-energy effective action. The situation is very similar to the minimal SUSY SU(5) GUT, which has three supersymmetric vacua with the gauge symmetries SU(5), SU(4) × U(1), and SU(3) × SU(2) × U(1). We have not understood why SU(3) × SU(2) × U(1) is selected. Usually, it is expected to happen accidentally. However, if we study the trapping process and the moving process after trapping in detail, this important question may be answered. Finally, we would like to mention the possibility of inflation after trapping. Our optimistic expectation is that inflation and vacuum selection are both explained by particle production. Here the MESP is a local maximum in the effective potential, whose height JHEP01(2014)141 is ξ 4 . ξ is just below the usual GUT scale (Λ GG ∼ 2 × 10 16 ), because of the relation ξ = λΛ ∼ O(1) × Λ GG [10,12]. More specifically, we can take λ ∼ 0.22 that gives ξ ∼ 5 × 10 15 GeV. 
The potential V 0 = (ξ 2 − Z) 2 is too steep to satisfy the slow roll condition for sufficient N value in usual way. There are many inflationary models in which trapping plays a crucial role [20][21][22][23][24] and also the many-field model may accommodate slowroll inflation on the steep potential [3,25], but it is unclear if the GUT theory could be responsible for inflation [26]. We expect that inflaton becomes sufficiently slow because of particle production. Since the origin is tremendously attractive (i.e., quite many massless fields appear at the origin) in the anomalous U(1) GUT models, the dissipation effect due to particle production cannot be negligible. We will study this subject in future. Summary In this paper, we have studied quantum particle production and trapping through higher dimensional interaction. Quantum particle production is possible when the moduli field φ passes through the non-adiabatic area around the ESP. We have extended the model [1] and considered quantum particle production in which the mass of the particles comes from higher dimensional interaction. We found that particle production is possible in the nonadiabatic area, which could be larger than the conventional scenario. We have confirmed quantum particle production and trapping for higher dimensional interaction. Then we have considered the possibility of vacuum selection via particle production in realistic GUT models, which are called anomalous U(1) GUT. In such models, the most serious problem called the doublet-triplet splitting can be solved as well as realistic quark and lepton masses and mixings are obtained under the reasonable assumption that all interactions including higher dimensional ones allowed by the symmetry are introduced with O(1) coefficients. However, they have infinite unexpected vacua in general because of the natural assumption. This issue can be solved by considering vacuum selection by quantum particle production, because the most attractive point is near the physically viable vacuum in these models. It is important to understand why vacuum selection is possible among various vacua in SUSY models. Our expectation is that the dynamical history of the Universe, including quantum creation of the particles and trapping, is important for that purpose. Hopefully, our investigation is helpful in understanding the dynamics. A Calculation of the number density In this section we show the analytical estimation of the number density n χ . The calculation is partly tracing the Chung's method in ref. [5]. We start with the WKB-type solution of the wave function for χ: which gives the solution of eq. (3.11). Here ω k is the frequency defined in eq. (3.6). The equation of motion for u k (t) giveṡ What we need for the calculation is β k , since the occupation number can be evaluated as n k = |β k | 2 . We consider the initial condition α k (−∞) = 1, β k (−∞) = 0, which means that eq. (3.15) is equivalent to eq. (3.12) at t = −∞. Therefore the zeroth order of the Bogoliubov coefficients can be taken as α to the leading order. Using the zeroth order solution (2.2), we find the frequency (A. 6) In order to evaluate the integral (A.4), we consider the steepest descent method. On the complex t-plane, there are 2n points where ω k (t) = 0 is satisfied. These are poles of an integrand in eq. (A.4). If we take a path of the integration to go around the lower half, we need to take into account the poles with the negative imaginary, whose number is n. 
For the integer m satisfying 0 ≤ m ≤ n − 1, we find the poles with negative imaginary part, t̃_m. The exponential of (A.4) can be expanded around t̃_m, and C_m denotes the path which approaches the pole at t = t̃_m along the steepest descent path, goes around the pole and then leaves it (see figure 6). If we take the angle going around the pole to be 4π/3, the outward-going path becomes the steepest descent path. Then, we can obtain (A.13). Using the above method, the occupation number can be calculated. As mentioned above, the first term gives eq. (A.22); we find that the second term gives the subleading contribution, which depends on the impact parameter µ. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
7,821.8
2014-01-01T00:00:00.000
[ "Physics" ]
Non-Unique Fixed Point Results in Extended B-Metric Space † In this paper, we investigate the existence of fixed points that are not necessarily unique in the setting of extended b-metric space. We state some examples to illustrate our results. Introduction and Preliminaries Metric fixed point theory was initiated by the elegant results of Banach, the contraction mapping principle, and all researchers in this area agree on this.He formulated that every contraction in a complete metric space possesses a unique fixed point.Researchers have generalized this result by refining the contraction condition and/or by changing the metric space with a refined abstract space.One interesting generalization of metric space is b-metric space, formulated recently by Czerwik [1].Following this result on b-metric space, several authors have reported a number of fixed point results in the framework of b-metric space (see, e.g., [2][3][4][5][6][7][8][9] and related references therein). Definition 1 (Czerwik [1]).For a non-empty set X, a function m b : X × X → R + 0 is said to be b-metric if it satisfies the following conditions: In addition, the pair (X, m b ) is called a b-metric space, in short, bMS. One of the standard examples of b-metric is the following: Example 1.Let X = R be the set of real numbers and m b (x, y) = (x − y) 2 .Then m b is a b-metric on R with s = 2.It is clear that m b is not a metric on R. Remark 1.It is worth mentioning that, for s = 1, the b-metric becomes a usual metric. Recently, Kamran et al. [10] introduced a new type of generalized b-metric space.Furthermore, they observed the analog of a Banach contraction mapping principle in the framework of this new space.Definition 2. [10] Let θ : X × X → [1, ∞) be a mapping.For a non-empty set X, a function d θ : X × X → [0, ∞) is said to be an extended b-metric if it satisfies the following state of affairs for all ξ, η, ζ ∈ X.In addition, the pair (X, d θ ) is called an extended b-metric space, in short extended-bMS. Obviously, (d θ 1) and (d θ 2) hold.For (d θ 3), we have In conclusion, for any ξ, η, ζ ∈ X, the third axiom is satisfied.Accordingly, (X, d θ ) is an extended b-metric space.Notice also that the standard triangle inequality is not satisfied for the following case Hence, (X, d) does not form a standard metric space. In an extended-bMS, it is possible to obtain an analogy of basic topological notions, such as convergence, Cauchy sequences, and completeness.For more details, see, e.g., [10].Definition 3. [10] Let (X, d θ ) be an extended-bMS. (i) We say that a sequence ξ n in X converges to ξ ∈ X, if for every > 0 there exists N = N( ) ∈ N such that d θ (ξ n , ξ) < , for all n ≥ N, and it is denoted as lim n→∞ ξ n = ξ. (ii) We say that a sequence ξ n in X is Cauchy if, for every > 0, there exists N = N( ) In what follows, we recollect the notion of completeness: Definition 4. [10].An extended-bmetric space (X, d θ ) is complete if every Cauchy sequence in X is convergent. Lemma 1. [10] Suppose that the pair (X, d θ ) is a complete extended-bMS, where d θ is continuous.Then every convergent sequence has a unique limit. We say that the set O(ξ; ∞) is the orbit of T. Definition 6. Suppose that the pair (X, Remark 3. It is evident that the orbital continuity of T yields orbital continuity of any iterative power of T, that is, orbital continuity of T m for any m ∈ N. Definition 7. [12] Suppose that T is a self-mapping on a non-empty set X. 
Let α : X × X → [0, ∞) be a mapping.Then T is called an α-orbital admissible if, for all ξ ∈ X, we have ( Remark 4. We note that any α-admissible mapping is also an α-orbital admissible mapping.(see, e.g., [12]). Main Results Throughout the paper, we shall assume that d θ is a continuous functional. Lemma 2. [13] Let (X, d θ ) be an extended b-metric space.If there exists q ∈ [0, 1) such that the sequence {x n }, for any n ∈ N, then the sequence {x n } is Cauchy in X. Proof.Let {x n } n∈N be a given sequence.By employing Inequality (3), recursively, we derive that Since q ∈ [0, 1), we find that lim On the other hand, by (d θ 3), together with triangular inequality, for p ≥ 1, we derive that Notice that the inequality above is dominated by On the other hand, by employing the ratio test, we conclude that the series which is why we obtain the desired result.Thus, we have Consequently, we observe for n ≤ 1, p ≤ 1 that Letting n → ∞ in Equation ( 7), we conclude that the constructive sequence {x n } is Cauchy in the extended b-metric space (X, d θ ).Lemma 3. Let T : X → X be an α-orbital admissible mapping and x n = Tx n−1 , n ∈ N. If there exists x 0 ∈ X such that α(x 0 , Tx 0 ) ≥ 1, then we have Proof.By assumption, there exists a point x 0 ∈ X such that α(x 0 , Tx 0 ) ≥ 1.On account of the definition of {x n } ⊂ X and owing to the fact that T is α-orbital admissible, we derive Recursively, we have Theorem 2. Suppose that T is an orbitally continuous self-mapping on the T-orbitally complete extended-bMS (X, d θ ).Assume that there exists k ∈ [0, 1) and a ≥ 1 such that α(x, y) min{d θ (Tx, Ty), d θ (x, Tx), d θ (y, Ty)} − a min{d θ (x, Ty), d θ (Tx, y)} ≤ kd θ (x, y)) for all x, y ∈ X.Furthermore, we presume that (i) T is α-orbital admissible; (ii) there exists x 0 ∈ X such that α(x 0 , Tx 0 ) ≥ 1; (iii) Then, for each x 0 ∈ X, the sequence {T n x 0 } n∈N converges to a fixed point of T. Proof. By assumption (ii), there exists a point x 0 ∈ X such that α(x 0 , Tx 0 ) ≥ 1.We construct the sequence {x n } in X such that If x n 0 = x n 0 +1 = Tx n 0 for some n 0 ∈ N 0 , then x * = x n 0 forms a fixed point for T that the proof finishes.Hence, from now on, we assume that On account of the assumptions (i) and (ii), together with Lemma (3), Inequality ( 8) is yielded, that is, By replacing x = x n−1 and y = x n in Inequality ( 9) and taking Equation ( 12) into account, we find that min {d θ (Tx or, min {d θ (x n , x n+1 ), Since k ∈ [0, 1), the case d θ (x n−1 , x n ) ≤ kd θ (x n−1 , x n ) is impossible.Thus, we conclude that On account of Lemma 2, we find that the sequence {x n } is a Cauchy sequence.By completeness of (X, d θ ), the sequence x n converges to some point u ∈ X as n → ∞.Owing to the construction x n = T n x 0 and the fact that (X, d θ ) is T-orbitally complete, there is u ∈ X such that x n → u.Since T, is orbital continuity, we deduce that x n → Tu.Accordingly, we conclude that u = Tu. Let us first notice that for any x ∈ {1, 2, 3, 4}, the sequence {T n x} tends to 1 when n → ∞.For this reason, we can conclude that the mapping T is orbitally continuous and lim n,m→∞ θ(T n x, T m x) = 3 < 4 = 1 k , so (iii) is satisfied.It can also be easily verified that T is orbital admissible.If x = 1 or y = 1, then d(1, T1) = 0 so Inequality (9) holds.We have to consider the following cases. Therefore, all the conditions of Theorem 2 are satisfied and T has a fixed point, x = 1. 
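A brute-force numerical check in the spirit of this example is sketched below. The map, metric and constants are hypothetical choices of ours (the example's actual T and θ are given in the display omitted above); the script simply verifies the contractive inequality of Theorem 2 with α ≡ 1 over all pairs and follows the Picard orbits to the fixed point.

```python
from itertools import product

# Brute-force check of a Ciric-type condition in the spirit of Theorem 2,
# for a *hypothetical* map (not the example from the paper).  Here
# X = {1, 2, 3, 4}, d(x, y) = (x - y)^2 (a b-metric with s = 2),
# alpha = 1 and a = 1.
X = [1, 2, 3, 4]
T = {1: 1, 2: 1, 3: 1, 4: 2}          # hypothetical self-map with fixed point 1
d = lambda x, y: (x - y) ** 2
a, k = 1.0, 0.5

def lhs(x, y):
    m1 = min(d(T[x], T[y]), d(x, T[x]), d(y, T[y]))
    m2 = min(d(x, T[y]), d(T[x], y))
    return m1 - a * m2

ok = all(lhs(x, y) <= k * d(x, y) for x, y in product(X, X))
print("contractive condition holds for all pairs:", ok)

# Picard iteration from every starting point converges to the fixed point 1.
for x0 in X:
    orbit, x = [x0], x0
    for _ in range(5):
        x = T[x]
        orbit.append(x)
    print(x0, "->", orbit)
```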
In Theorem 2, if we presume that α(x, y) = 1 and θ(x, y) = 1, then we deduce the renowned non-unique fixed point theorem of Ćirić [14] as follows: Corollary 1. [ Ćirić [14]] Suppose that T is an orbitally continuous self-map on the T-orbitally complete standard metric space (X, d).We presume that there is a k ∈ [0, 1) such that min{d(Tx, Ty), d(x, Tx), d(y, Ty)} − min{d(x, Ty), d(Tx, y)} ≤ kd(x, y) for all x, y ∈ X.Then, for each x 0 ∈ X, the sequence {T n x 0 } n∈N converges to a fixed point of T. Theorem 3. Suppose that T is an orbitally continuous self-map on the T-orbitally complete extended-bMS (X, d).We presume that there exists k ∈ [0, 1) such that α(x, y)Γ(x, y) ≤ kd θ (x, y) for all x, y ∈ X, where where R(x, y) = 0. Furthermore, we assume that (i) T is α-orbital admissible; (ii) there exists x 0 ∈ X such that α(x 0 , Tx 0 ) ≥ 1; (iii) Then, for each x 0 ∈ X, the sequence {T n x 0 } n∈N converges to a fixed point of T. Proof.As a first step, we construct an iterative sequence {x n } as in the proof of Theorem 2. For this purpose, we take an arbitrary initial value x ∈ X and define the following recursion: x 0 := x and x n = Tx n−1 for all n ∈ N. We also suppose that as is discussed in the proof of Theorem 2. For x = x n−1 and y = x n , Inequality ( 16) becomes (taking into account Lemma (3)) where We obtain that which is a contraction, since k ∈ [0, 1).Consequently, we deduce that Applying Equation ( 22) recurrently, we find that The rest of the proof is a verbatim restatement of the related lines in the proof of Theorem 2. Then, for each x 0 ∈ X, the sequence {T n x 0 } n∈N converges to a fixed point of T. Proof.Basically, we shall use the same technique that was used in the proof of Theorem 2. We built a recursive {x n }, x 0 := x and x n = Tx n−1 for all n ∈ N (25) for an arbitrary initial value x ∈ X. Regarding the discussion in the proof of Theorem 2, we presume that For x = x n−1 and y = x n , Inequality (24) becomes (taking into account Lemma 3) where a contraction, since k ∈ [0, 1).Accordingly, we conclude that Recursively, we derive that By following the related lines in the proof of Theorem 2, we complete the proof. Then, for each x 0 ∈ X, the sequence {T n x 0 } n∈N converges to a fixed point of T. Proof.As a first step, we shall construct an recursive sequence {x n = Tx n−1 } n∈N , for an arbitrary initial value x 0 := x ∈ X, as in the proof of Theorem 2. By following the same steps in the proof of Theorem 2, we deduce that adjacent terms of the sequence {x n } should be chosen distinct, that is, x n = x n−1 for all n ∈ N. (m b 1 ) m b (x, y) = 0 if and only if x = y.(m b 2) m b (x, y) = m b (y, x) for all x, y ∈ X. (m b 3) m b (x, y) ≤ s[m b (x, z) + m b (z, y)]for all x, y, z ∈ X, where s ≥ 1.
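The b-metric axioms (m_b 1)–(m_b 3) listed above can be checked numerically for Example 1; the sketch below samples real numbers at random, confirms the relaxed triangle inequality with s = 2, and exhibits a triple violating the ordinary triangle inequality:

```python
import random

def m_b(x, y):
    """Candidate b-metric from Example 1: squared difference on the reals."""
    return (x - y) ** 2

random.seed(0)
points = [random.uniform(-10, 10) for _ in range(200)]
triples = [tuple(random.choice(points) for _ in range(3)) for _ in range(10000)]

# (m_b 3) with s = 2 holds on every sampled triple, consistent with the
# elementary inequality (a + b)^2 <= 2a^2 + 2b^2.
s = 2
print(all(m_b(x, y) <= s * (m_b(x, z) + m_b(z, y)) for x, z, y in triples))

# The ordinary triangle inequality (s = 1) fails, e.g. for x = 0, z = 1, y = 2,
# so m_b is a b-metric but not a metric, as stated in Example 1.
print(m_b(0, 2), ">", m_b(0, 1) + m_b(1, 2))
```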
2,851.8
2018-05-02T00:00:00.000
[ "Mathematics" ]
An eye-tracking based robotic scrub nurse: proof of concept Background Within surgery, assistive robotic devices (ARD) have reported improved patient outcomes. ARD can offer the surgical team a “third hand” to perform wider tasks and more degrees of motion in comparison with conventional laparoscopy. We test an eye-tracking based robotic scrub nurse (RSN) in a simulated operating room based on a novel real-time framework for theatre-wide 3D gaze localization in a mobile fashion. Methods Surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG) assisted by distributed RGB-D motion sensors. To select instruments, surgeons (ST) fixed their gaze on a screen, initiating the RSN to pick up and transfer the item. Comparison was made between the task with the assistance of a human scrub nurse (HSNt) versus the task with the assistance of robotic and human scrub nurse (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan’s), metric data on performance and team communication were measured. Results Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs R&HSNt usage revealed no significant difference in mental, physical or temporal demands and no change in task performance. ST reported significantly higher frustration score with R&HSNt. Van der Laan’s scores showed positive usefulness and satisfaction scores in using the RSN. No significant difference in operating time was observed. Conclusions We report initial findings of our eye-tracking based RSN. This enables mobile, unrestricted hands-free human–robot interaction intra-operatively. Importantly, this platform is deemed non-inferior to HSNt and accepted by ST and HSN test users. Supplementary Information The online version contains supplementary material available at 10.1007/s00464-021-08569-w. Within laparoscopic surgery, robotic devices have been developed to improve clinical outcomes, in so consolidating the shifts towards minimally invasive surgery (MIS). The first marketed surgical robot was the voice controlled laparoscopic camera holder (AESOP). Since then a new era of robotics emerged [1]. The da Vinci® (Intuitive Surgical, Inc.), first emerged in 1997, is a slave robotic manipulator controlled via a computer console by a master-surgeon. To date da Vinci® reports an excess of 1.5 million laparoscopic surgeries, demonstrating reduced post-operative pain, hospital stay and improved surgical accessibility and view in confined anatomical spaces [2]. Such findings have encouraged research of more sophisticated assistive robotic devices (ARD) during surgery. ARD in surgery describes machinery that is controlled by the surgeon in support of surgical task delivery. Recently, the United Kingdom has seen new legislation towards artificial intelligence funding to establish wider ARD within surgery supported by strong evidence [3,4]. ARD has been hypothesized as an approach to disrupt preventable healthcare human errors, described as one of the main culprits resulting in patient harm [5]. ARD afford surgical teams' touchless interaction, enhanced information accessibility and task execution; this is apparent in ad-hoc intra-operative retrieval of patient notes or radiological images [6]. From a surgeon's perspective, ARD may be a "third hand", thereby allowing the performance of a wider breadth of tasks. Gestix is one example and Other Interventional Techniques of an automated system which enables the surgeon touchless electronic patient record navigation. 
The surgeon is able to access imaging using their hand gesture intra-operatively. Hand gesture is captured through a 2D Canon VC-C4 camera mounted on top of a flat screen monitor, in so designating predetermined hand gestures respective functions such as replacing or magnifying the image [7]. In laparoscopic surgery using an ARD, such as the da Vinci® surgical robot, gives the surgeon seven degrees of freedom (DoF) compared to conventional four DoF; this represents the same range of a human wrist in open surgery [8]. ARD can also play a role to improve staff and patient safety, workflow and overall team performance. An example of automated laparoscopic devices initiated by the surgeon's head pose has been reported by Nhayoung Hong et al., which consequently triggers an endoscopic control system with four DoF to move, in so achieving the desired operative field of view. The system reports 92% accuracy, a short system response time of 0.72 s and a 10% shortening in task completion [9]. HERMES voice recognition interface (VRI) enables pre-determined voice initiated commands during laparoscopic cholecystectomy. The surgeon is able to remotely activate the laparoscopic camera and light source, insufflator to desired intra-abdominal pressure, and switch off all the equipment. One hundred patients were randomized into HERMES assisted surgery and standard laparoscopic surgery. Overall, the HERMES VRI assisted surgery showed significant reduction in completion time across all outcomes measured [10]. Robotic scrub nurses (RSN) support the surgeon in selecting and delivering surgical instruments. The Gestonurse is a magnetic RSN based on surgeon hand gestures which demonstrated 95% accuracy in trials, whilst Penelope is described as a semi-autonomous system based on verbal commands and machine learning [11]. Penelope developers report capability of desired instrument prediction, selection and delivery [11]. These are encouraging but limited by the practicality of disruptive hand gestures and failures in voice recognition when scrubbed in noisy operating theatres [12]. Consequently, there is a need to explore the feasibility of all sensory modalities, in turn enhancing the functionality of future RSN, to enable the surgeon an array of choice during user-RSN interaction. Within this study, we introduce a novel perceptually-enabled smart operating room concept, based on gaze controlled RSN [13]. This allows the surgeon unrestricted mobility, as naturally occurring intra-operatively [14]. Not only can gaze be tested for its use as a sensory modality to execute RSN tasks intra-operatively, but the integration of eye tracking glasses (ETG) offer the additional advantage over other sensory based interaction of being able to measure real-time surgeon visual behavior. In turn, correlations can be made with intra-operative surgeon mental workload, concentration and fatigue via standard measures including blink rate, gaze drift, and pupillary dilatation [15,16]. This interface enables dynamic gaze-based surgeon interaction with the RSN to facilitate practical streamlined human-computer interaction in the hope to improve workflow efficiency, patient and staff safety and address assistant shortages. We report on the usability and acceptability of our RSN and explore their impact on intra-operative communication taxonomy. Ethics Study ethics approval was granted by the Imperial College Research Ethics Committee (ICREC) reference 18IC4745. 
System overview System functionality relies on the user's 3D point of regard (PoR), provided by a real-time framework developed by Kogkas et al. [13,14]. The user wears an eye tracker, resembling framed glasses (Fig. 1). The pose of the ETG scene camera is estimated in a world coordinate system (WCS). The scene camera pose is equivalent to the user's head pose, and the gaze ray provided by the ETG helps map fixations to 3D points in the WCS. The WCS is defined by multiple co-registered RGB-D sensors and a motion capture system. These are depth sensing devices integrated with an RGB camera. The 3D fixation, combined with parameters retrieved by an off-line calibration routine, yields the user's fixation information on a screen. Finally, a graphical user interface (GUI) on the screen guides the user to gaze-controlled instrument selection, which in turn is delivered by an articulated robot. Equipment For eye-tracking, the SMI Eye Tracking Glasses 2 Wireless (SMI ETG 2w, SensoMotoric Instruments GmbH) are used. For RGB-D sensing, the Microsoft Kinect for Windows v2 time-of-flight camera (30 Hz, field of view-FoV of depth sensing 70° × 60°, operating distances 0.5-4.5 m) and for head pose tracking the OptiTrack motion capture system (NaturalPoint, Inc.) is used, with four Prime 13 cameras (240 fps, FoV 42° × 56°). The robot arm is a UR5 (Universal Robots A/S), a 6 DoF collaborative robot with a reach radius of up to 850 mm, maximum 5 kg pay-load and weighing 18.4 kg. It has the Robotiq FT-300 force-torque mounted on its end-effector. For the instrument selection GUI, a 42′′ LG screen is used. Offline calibration Eye fixations were mapped to the ETG's scene camera via a calibration routine, where users fixate at 9 pre-determined points in the scene camera's FoV, while keeping their head pose fixed. The position of the surgical instruments in respect to the robot is defined by manually moving the endeffector towards each instrument and recording the target pose. Instruments are intentionally positioned on the tray with corresponding instrument images. Interface design The GUI displayed on the screen consists of two parts: instrument selection (left 2/3 of the screen) and the image navigation (right 1/3) as shown in Fig. 2. Six designated blocks equally split demonstrate surgical instruments. When the user visually fixates on any block during instrument selection, a traffic light sequence (red-amber-green) initiates, followed by audio feedback. Starting with red block borders, dwell time of 0.6 s into the same block turns the borders into orange, then a further 1 s turns them into green. The interface is based on pilot experiments and provides audible and visual feedback for the detected fixation on an instrument block (red), signaling before final selection (amber) and action confirmation (green). The time intervals are decided based on a balance between avoiding the Midas touch problem (unintentional gaze-based selection) and disrupting task workflow. As shown in Fig. 2 right, three slides are presented to provide task workflow relevant information. The user can navigate through the slides by fixating on the top and bottom 1/6 parts of the screen for previous and next slide respectively. Dwell time here is 1 s. Application workflow The user wearing the ETG is able to roam freely. A traffic light selection sequence is triggered when fixation on a block is detected on the screen. 
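The dwell-time "traffic light" logic described above can be summarised in a short schematic sketch; the class structure, state names, and timing source are illustrative assumptions rather than the authors' implementation, but the 0.6 s and 1.0 s thresholds follow the text:

```python
import time

class DwellSelector:
    """Schematic sketch of the traffic-light selection logic: red on fixation,
    amber after 0.6 s of dwell on the same block, green (confirm) after a
    further 1.0 s, as reported in the text."""

    RED_TO_AMBER = 0.6    # seconds
    AMBER_TO_GREEN = 1.0  # additional seconds

    def __init__(self):
        self.block = None
        self.t_start = None

    def update(self, fixated_block, now=None):
        """Feed the currently fixated GUI block (or None); returns the state."""
        now = time.monotonic() if now is None else now
        if fixated_block != self.block:
            # Gaze moved to a different block (or away): restart the sequence.
            self.block, self.t_start = fixated_block, now
            return "red" if fixated_block is not None else "idle"
        if self.block is None:
            return "idle"
        dwell = now - self.t_start
        if dwell >= self.RED_TO_AMBER + self.AMBER_TO_GREEN:
            return "green"   # confirm selection and trigger the robot pick-up
        if dwell >= self.RED_TO_AMBER:
            return "amber"
        return "red"

# Illustrative use with synthetic timestamps in place of a live eye tracker.
sel = DwellSelector()
for t, block in [(0.0, "scissors"), (0.5, "scissors"), (0.7, "scissors"), (1.7, "scissors")]:
    print(t, sel.update(block, now=t))   # red, red, amber, green
```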
The robot attaches to the selected stainless-steel instrument via a magnetic gripper on its end effector and delivers it to the user. Following user instrument collection, which is sensed by the F/T sensor mounted on the robot end effector, the robot returns to a pre-configured stationary position. Experimental setup and task Surgeons (ST) were recruited to perform ex vivo resection of a pig colon and hand sewn end-to-end anastomosis. Each surgeon performed two experiments in randomized order: Each procedure duration for each participant was estimated at 1 h, although no time restriction was introduced by the research team. Six relevant instruments were considered and assigned to a RSN instrument tray. All instruments were made of stainless steel. These included standard surgical instruments: a non-toothed (DeBakey) forceps, curved (Mcindoe) scissors, suture scissors, two surgical (mosquito) clips and a 2.0 vicryl suture on a surgical (mosquito) clip. The main stages of the task are presented on the right part of the screen (Fig. 2). For the R&HSNt, the ST performs offline ETG calibration for 1 min. During the task, the surgeon looks at the screen to select the instrument. Once collected from the RSN and used, an assistant surgeon is prompted to or instinctively returns the instrument to its tray position. ST verbally communicates with the HSN for more instruments and vocally indicates if a wrong instrument is delivered. If eye-tracking recalibration is necessary (due to inadvertent and considerable movement of the ETG), the task continues afterwards. During the HSNt the setup is identical without the screen or RSN. ST communicates with the HSN to deliver instruments. The ETG is utilized to capture and analyze visual behavior. During both experiments, distractions are introduced to the HSN. A scrub nurse assistant asks the HSN to stop and perform an instrument count twice and solve a cognitive puzzle at specific stages; start (dissection of mesentery), middle (formation of posterior wall bowel anastomosis), end (formation of anterior wall bowel anastomosis). After each task, the ST and HSN completed NASA-TLX and Van der Laan's technology acceptance questionnaires to compare perspectives of both groups (ST and HSN) on both experiments (HSNt and R&HSNt). Participant recruitment Participants were recruited voluntarily and could withdraw at any stage from the study. STs with normal and corrected vision (wearing glasses) were included in recruitment. Only participants aged 18 or older were included. All HSN included were exclusively theatre scrub nurses in their usual day to day nursing role. At the time of our study design, three groups of up to 20 participants based on their surgical experience would be evaluated; these are junior surgeons with 3-4 years surgical experience, middle grade surgeons with 5-7 years of experience and expert surgeons who had completed their surgical training such as fellows and consultants. Inclusion criteria consisted of STs who were specialist surgical registrars with a minimum of 3 years of surgical registrar experience. All surgeons included were novel to gaze based ARD. Task load After each task, the ST and HSN were asked to complete a NASA-TLX (System Task Load Index defined by NASA) questionnaire. The scale assesses the mental, physical and temporal demand, own performance, frustration levels and effort during the task. An overall task load score is calculated as described in [15]. 
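For orientation, one common way to aggregate NASA-TLX ratings is the raw (unweighted) mean of the six subscales; the aggregation actually used here is the one described in reference [15], so the following sketch, with invented ratings, is only an approximation:

```python
# Hedged sketch: the raw (unweighted) NASA-TLX score is the mean of the six
# subscale ratings (0-100); the exact aggregation used in the study is the one
# described in its reference [15] and may include pairwise weighting.
def nasa_tlx_raw(mental, physical, temporal, performance, effort, frustration):
    scales = [mental, physical, temporal, performance, effort, frustration]
    return sum(scales) / len(scales)

# Illustrative (made-up) ratings for one participant after a task.
print(nasa_tlx_raw(55, 30, 40, 25, 50, 60))  # -> 43.33...
```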
Technology acceptance Technology usability and satisfaction feedback was collected immediately following the R&HSNt using the Van Der Laan acceptance scale [16]. The scale consists of five usefulness metrics (useful/useless, good/bad, effective/superfluous, assisting/worthless, raising alertness/sleep-inducing) and four satisfaction metrics (pleasant/unpleasant, nice/annoying, likeable/irritating, desirable/undesirable). Each item was on answered a 5-point semantic differential from − 2 to + 2. Workflow metrics Performance was assessed in terms of overall task completion time. The task starts with the surgeon assistant's oral instruction "START'" and finishes with the oral indication "FINISH". Workflow interruptions were measured for both tasks. Interruptions were defined in the HSNt as the events of a wrong instrument delivery by the HSN and the interruption of the task by the ST for > 3 s waiting for instrument delivery. During the R&HSNt, the HSN interruptions and RSN-related events are measured, such as incorrect delivery of instruments and eye-tracking recalibrations. Instrument delivery times were measured for both tasks ( Table 2). For HSN this refers to the interval between ST verbal commands to HSN delivery. For RSN it is defined as the interval between the moment the ST starts gazing on the screen to locate the instrument, until the robot delivers the instrument. Visual behavior Eye gaze data were collected during the experiments. Analysis of the metrics related to task load, attention and fatigue was conducted, namely fixations and pupil diameter. Verbal communication Verbal communication was observed through videos recorded during the experiments. A new verbal encounter is where there was silence for more than 3 s or a change in the type of communication classified as task, social or gratitude related communication. Data analysis The comparisons demonstrated in the following sections were conducted using within-subjects analysis when comparing: For within-subjects analysis, the Shapiro-Wilk test for normality of the paired differences was performed, followed by paired-samples t-test when the test was successful. The Wilcoxon signed-rank test was used in non-parametric datasets. For between-subjects analysis, the Shapiro-Wilk test for normality of the samples was performed, followed by independent-samples t-test when the test was successful. In case of non-normal distribution of any of the two samples, the Mann-Whitney U test was applied. For all types of statistical analysis tests, a p-value < 0.05 was considered significant. Data was missing for participant 5 due to technical issues during R&HSNt (verbal communications, HSN and RSN instrument delivery times). Participants Ten ST participated (7 male and 3 female). Two had corrected vision. Recruitment of staff was logistically challenging due to the complexity of multi-disciplinary ST and HSN recruitment. As such middle grade surgical registrars were recruited. Surgeons were between 30 and 40 years with 6 years surgical experience. Five trained theater scrub nurses were recruited. One ST, with 2 years surgical experience, assisted the ST in all experiments. A medical student acted as scrub nurse assistant. Task load (NASA-TLX) The NASA-TLX scores are depicted in Fig. 3. ST subjective feedback reported no significant difference overall Technology acceptance (Van der Laan) The ST group reported usefulness score of 0.5 ± 0.73 and satisfying score of 0.43 ± 0.74 (Fig. 4). 
ST reported that the RSN was likable 0.4 ± 0.84, useful 0.5 ± 1.08 and pleasant 0.8 ± 0.79. ST feedback was neutral about RSN desirability 0.1 ± 0.99. HSN feedback reported usefulness score of 0.76 ± 0.92 and satisfying score of 0.78 ± 0.79. HSN reported RSN was likable 0.6 ± 1.26, useful 0.7 ± 1.42 and pleasant 0.9 ± 0.99. RSN was perceived as desirable 0.7 ± 0.82. Upon comparison of ST vs HSN using RSN there was no statistically significant difference in technology acceptance domains; p = 0.491 for usefulness and 0.32 for satisfaction. Overall responses were positive in ST and HSN groups (usefulness score of 0.5 ± 0.73/satisfying score of 0.43 ± 0.74 vs usefulness score of 0.76 ± 0.92 and satisfying score of 0.78 ± 0.79, respectively). There was no statistical significance in the total number of workflow interruptions per task (p = 0.84) between R&HSNt and HSNt (2.3 ± 0.95 vs 2.4 ± 1.26, respectively). Workflow metrics During HSNt, the HSN reported a median instrument delivery time of 2.2 s (interquartile range 3.0). In R&HSNt, HSN reported 5.3 s (6.7) and the RSN 6.1 s (3.3). HSN within HSNt was significantly faster compared with HSN and RSN in R&HSNt, and non-significant across HSN and RSN in the R&HSNt. Visual behavior Comparative analysis between the two tasks showed no significant difference in all metrics related to gaze behavior, except for the pupillometry metrics. Fixation rate per second was 2.7 ± 0.46 HSNt vs 2.59 ± 0.62 R&HSNt, p = 0.455. Average fixation duration in milliseconds was 250 ± 58 vs 269 ± 69, p = 0.298. The average pupil diameter of both eyes during the HSNt (4.26 ± 0.71 mm) is larger than during the R&HSNt (3.74 ± 0.67 mm, p < 0.001). It has been shown that pupil dilation is related to increased difficulty with a task and cognitive effort, while decreased pupil diameter may indicate tiredness [17,18]. However, variations in the brightness of the environment can also produce changes in the pupil size. In our experiments, lighting conditions were kept uniform by using the same lighting, blinding the windows, and using screens around the operating space. Verbal communication There was a statistically significant difference in verbal communication upon comparison, in that HSNt exhibited twice as many verbal communication episodes; task related communication 34.3 HSNt vs 16.8 R&HSNt, p = 0.008. There was no significant difference in social or gratitude related communication (Table 3). Subjective feedback The general consensus was positive about the potential of RSN in surgery. All ST reported that fixating on the screen away from the operative view impacted on overall task flow. Seven STs mentioned a combination with verbal based commands would enhance the RSN. An intuitive RSN platform which can learn surgeon selections and predict instruments was raised. Three ST highlighted HSN would supersede an RSN in the event of unpredictable events or emergencies. All HSN were positive in describing the RSN and had no concerns over role replacement. Discussion Our novel eye tracking RSN augments existing modalities in facilitating surgeon-RSN interaction [9,11]. The surgeon fixates on the desired instrument via a screen, initiating the respective retrieval and delivery by the RSN [13]. Our study addressed gaze-controlled RSN assistance compared against traditional setups of a scrub nurse alone, to identify system usability and limitations. 
Task metrics Use of gaze demonstrated no significant difference in the overall task completion time (p = 0.074), despite longer instrument delivery times in R&HSNt vs HSNt ( Table 2). Randomization of sequence of R&HSNt vs HSNt was performed to eliminate learning bias. Similarly, recruited ST were at similar residency training. All RSN tasks were completed, with no significant difference in task interruptions in R&HSNt vs HSNt. The RSN was occasionally interrupted for recalibration. RSN exhibited 100% correct instrument selection rate. HSNt interruptions mostly occurred during instrument count/puzzle solving tasks to simulate intra-operative nurse disruptions. These interruptions included incorrect instrument transfer or instrument transfer delay. An instrument count is protected scrub nurse time to avoid inadvertent loss or retention of surgical instruments inside the patient [19,20]. These observations are congruous with the reported 3.5% errors in drug administration during nurse task disruption [21]. The disruption may result in cognitive load shifting towards that new task, affecting time to primary task completion or total neglect [22]. This poses a risk within the context of nurse shortages, especially in more complex and longer operative tasks with frequent scrub nurse interruption and instrument demand [23]. User metrics NASA-TLX feedback shows positive perception towards the RSN (Fig. 3). ST and HSN users perceived no significant differences in task performance between HSNt and R&HSNt. This infers non-inferiority of the RSN. ST mental demands in delivering tasks were not significantly different across HSNt vs R&HSNt, suggesting no cognitive overload, linked to negative performance, thus avoidable adverse outcomes [24]. All surgeons were novel to gaze based ARD which may partly explain ST frustration. Qualitative feedback reported interrupting gaze away from the surgical field caused frustration. To negate this, we suggest researchers develop RSN that utilizes a combination of gaze with light see-through wearable displays, hand gesture and voice-based recognition, allowing surgeons the choice between communication modalities, as within conventional surgery. This freedom should address user frustration, improve system practicality and enhance the surgeon's operative skill development and ability [25]. In our study, we simulated open surgery, whereas in laparoscopic procedures, the operative field is screen-based, hence the RSN utility would be less disruptive. Further studies are planned. Standardized Van der Laan technology acceptance scores reported positive outcomes across usefulness and satisfaction with HSN group exhibiting higher scores (ST and HSN groups: usefulness score of 0.5 ± 0.73/satisfying score of 0.43 ± 0.74 vs usefulness score of 0.76 ± 0.92 and satisfying score of 0.78 ± 0.79, respectively) (Fig. 4). HSN dismissed fears about ARDs replacing their role, but welcomed their assistance. The RSN enables HSN to perform more complex tasks within major multisystem operations where specialist instrument assemblies, or an "extra hand" is needed. Communication taxonomy Communication intra-operatively impacts on patient safety and takes place between the surgeon and scrub nurse via verbal and non-verbal cues. Non-verbal cues include body language, eye contact, and hand gestures [26]. Researchers report a failure of shared information quality a third of the time, risking patient safety due to cognitive overload and task interruption [27]. 
The impact of ARD on communication breakdown in surgery, therefore information relay and safe task execution, has been questioned [28]. The author stipulates a structured team-based communication framework would negate any communication breakdown and enhance team usage of ARD [29]. We demonstrate a significant reduction in verbal communication frequency between ST and HSN, accounted by task related communication, where the surgeon asked for an instrument or operative command (Table 3). Gratitude was displayed to HSN following their instrument delivery (mean 5.0 HSNt vs 1.7 R&HSNt). This difference during R&HSNt reflects reduced HSN instrument delivery. The social communication, defined as communication unrelated to task performance, showed no significant difference across HSNt and R&HSNt. Arguably, this communication type enhances team personability thereby improving team dynamics and reducing communication failures [30]. Emerging evidence, during the COVID-19 pandemic, suggests wearing additional personal protective equipment could impede surgeons' ability to communicate with the surgical teams including the scrub nurse. This may be related to reduced visibility and voice clarity through a filtering surgical mask [31]. In such circumstance, the RSN is an alternative useful adjunct. Additionally, the RSN may limit avoidable staff exposure to infected patients during aerosol generating procedures. Crew resource management: a shift in paradigm Crew resource management derives from aviation to describe management structures which optimize available resources including people, processes and equipment to maintain safety and efficiency [32]. One example is the potential risk reduction in sharps injuries during instrument transfer estimated at up to 7% of all surgical procedures [33]. Using the RSN enables the HSN to be more involved in complex surgeries repeatedly, utilizing and reinforcing their experience, causing a paradigm shift in HSN job roles towards "assisting". Similarly, HSN can become experts in those procedures, enhancing performance and patient outcomes, as adopted across America [34]. We observed the HSN often only responded to verbal communication initiated by the surgeon. This may reflect old hierarchical attitudes leading to a lack of nursing empowerment to raise clinical opinions and challenge their concerns; "Cannot intubate, Cannot ventilate" is an example of this. A young woman admitted for routine surgery could not be intubated. A nurse brought an emergency airway kit for tracheostomy and then informed the anesthetists who dismissed her. Delayed tracheostomy led to fatal brain anoxia [35]. The RSN enables the senior surgeon, in effect the team leader, to adopt a type of hands off leadership, coined Lighthouse Leadership. Lighthouse Leadership is used within cardiac arrest resuscitations. Team leaders only directly intervene when needed, taking a "step back" to observe and plan situations more effectively. This leadership ethos embeds stronger team structures, empowers surgical residents in training, in so significantly improving task performance and resuscitation outcomes [36]. Limitations and future work In this study, we report on a gaze based RSN which was tested within an open surgical approach. We accept MIS lends itself as a more natural setting where the surgeon looks at the monitor to operate, in so avoiding any gaze related interruptions during instrument selection. 
Open surgery was selected within this setting to enable a skilled procedure of intermediate time duration which demanded constant instrument replacement throughout the task. This in turn allowed the testing of the gaze-based screen fixation and instrument selection to demonstrate its reliability when used frequently. In comparison, a MIS such as laparoscopic cholecystectomy would require minimal instrument replacement and would not reflect the usability of the system. Admittedly, we plan future testing in MIS to demonstrate its utility. The RSN system appears physically large but it is mobile and has been positioned as a second surgical assistant; users expressed no concerns regarding the size of impediment in task completion. Another limitation of the study is the requirement of an assistant to return the instruments once delivered by the RSN then used by the ST. We acknowledge this RSN system is a proof of concept for gaze as a modality of interaction in a master/slave interface and further development in the RSN is required to return the instrument to the instrument tray after use. We believe this would greatly enhance its practicality as an independent RSN. We stipulate an RSN system should combine a variety of sensory modalities such as voice, hand gesture and gaze to mimic natural human-human interaction during surgery. This would address user frustration through the use of gaze alone, particularly within open surgery rather than MIS. The use of gaze offers exciting future potential supported by continuous developments in artificial intelligence (AI) and computer vision; this will allow surgical phase recognition and prediction. The use of an RSN in combination with AI could pre-empt surgeon needs of instruments and make them available earlier, while saving this "dead time" for the HSN to use more efficiently. Conclusions In this study, we report on a gaze based RSN which enables the surgeon hands-free interaction and unrestricted movement intra-operatively. Initial findings for proof of concept demonstrate acceptability of RSN by ST and HSN; all participants were novel to this system. Importantly, RSN is shown to be non-inferior to conventional HSN assistance, in that operative completion rate, duration and perceived user task performance were not significantly different. No instrument delivery errors were reported by the RSN. Social communication behaviors amongst staff did not significantly differ intra-operatively. There is particular scope in laparoscopic surgery to use gaze-based RSN, where the ST naturally fixates on the screen, in so reducing RSN related frustration during gaze interruptions observed within open ex-vivo surgery during instrument selection. In the latter, researchers should endeavor in developing an intuitive RSN based on multiple sensory modalities, in so affording the ST the choice in interaction as with human-human interaction. We aim to expand our eye tracking RSN to recognize and track instruments in real-time, enabling workflow segmentation, task phase recognition and task anticipation. Funding This research is supported by the NIHR Imperial Biomedical Research Centre (BRC). Declarations Disclosures Dr Ahmed Ezzat, Dr Alexandros Kogkas, Dr Rudrik Thakkar, Dr Josephine Holt, Dr George Mylonas and Professor Ara Darzi have no conflicts of interest or financial ties to disclose. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
6,756.4
2021-06-08T00:00:00.000
[ "Engineering", "Medicine" ]
Gram matrix: an efficient representation of molecular conformation and learning objective for molecular pretraining Abstract Accurate prediction of molecular properties is fundamental in drug discovery and development, providing crucial guidance for effective drug design. A critical factor in achieving accurate molecular property prediction lies in the appropriate representation of molecular structures. Presently, prevalent deep learning–based molecular representations rely on 2D structure information as the primary molecular representation, often overlooking essential three-dimensional (3D) conformational information due to the inherent limitations of 2D structures in conveying atomic spatial relationships. In this study, we propose employing the Gram matrix as a condensed representation of 3D molecular structures and for efficient pretraining objectives. Subsequently, we leverage this matrix to construct a novel molecular representation model, Pre-GTM, which inherently encapsulates 3D information. The model accurately predicts the 3D structure of a molecule by estimating the Gram matrix. Our findings demonstrate that Pre-GTM model outperforms the baseline Graphormer model and other pretrained models in the QM9 and MoleculeNet quantitative property prediction task. The integration of the Gram matrix as a condensed representation of 3D molecular structure, incorporated into the Pre-GTM model, opens up promising avenues for its potential application across various domains of molecular research, including drug design, materials science, and chemical engineering. Introduction Small chemical molecules interact with biological macromolecules based on the principle of shape complementarity, forming the cornerstone of life processes regulation and drug therapy [1,2].Researchers are dedicated to studying molecule representations and properties [3][4][5][6][7][8][9][10][11], thereby advancing the drug discovery process [12].These approaches, such as Attentive FP [13], D-MPNN [14], and TrimNet [6], primarily focus on using the graph convolutional neural network to represent molecular topological structure.Except for this, some strategies exist to improve model representational capability.Models relying on large-scale self-supervised pretraining like MG-BERT [5] may exhibit more robust performance under scenarios with limited labeled data.Additionally, models like NNPS [3], DSDP [4], and DRWBNCF [12], which utilize the biological profile of small molecules, provide valuable and associative information for downstream tasks such as drug repositioning and drug-drug interaction prediction.Each method has its unique strengths and weaknesses, tailored to different application contexts. The prevailing method for representing molecular topological information is through the use of molecular fingerprints [36].Among these, the extended-connectivity finger-prints (ECFPs) [24] stand out as the most widely adopted, renowned for their ability to accurately depict underlying chemical substructures.To incorporate molecular 3D information, Seth et al. proposed a spherical extended 3D fingerprint (E3FP) [37] as an extension of the circular ECFP.E3FP not only retains the advantages of 2D topological fingerprints but also encodes 3D information in a faster way.However, since E3FP relies on principles of feature engineering, its performance may not consistently meet expectations for all tasks. 
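As a point of reference for readers unfamiliar with ECFPs, the sketch below computes a Morgan fingerprint with RDKit, the common open-source analogue of an ECFP; the molecule (aspirin) and the parameters are arbitrary illustrative choices:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Morgan fingerprints in RDKit are the usual open-source analogue of ECFPs:
# radius 2 roughly corresponds to ECFP4. The SMILES below is an arbitrary
# example molecule (aspirin), not one taken from the paper.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
print(fp.GetNumOnBits(), "bits set out of", fp.GetNumBits())
```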
Subsequently, AI-based methods for extracting 3D conformational information from molecules have been developed.A notable example of these methods is the 3D equivariant neural network.Schnet [38] used successively filtered convolutional layers, enabling the model to obtain energy predictions that vary continuously with coordinates.DimeNet [39] introduced directional message passing, simultaneously considering the vector representation of the atom itself, interatomic distances, and bond angles, effectively leveraging directional information within the molecule.HMGNN [40] proposed a novel heterogeneous molecular graph representation that relied on interatomic distances and atomic numbers, featuring nodes and edges of different types to model many-body interactions.While 3D representations obtained through equivariant networks yield superior outcomes compared to molecular fingerprints, they suffer from several limitations.Principally, such representations are intricate and heavily reliant on precise conformational data, posing challenge in real-world scenarios where high-quality 3D data are often lacking.Fortunately, due to the success of pretrain and fine-tune pipelines [41,42], some recent works have successfully encoded meaningful 3D information for downstream tasks through pretraining [43][44][45].However, it is still challenging to convert this representation back and forth with the original 3D structure. The 3D representation of proteins serves as a valuable reference for effective 3D information representation of molecules.Before the Alphafold era [46], protein structure prediction often relied on predicting a 'Contact Map' [47][48][49], an image representation that encodes the distance between each amino acid residue in a protein into a binary value.Despite being 2D, this representation provides effective conformational constraints for protein folding algorithms and has therefore been extensively utilized in protein structure prediction.Essentially, the Contact Map can be viewed as a compressed representation of the 3D coordinate information of a protein.Similarly, exploring the geometric space of individual compounds holds comparable significance.Adopting methods akin to protein structure prediction, the use of Distance Maps for molecules as a concise representation, satisfying the E(3)-group, emerges as a viable option.This approach offers simplicity compared to complex network-based encodings mentioned above, with the practical advantage of being convertible into coordinates through the Distance Map.While prior research has explored this concept [50,51], the primary challenge lies in converting the Distance Map into coordinates, as this process requires the utilization of the Gram matrix [52]. 
Given that the Gram matrix serves as an intermediary variable for converting a Distance matrix into coordinates, this paper proposed using the Gram matrix directly as a molecular encoding for 3D conformation.Our approach offers several advantages compared to previous methods.Firstly, unlike methods employing equivariant networks for 3D representation, the Gram matrix is less complex and facilitates coordinate recovery via multidimensional scaling (MDS) [53], thus providing a more concise and systematic representation.Secondly, owing to the inherent properties of the Gram matrix, which is invariant to rotation and translation, similar to the Distance matrix, it exhibits greater robustness than coordinate-based representations and enhances compatibility with networks.Thirdly, our testing results indicate that the Gram matrix outperforms the Distance matrix as molecular representation in molecular prediction tasks, besides its capability to directly recover coordinates.Overall, the Gram matrix emerges as an excellent compressed representation of the 3D structure of molecules.In this study, considering the challenges in obtaining precise 3D structural information for molecules in real-world scenarios, we develop pretraining models to generate 3D structures via Gram matrix.This approach enables our model to derive 3D representations from stable 2D features, subsequently enhancing the predictive performance in downstream tasks. Our work makes the following contributions: (1) We propose for the first time that the Gram matrix can be utilized as a learning target for molecular pretraining, serving as a compressed representation of the 3D structure of molecules, and we demonstrate that the 3D structure of molecules can be swiftly reconstructed through the direct prediction of the Gram matrix.(2) We observed that using the Gram matrix as the target for supervision during the pretraining phase resulted in superior outcomes compared to using the Distance matrix, bond length, and bond angle as the targets for supervision.(3) We have developed a Graphormer-based model, referred to as pre-GTM, which utilizes the molecular representation from the Gram matrix-based pretraining stage.This model outperforms the benchmark Graphormer and other pretrained models in predicting quantitative properties on the Quantum Machines 9 (QM9) and MoleculeNet [54] datasets. Fundamentals of Gram matrix The Gram matrix of 3D Cartesian coordinates serves as a compact and dense representation of molecular spatial information in this study.Given a conformer with N atoms and corresponding origin-centered coordinate matrix X ∈ R N×3 , the Gram matrix G is defined as: where x i (x j ) refers to the coordinate of the i-th (j-th) atom. For comparison, we introduce another similar approach to incorporating molecular 3D information: the Distance matrix D, defined as: Combining Equations ( 1) and ( 2), G and D can be converted into each other: where D 0i (D 0j ) refers to the distance between the origin and the i-th (j-th) atom.It is interesting to observe from Equation ( 4) that G ij contains more information than D ij , specifically D 0i and D 0j . This can be considered one of the reasons why G is a better representation than D. 
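These relations, together with the E(3)-invariance and MDS reconstruction discussed later in the paper, can be verified with a few lines of NumPy; the toy conformer below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 3))                  # toy "conformer": 7 atoms in 3D
X -= X.mean(axis=0)                          # origin-centred coordinates

# Gram matrix (Equation (1) style): G_ij = <x_i, x_j>
G = X @ X.T

# Distance matrix from the coordinates and, equivalently, from G via
# D_ij^2 = G_ii + G_jj - 2 G_ij.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
D_from_G = np.sqrt(np.maximum(np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G, 0.0))
print(np.allclose(D, D_from_G))              # True

# Back again: G_ij = (D_0i^2 + D_0j^2 - D_ij^2) / 2, with D_0i the distance of
# atom i from the origin.
d0 = np.linalg.norm(X, axis=1)
print(np.allclose(G, (d0[:, None] ** 2 + d0[None, :] ** 2 - D ** 2) / 2))   # True

# G is E(3)-invariant: rotate, reflect and translate the coordinates,
# re-centre, and the Gram matrix does not change.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
X_moved = X @ Rz.T @ np.diag([1.0, 1.0, -1.0]) + np.array([5.0, -2.0, 0.3])
X_moved -= X_moved.mean(axis=0)
print(np.allclose(G, X_moved @ X_moved.T))   # True

# MDS step (previewing the next subsection): eigendecompose G, keep the three
# largest eigenvalues, and recover origin-centred coordinates.
w, Q = np.linalg.eigh(G)
idx = np.argsort(w)[::-1][:3]
X_rec = Q[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))
print(np.allclose(X_rec @ X_rec.T, G))       # True up to numerical precision
```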
The conversion from Gram matrix to bond angles Bond lengths and bond angles are two critical geometric parameters typically concerned during modeling [55,56].These parameters can also be directly derived from the Gram matrix.For the conversion to bond lengths, Equation ( 3) is sufficient, as bond length is a special case of atom distance.Equation ( 5) demonstrates how to convert G into cosine values of any bond angles existing in the molecule: where ijk denotes the degree of the bond angle connecting bonds (i, j) and (j, k). Since G is a compact representation of a 3D conformer, an important question arises: how do we reconstruct the corresponding conformation from a true or predicted G? A strategy named MDS can be employed to address this issue.A classical MDS method takes the Gram matrix as input and outputs the coordinates of the items that fulfill the constraint given by the input matrix; the procedure of MDS is illustrated in Fig. 1. Specifically, for conformation reconstruction from G, eigen decomposition is utilized.As shown in Equation (6), G is decomposed into the eigenvector matrix Q and eigenvalue matrix : For true G that corresponds to a set of 3D coordinates, it can be proven that G only has three positive eigenvalues λ 1 , λ 2, and λ 3 with others remaining to be zeros.Thus, the k-th coordinate of atom i (k = 1, 2 or 3) is given by: Conversely, for predicted G with inherent noise, the three largest eigenvalues and corresponding eigenvectors are selected to reconstruct the molecular conformation according to Equation (7). Model architecture As demonstrated in Fig. 2, to illustrate the practical value of the Gram matrix, we propose a two-step procedure termed Pretraining Graph Transformers (Pre-GTM).Pre-GTM comprises the following steps: (i) Pre-training Stage: This stage involves supervised pretraining with the Gram matrix (optionally together with bond lengths and angles) on an unlabeled training set with known geometry.Atom and bond features, as shown in Table 1, are explicitly incorporated into modeling on tasks without 3D information.(ii) Property Prediction Stage: In this stage, a molecular prediction model is trained on the labeled dataset, and predictions are made on the test set without known geometry.The 3D representations derived from the pretrained model are frozen and concatenated into the downstream model [57]. Pretraining stage In the pretraining stage, a Graphormer [9] model M a is employed to predict the Gram matrix on the pretraining dataset, as illustrated in Fig. 2A.To enhance the learning of representations related to 3D conformations, we also introduced bond length and bond angle prediction as auxiliary tasks.The input molecule is represented as an undirected graph G = (V, E) where the node set V corresponds to atoms and edge set E corresponds to chemical bonds, and then fed to the Graphormer model, a kind of graph neural network. During the modeling with Graphormer, each atom u ∈ V is initialized with a state vector h 0 u , The central encoding z deg(u) is computed based on deg(u) (the degree of atom u).The spatial encoding b φ(u,v) is computed based on φ ( u, v) (the shortest path between atom u and v).Edge encoding c ij is calculated using the topology of the graph. 
The provided encodings for pretraining in Graphormer involve several key steps.First, the atomic initial state vector h 0 u and the central code z deg(u) are added, and the resulting values are input into the Graphormer model to obtain the new atomic initial state vector h 0 u .Subsequently,h 0 u is normalized, and multihead attention is calculated to obtain the attention A ij between atomic pairs.This attention is then added to the spatial encoding b φ(u,v) and the edge encoding c ij , replacing the previous attention to form a new attention, also denoted as A ij .The new attention A ij andh 0 u are residually connected, and the result is passed through a feedforward network block.This process is repeated until the specified number of network layers N is reached, followed by a fully connected layer, ultimately yielding the final state vector h u of the atom.To directly predict G uv between any two nodes u ∈ V and v ∈ V, we simply calculated the inner product between them: where • denotes inner product.In order to improve the generalization of the model, we adopt a methodology from previous studies [58] by incorporating noise to differentiate between identical chemical environments.During the training phase, we add noise that obeys Gaussian distribution to the initial state vector h 0 u for all atoms.During the testing phase, the same amount of noise added during the training phase can be applied. where μ and σ are the mean and variance of the Gaussian distribution, respectively. Figure 1.This graph provides a detailed demonstration of the MDS and the conversion between the Gram matrix G and distance matrix D using real values.The procedure involves the following steps: 1 obtaining the Gram matrix from origin-centered coordinates results in a unique G, regardless of the rotation of the conformation. 2Decomposing G into the eigenvectors and eigenvalues. 3Restoring coordinates by selecting the three largest eigenvalues and their corresponding eigenvectors. 4and 5 illustrate the interconversion between the G and D. Besides, we demonstrate the four combinations of our supervised objects during pretraining.The predicted G is further used to predict the length of any bond l uv and bond angle uvw where (u, v) ∈ E and (v, w) ∈ E using Equations ( 3) and ( 5), respectively.The unit of bond length l uv and bond angle uvw is Å and rad, respectively.According to [43], the bond length and bond angle can also be predicted by the concatenation of h u and h v : The D and bond length are considered compatible as supervised targets.Abbreviations: G, Gram matrix; D, distance matrix.uvw = MLP CONCAT h u , h v , h w (11) where MLP refers to the multiple layer perceptron.The test results indicated that this method of constructing auxiliary tasks outperforms the method using Equations (3) and ( 5), as it is simpler and easy to converge.In has been introduced that the Gram matrix can be directly transformed into the molecular coordinates through eigenvalue decomposition using Equation ( 6), which can be regarded as a conformation prediction model.Thus, we also calculated the root mean-squared deviation (RMSD) between the generated and true conformations in the test set to evaluate the performance of the minimum energy conformation prediction. 
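A minimal PyTorch sketch of the prediction heads and a Pre-GTM_d-style objective follows; the hidden size, MLP depths, noise scale, and loss weights are placeholders, and the Graphormer encoder producing the embeddings is not reproduced:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes; random tensors stand in for the Graphormer outputs h_u.
hidden, n_atoms = 64, 12
h = torch.randn(n_atoms, hidden)                    # final atom embeddings h_u

# Gaussian noise added to the initial atom states (Eq. (9)); sigma is assumed.
noise = torch.randn(n_atoms, hidden) * 0.1

# Equation-(8)-style head: predict G_uv as the inner product of embeddings.
G_pred = h @ h.t()

# Auxiliary heads: bond length from [h_u ; h_v], bond angle from [h_u ; h_v ; h_w].
length_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
angle_head = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
u, v, w = 0, 1, 2                                   # an arbitrary bonded triple
l_uv = length_head(torch.cat([h[u], h[v]]))         # predicted bond length (Å)
a_uvw = angle_head(torch.cat([h[u], h[v], h[w]]))   # predicted bond angle (rad)

# A Pre-GTM_d-style pretraining objective: supervise G directly plus bond-length
# and bond-angle terms. MSE and unit weights are assumptions, not quoted values.
G_true = torch.randn(n_atoms, n_atoms)
l_true, a_true = torch.randn(1), torch.randn(1)
loss = F.mse_loss(G_pred, G_true) + F.mse_loss(l_uv, l_true) + F.mse_loss(a_uvw, a_true)
print(loss.item())
```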
Property prediction stage In the property prediction stage, a new Graphormer model, denoted as M b , is built from scratch.It utilizes the atom embeddings acquired in the preceding stage to predict downstream molecular properties on a dataset without known geometry.This process is depicted in Fig. 2B. To explicitly incorporate the geometric knowledge learned by M a into the modeling process of M b , the final atom embedding h u for atom u ∈ V, derived from the trained M a , remains fixed.It is then concatenated with the h u generated by M b , to compute the super node embedding h s of the molecule: Lastly, the super node embedding h s is passed into a fully connected layer to predict downstream tasks as a typical Graphormer does. Loss function According to Equation ( 4), the Gram matrix G ij can be decomposed into two components: interatomic distances D ij and distances between atoms and the origin D 0i (D 0j ).Therefore, we propose three model settings: 1 Directly supervising the Distance matrix (Pre-GTM a ). 2 Decomposing G into the two components and supervising them separately (Pre-GTM b ). 3 Directly supervising G (Pre-GTM c ).The corresponding loss functions for the three models are shown in Equations ( 13)- (15), respectively.To incorporate global information, we use a super atom in Graphormer to represent the origin (0) in Equation ( 4).As a result, D 0i can be obtained through a fully connected layer after concatenating the embedding vector of the super atom and the embedding vector of node i. To explore whether the performance of the model can be further improved, we introduce two important geometric parameters, bond length and bond angle, on top of the supervised task in Pre-GTM c and establish a new model, Pre-GTM d , with a corresponding loss function shown in Equation ( 16).Table 2 further details which supervision tasks correspond to each of the four models. where l ij denotes the length of the bond connecting atoms i and j, with ε as the set of bonds; ijk denotes the degree of the bond angle connecting bonds (i, j) and (j, k), with A as set of angles. Baseline models We primarily compare the performance of Pre-GTM with other baseline models in two application scenarios: molecular conformation generation and molecular property prediction.Different computational models are selected as baseline comparison models for each of these scenarios. To assess molecular conformation prediction, the evaluation metric most relevant to drug design scenarios involves examining the similarity between predicted conformations and the ligand-binding conformations in protein-ligand cocrystal structures.Motivated by these considerations, we reference the work of Hou et al. [59] and compare the performance of our model with other conformation generation models based on the platinum diversity benchmark.The methods under comparison include traditional conformation prediction approaches (ConfGenX [60], Conformator [61], OMEGA [62], and RDKit) as well as six AIbased conformation prediction methods (ConfGF [63], DMCG [64], GeoDiff [65], GeoMol [58], torsional diffusion [66], and Uni-mol).Except for Uni-mol (a recently proposed universal 3D molecular representation learning framework), the metrics for other methods are directly taken from the original study by Hou et al. 
In the context of molecular property prediction, we specifically compare Pre-GTM with other methods, particularly those that share similar application scenarios.These models are pretrained using molecular 3D conformation information and subsequently fine-tuned on downstream task datasets lacking conformation information through transfer learning.Among them, D-MPNN [34] and Graphormer [9] are widely used Graph neural network (GNN) architectures, and these models were trained from scratch.Hu et al. [67], N-Gram [68], and MolCLR [69] are pre-training methods, where molecular representations are generated in an unsupervised manner.DisPred predicts the distance between all atoms of the conformation with the highest probability (i.e. the lowest energy conformation).ConfGen is pretrained by generating up to 10 conformations.GraphCL is a traditional pretraining method based on data augmentation, requiring the model to learn to produce representations that are invariant to the augmentation of the data in a self-supervised manner.3D Infomax [10] enforces the representation provided by a GNN model to incorporate latent 3D information by maximizing the mutual information between the GNN representation and 3D summary vectors.TransFoxMol incorporates a multiscale 2D molecular environment into a graph neural network + Transformer module and uses prior chemical maps to obtain a more focused attention landscape.Except for Graphormer and TransFoxMol, the metrics for other methods are directly taken from the original study of 3D Infomax. Metrics We employed multiple evaluation metrics model comparison.These metrics encompassed the accuracy of predicting the Gram matrix, molecular conformation, and the quantitative properties across the QM9, GEOM-DRUGS, and MoleculeNet datasets. We used R 2 , RMSE (root mean-squared error), and MAE (mean absolute error) to evaluate the performance of the model in predicting the Gram matrix.Additionally, the MAE metric was used to assess the model's prediction of quantitative properties: where y i represents the real value, y i represents the mean value, and ŷi represents the predicted value. The RMSD, a standard measure of the difference between two molecular structures, was employed to evaluate the quality of the conformations generated by the model: where N represents the number of heavy atoms, φ is the function used for aligning two conformations by rotation and translation, and R i and Ri denote the coordinates of the true and the generated conformation, respectively.The COV (coverage) and MAT (matching) metrics were utilized to quantify the quality of conformations [59].The metrics are defined as: where S g and S r are generated and reference molecular conformation ensembles for molecular G, respectively.δ is a given RMSD threshold.COV assesses the diversity and detects the model-collapse phenomenon, while MAT measures the closeness between the generated and reference conformations.In our study, we limited the conformation ensembles size to 1 in Equations ( 21) and ( 22), as we only use them to assess the minimum energy conformation prediction. 
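The RMSD, COV, and MAT metrics can be sketched as follows; the alignment function φ is omitted (inputs are assumed pre-aligned) and the RMSD threshold δ is a placeholder rather than a value quoted from this paper:

```python
import numpy as np

def rmsd(R_gen, R_ref):
    """RMSD between two conformations (Equation (20)); the alignment function
    phi is omitted for brevity, so the inputs are assumed to be pre-aligned."""
    return float(np.sqrt(np.mean(np.sum((R_gen - R_ref) ** 2, axis=1))))

def cov_and_mat(gen_set, ref_set, delta=1.25):
    """COV: fraction of reference conformers matched within the RMSD threshold
    delta; MAT: mean over the references of the best (minimum) RMSD achieved.
    delta = 1.25 Å is a placeholder, not a value quoted from this paper."""
    best = [min(rmsd(g, r) for g in gen_set) for r in ref_set]
    cov = float(np.mean([b <= delta for b in best]))
    mat = float(np.mean(best))
    return cov, mat

# Toy usage with a single reference and a single generated conformer, matching
# the ensemble size of 1 used in this study.
rng = np.random.default_rng(2)
ref = [rng.normal(size=(5, 3))]
gen = [ref[0] + rng.normal(scale=0.05, size=(5, 3))]
print(cov_and_mat(gen, ref))
```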
Complexity In this section, we conduct an analysis of the algorithmic complexity of Pre-GTM. Suppose N is the number of atoms, k is the number of input features, H is Pre-GTM's hidden size, and L is the number of layers; the complexity of the embedding layer is O((N + k)H), the complexity of the attention layers is O(LH²), and the complexity of the feedforward layer is O(N²H). Then, the overall complexity of Pre-GTM is O(LH² + (N² + k)H). Geometric Ensemble Of Molecules datasets To thoroughly explore the representation capabilities of the Gram matrix in various scenarios, our study employs two categories of molecules for pretraining, namely, small-sized molecules from the QM9 dataset [70] and drug-like molecules with a higher number of heavy atoms. All molecular data are sourced from the Geometric Ensemble Of Molecules (GEOM) dataset [71]. This dataset comprises high-quality conformers for 133 258 molecules from the QM9 dataset. Additionally, it includes 304 466 drug-like species and their biological assay results, collectively known as the GEOM-DRUGS dataset. These datasets were accessed as part of AICures (https://www.aicures.mit.edu). Table 3 provides summary statistics of the molecules constituting the dataset. The drug-like molecules from AICures are typically medium-sized organic compounds, with an average of 44.4 atoms (24.9 heavy atoms) and a maximum of 181 atoms (91 heavy atoms). These molecules exhibit significant variability, as evidenced by the mean (6.5) and maximum (53) number of rotatable bonds. In contrast, the QM9 dataset is constrained to 9 heavy atoms (C, O, N, and F) and 29 atoms in total.
Table 3. Dataset details: number of (heavy) atoms and rotatable bonds.
QM9 was utilized to compare the property prediction performance of different molecular representation methods. Quantum mechanical properties and spatial information (the lowest energy conformation) were computed using the density-functional theory (DFT) method. Quantitative properties and spatial information were directly obtained from MoleculeNet. To ensure fairness in model evaluation, the entire QM9 and GEOM-DRUGS datasets were partitioned into training, validation, and test sets at a ratio of 8:1:1. Moreover, to prevent data leakage in the downstream task, we removed duplicate molecules from the GEOM-DRUGS dataset that were identical to those in the downstream tasks. MoleculeNet datasets In this study, two molecular regression datasets and seven molecular classification datasets from the MoleculeNet dataset were chosen as benchmark datasets. Details regarding these datasets are provided in Table 4. They cover various fields including physical chemistry, physiology, and biophysics, as outlined in Table 4.
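Returning to the dataset preparation described above, a minimal sketch of the 8:1:1 split combined with the leakage filtering might look as follows; matching molecules by RDKit canonical SMILES and the fixed shuffle seed are assumptions, since the text does not state how duplicate molecules were identified:

import random
from rdkit import Chem

def canonical(smiles):
    # canonicalize a SMILES string so identical molecules compare equal
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def split_pretraining_set(pretrain_smiles, downstream_smiles, seed=0):
    downstream = {canonical(s) for s in downstream_smiles}
    # drop pretraining molecules that also occur in a downstream benchmark
    kept = [s for s in pretrain_smiles if canonical(s) not in downstream]
    random.Random(seed).shuffle(kept)
    n = len(kept)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return kept[:n_train], kept[n_train:n_train + n_val], kept[n_train + n_val:]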
ESOL [72] is a standard dataset containing water solubility data for common organic small molecules. It is extensively employed in the development of deep learning-based models for predicting water solubility. The Lipo (Lipophilicity) dataset was sourced from the ChEMBL database and includes experimental results for the octanol/water partition coefficient (logP), a commonly used measure of a molecule's lipophilicity. The human immunodeficiency virus (HIV) database includes over 40 000 molecules that have been experimentally assessed for their ability to inhibit HIV replication. The BACE database provides binding results for inhibitors of human β-secretase 1 (BACE-1). The Blood-brain Barrier Penetration (BBBP) [73] dataset contains binary labels on the permeability of the blood-brain barrier. The TOX21 dataset comprises toxicity testing data for compounds against 12 distinct targets, including nuclear receptors and cell signaling pathways. ToxCast serves as a repository of toxicological data for thousands of molecules, providing numerous toxicity annotations for a wide range of chemicals through high-throughput screening experiments. Additionally, the Side Effect Resource (SIDER) is a collection of marketed drugs and adverse drug reactions (ADRs). ClinTox [74] is a database of U.S. Food and Drug Administration (FDA) approved drugs and drugs that failed clinical trials due to toxicity. For property prediction tasks on these datasets, we adhere to the recommended scaffold splitting methods, which have been shown to be more practically useful [54]. G is an E(3)-invariant representation of three-dimensional coordinates In this section, we discuss the invariance properties of the Gram matrix and the outcomes of converting the true Gram matrix to atom coordinates. Given the geometric nature of 3D molecules, it is often desirable for a method encoding spatial information to be equivariant or invariant with respect to the E(3) group, encompassing rotation, translation, and reflection (inversion and mirroring). E(3)-invariance implies that the spatial encoding remains unchanged under these transformations or any finite combination thereof [75]. Clearly, the Gram matrix is E(3)-invariant. According to Equation (1), the Gram matrix represents the inner product of origin-centered coordinates, which inherently remains constant under the aforementioned transformations. This renders the Gram matrix an optimal means of encoding molecular 3D coordinates. Subsequently, we utilize all samples from QM9 to verify that the molecular conformation can effectively be reconstructed from the true Gram matrix through eigen decomposition. We calculate the RMSD between the conformation reconstructed from the actual G of each molecule and its corresponding actual conformation, yielding an average value of 1.603 × 10⁻⁸ Å. MDS demonstrates proficiency in reconstructing coordinates from an accurate Gram matrix and exhibits a degree of resilience to noise within the Gram matrix. To affirm this, we introduce noise to the Gram matrix with varying variances but consistent means, as depicted in Fig. 3.
As observed in the first row of Fig. 3, MDS is capable of accommodating a Gram matrix with a certain degree of noise when the variance of the noise is relatively small. Theoretically, after the eigen decomposition, MDS only considers the three largest eigenvalues and their corresponding eigenvectors according to Equation (7). For a true Gram matrix, all eigenvalues except the three largest ones are zero. However, with a noisy Gram matrix, multiple nonzero eigenvalues may emerge, potentially altering the order and size of the three largest eigenvalues, thereby leading to inaccuracies in the resulting coordinates. Learning G is a good proxy task for three-dimensional conformation generation After demonstrating the accuracy of coordinate transformation using the precise Gram matrix, in this section, we further discuss the predictability of the Gram matrix on the QM9 and GEOM-DRUGS datasets. To enhance the model's ability to predict molecular conformations, we have incorporated a variety of auxiliary tasks. Consequently, we assessed different combinations of these tasks and developed four distinct models, as outlined in Table 2. The precise combinations of auxiliary tasks allocated to each model are thoroughly elucidated in the Materials and Methods section. Table 5 presents the performance of these four models in predicting G. Additionally, we include RDKit [76,77] as a baseline for comparison. This study draws four notable conclusions. (1) Directly supervising the Gram matrix G through Pre-GTM c yields precise predictions of the Gram matrix of molecular coordinates, with an R² value of 0.961 on the QM9 dataset and 0.74 on the GEOM-DRUGS dataset. This indicates that the graph neural network can accurately predict the Gram matrix of molecular coordinates. (2) Compared with supervising the Distance matrix (Pre-GTM a) or separately supervising D ij, D 0i, and D 0j (Pre-GTM b), the model directly supervised with the Gram matrix (Pre-GTM c) performs better. This aligns with intuition, as the direct prediction of the Gram matrix entails less computational complexity than first predicting the Distance matrix and subsequently utilizing Equations (3) and (4) to derive the Gram matrix. (3) Pre-GTM d, integrating auxiliary tasks such as bond length and bond angle into Pre-GTM c, demonstrates a substantial enhancement in the precision of predicting G. The MAE associated with the prediction outcomes decreased from 0.344 to 0.242 on the QM9 dataset and from 1.722 to 1.619 on the GEOM-DRUGS dataset. This underscores the significance of bond length and bond angle, crucial geometric parameters, in facilitating conformation prediction. (4) Models Pre-GTM c and Pre-GTM d demonstrated significant improvement compared to the RDKit baseline, while models Pre-GTM a and Pre-GTM b did not exhibit advantages. Aside from the Gram matrix G, Table 5 presents metrics for the Distance matrix, bond lengths, bond angles, and molecular conformations. While the four Pre-GTM models have different prediction targets, these targets (G, D, bond length, and bond angle) can be interconverted. Consequently, we assessed the prediction error for all targets across each model setting. To more clearly represent model performance, we also provide the RMSD of the predicted molecular structures for each model. It is evident that Pre-GTM d demonstrates superior performance across all prediction tasks. In conclusion, appropriate auxiliary tasks (bond lengths and bond angles) and learning objectives (Gram matrix) are crucial elements in ensuring the predictive accuracy of the model.
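Returning to the MDS reconstruction discussed at the beginning of this section, the procedure of keeping only the three largest eigenvalues of a (possibly noisy) Gram matrix, as in Equation (7), can be mirrored in a few lines of NumPy; the toy coordinates, the noise level and the clipping of small negative eigenvalues are illustrative assumptions:

import numpy as np

def coords_from_gram(G):
    # Recover (N, 3) origin-centred coordinates from an (N, N) Gram matrix by
    # keeping the three largest eigenvalues and their eigenvectors.
    G = 0.5 * (G + G.T)                        # symmetrise
    eigval, eigvec = np.linalg.eigh(G)         # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:3]         # indices of the three largest
    lam = np.clip(eigval[idx], 0.0, None)      # noise can push them slightly negative
    return eigvec[:, idx] * np.sqrt(lam)

# Toy check: an exact G reproduces the coordinates up to rotation/reflection,
# while a noisy G yields a degraded reconstruction (cf. Fig. 3).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
X -= X.mean(axis=0)
G_true = X @ X.T
X_clean = coords_from_gram(G_true)
X_noisy = coords_from_gram(G_true + rng.normal(scale=0.1, size=G_true.shape))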
To compare the effectiveness of conformation prediction models, we evaluated the performance of our Pre-GTM d model alongside other conformation prediction methods using a test dataset of 3354 high-quality ligand bioactive conformations [59]; the results are summarized in Table 6. Here we set the maximum ensemble size to 1 for comparison, as our model only utilized the lowest energy conformation for training. As we can see, Pre-GTM d performed best on the COV metric among the AI models, but did not perform as well as the traditional methods, coming close to Conformator. In terms of the MAT metric, Pre-GTM d slightly underperformed GeoMol and torsional diffusion, mainly due to higher prediction errors for larger molecules. We provide six instances of employing Pre-GTM d for predicting the minimum energy conformation of molecules in the GEOM-DRUGS dataset (Fig. 4). It is apparent that the ground truth and the model-generated conformations exhibit close alignment. In addition to assessing Pre-GTM's predictive capability on the Gram matrix within the QM9 and GEOM-DRUGS datasets, we investigated the impact of atom number and the number of rotatable bonds in a molecule on the accuracy of conformational prediction. This was achieved by evaluating the RMSD between the conformation reconstructed from the predicted Gram matrix and the true conformation of the molecule. Figure 5 illustrates the impact of atom number (Fig. 5A) and rotatable bonds (Fig. 5B) on RMSD. The red line represents the distribution of atom number (Fig. 5A) and rotatable bonds (Fig. 5B) on the test split of the GEOM-DRUGS dataset (groups with fewer than 100 samples are not displayed), while the blue boxes illustrate their effect on RMSD. As demonstrated in Fig. 5A, with the increase in atom count from 12 to 38, the model's capacity to predict molecular conformation gradually diminishes, indicating that larger molecules present greater difficulty for prediction. Additionally, Fig. 5B illustrates that with the increase in the number of rotatable bonds from 0 to 11, the model's predictive capability for molecular conformation gradually diminishes. This suggests that molecules with greater flexibility pose a greater challenge for prediction. Graph neural network pretrained with G improves molecular representation learning The significance of pretraining via Gram matrix prediction was further underscored by evaluating the performance of Pre-GTM in downstream tasks, encompassing eight tasks within the QM9 dataset and property prediction tasks across nine datasets in MoleculeNet. Quantum Machines 9 downstream tasks We evaluated Pre-GTM's performance in predicting the eight tasks of QM9. The comparative outcomes are summarized in Table 7. The Pre-GTM model showed superior performance compared to models like Graphormer and D-MPNN trained from scratch, providing strong evidence for its effectiveness. Additionally, Pre-GTM outperformed self-supervised learning models such as N-Gram, highlighting the importance of using the Gram matrix for pretraining and leveraging 3D geometric information in subsequent tasks. Moreover, Pre-GTM d outperformed Pre-GTM c, indicating that an enhanced 3D representation leads to better property prediction.
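As a side note on the Fig. 5B analysis above, a breakdown of per-molecule RMSD by rotatable-bond count can be reproduced with a short script such as the one below; the input format and function names are assumptions made for illustration, with RDKit's rotatable-bond descriptor standing in for whatever counting convention the study used:

from collections import defaultdict
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def rmsd_by_rotatable_bonds(smiles_list, rmsd_list, min_group_size=100):
    # Bucket per-molecule RMSD values by rotatable-bond count and drop sparse groups,
    # mirroring the "fewer than 100 samples excluded" rule described in the text.
    groups = defaultdict(list)
    for smi, rmsd in zip(smiles_list, rmsd_list):
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue
        groups[rdMolDescriptors.CalcNumRotatableBonds(mol)].append(rmsd)
    return {k: v for k, v in sorted(groups.items()) if len(v) >= min_group_size}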
MoleculeNet downstream tasks We investigated the performance of the Pre-GTM d model on property prediction tasks using the MoleculeNet dataset, which consists of drug-like molecules. The results of the comparison between Pre-GTM d and the benchmark models are presented in Table 8. Pre-GTM d significantly outperforms all other models, including the randomized control, on approximately half of the downstream tasks. This finding suggests that the 3D characterization learned through the use of the Gram matrix is beneficial for predicting the properties of drug-like molecules. In Table 8, the best performance is marked in bold. All models were run five times with different random seeds. A two-sided t-test was applied between the models, and the exact P-values are given in the source data (note: *P ≤ .01). Conclusion In this study, we introduce an innovative integration of Graphormer with the Gram matrix, enabling the generation of 3D molecular representations entirely from 2D structure. Furthermore, by utilizing the learned precise 3D representations on the QM9 and MoleculeNet datasets, we enhance the performance of quantitative property prediction tasks. Despite the success demonstrated by Pre-GTM, there are still various opportunities for improvement across multiple areas, including but not limited to pretraining with more extensive datasets. Our current pretraining approach relies on the QM9 and GEOM-DRUGS datasets (which provide 3D coordinates) and is limited by their small data size, which hinders the model's generalization ability for predicting 3D representations. To overcome this limitation, it is crucial to explore datasets with more samples and larger molecular sizes, such as PCQM4Mv2 [78]. Furthermore, the complexity grows quadratically with the number of heavy atoms N and the hidden size H, affecting the model's efficiency and performance. This issue could be addressed by implementing local predictions or by switching to a framework simpler than Graphormer. Lastly, there is potential to adapt the model architecture and information integration for the prediction of molecular conformational distributions or the generation of conformations. These enhancements could further advance the model's capabilities in molecular property prediction tasks. Key Points
• We propose a graph transformer model, termed Pre-GTM, to predict the 3D structure and properties of drug-like molecules.
• Supervising the Gram matrix (an E(3)-invariant representation) is a good way to acquire high-quality 3D representations.
• The 3D representations learned in the pretraining stage enhance molecular property prediction.
• We illustrate the advanced performance of Pre-GTM on drug-like datasets compared to other supervised methods for 3D structure and property inference.
Figure 3. To a certain degree, noise can be tolerated when reconstructing coordinates via MDS. Conformations obtained from G, or from G with added noise, via MDS are aligned with the true conformations. σ is the standard deviation of the Gaussian noise. Abbreviations: MDS, multidimensional scaling; G, Gram matrix.
Figure 4. Conformation prediction instances from the GEOM-DRUGS dataset demonstrate that the conformations returned by the model (Pre-GTM d) closely align with the true conformations.
Figure 5. (A) The correlation between atom number and RMSD. (B) The association between the number of rotatable bonds in a molecule and RMSD. Box-and-whisker plots show the median (center line) and the 25th and 75th percentiles (lower and upper boundary), with the 1.5× inter-quartile range indicated by whiskers and outliers shown as individual data points. Groups with fewer than 100 samples were excluded from the analysis. Abbreviations: RMSD, root mean-squared deviation.
Table 1. Input features of our model.
Table 2. Different combinations of supervised tasks.
Table 4. Dataset details: number of compounds and tasks, splits, and metrics.
Table 5. The results of predicting G, D, bond length, and bond angle using different combinations of supervised tasks on the QM9 and GEOM-DRUGS datasets. The best performance is marked in bold. Abbreviations: G, Gram matrix; D, distance matrix; MAE, mean absolute error; RMSD, root mean-squared deviation.
Table 6. Quality of generated conformers in terms of mean COV (%). RMSD threshold δ = 2.00 Å. Maximum ensemble size is set to 1.
Table 7. Results of property prediction on the QM9 dataset (MAE). Pre-GTM c denotes the utilization of the Gram matrix loss during pretraining, while Pre-GTM d signifies the utilization of the Gram matrix, bond length, and bond angle losses during pretraining.
Table 8. Results of property prediction on the MoleculeNet dataset.
‘The good economy’: a conceptual and empirical move for investigating how economies and versions of the good are entangled Across Europe and the OECD, the bioeconomy is promoted as that which will succeed the carbon economy: an economy based in ‘the bio’ that will be innovative, sustainable, responsible and environmentally friendly. Yet how to critically approach an economy justified not only by its accumulative potentials but also by its ability to do and be good? This paper suggests the concept of ‘the good economy’ as an analytical tool for investigating how economic practice is entangled in versions of the good. Building upon the classic contributions of Weber, Thompson and Foucault in combination with valuation studies, this paper shows how the good economy concept can be employed to examine how the economic and the good are intertwined. Empirically, the paper teases out how what is made to be good in the bioeconomy is radically different from that in economies of the recent past. While ‘the good economy’ of the early oil and aquaculture economy concerned how to insert this economy into society in a good manner, society is surprisingly absent in the contemporary bioeconomy. The bioeconomy is enacted as an expert issue, pursued by the tools of economic valuation, and based in the unquestioned idea that ‘the bio’ makes any economy good. Introduction What shall we live off in the future? What will provide jobs and food, and create new markets and opportunities? What will come after the carbon economy? What will follow the fossil era? In the last ten to fifteen years, key institutions such as the OECD and the EU, as well as individual nation states, have proposed and promoted answers to these questions. They all point in a similar direction: after the fossil economy, another economy must come-a bioeconomy. In what is being proposed as the new bioeconomy, the 'bio' is being used as a shorthand for various ways in which biological materials are being enrolled in economic practices and made into the basis for the economy. As this paper will show, the various policy strategies enact the new bioeconomy in different ways, yet they all seem to share a strong normative basis: a vision of a shift to a new economy that is innovative, sustainable, responsible and environmentally friendly-in short, novel, ethical and good. So has the economy, finally, become good? Is this, finally, an economy that will be what it now promises: ethical and environmentally friendly? Is this the economy which will enable combining growth in quantity with a growth that is also qualitatively good? Answering whether the economy has finally become good is not the objective of this paper. Our aim is rather to investigate more carefully what 'the good' of the bioeconomy purports to consist of and how it contrasts with previous versions of 'the good' in the economy. A key contribution of this paper is the concept of 'the good economy', which is developed as an analytical tool for investigating relations of the economy and 'the good'. This is prompted by an analysis of the emerging bioeconomy which, we argue, amounts to a reconfiguration of what a good economy purports to consist of. The analysis implies 'troubling' the good and pursuing a critical reading of the relations between 'the bio' and 'the economy' of the bioeconomy. By doing this, we simultaneously take the opportunity to explore how other versions of the economy have also been concerned with 'the good', but in other ways and formations.
The paper's research strategy is to do a critical analysis of the economy 'by other means'-an analysis that has three important elements to it. First, rather than critiquing how the bioeconomy is simply another turn in a capitalist logic, a new version of an already too familiar neoliberal economy, the empirical analysis of our paper aims at demonstrating how the economy can be otherwise and in fact relatively recently was otherwise-not the least when it comes to its good-economy relations. Second, and related to this, rather than taking at face value that the new bioeconomy is a normatively better (or worse) one than the economies it seeks to replace, we argue that questions of 'the good' have in fact always been entangled with the economy-but in highly different ways. The concept of 'the good economy' invites analyses of such relations. Third, and most importantly, 'the good economy' concept enables us to grasp that there are more relations at stake in the bioeconomy than bioeconomy relations alone. Also economy-society relations are at stake, our analysis suggests. The bioeconomy is not simply about exploiting new resources or developing new tools of valuation, but, through these, about redefining the relations between the economy, society, politics and the 'bio'. Using the Norwegian case as our example, the paper demonstrates that whereas 'the good economy' of the early oil and aquaculture economy concerned how to insert this economy into society in a good manner, society is surprisingly absent in the contemporary bioeconomy. Rather, the bioeconomy is enacted as an expert issue, pursued by the tools of economic valuation and based on the unquestioned idea that 'the bio' makes any economy good. So, rather than seeing the new bioeconomy as a good economy, and its predecessors as essentially bad, we approach the economy with new lenses. Perhaps the current policy visions for a bioeconomy are an even more problematic and troubling way of doing the good economy than what we can detect and analyse in earlier versions of the economy? That is what this paper asks. No matter the answer, we argue that understanding these enactments and struggles and how they configure the economy differently is vital if we are to understand, but also eventually intervene in and change, economic configurations and practices. Theoretical resources and approaches to 'the good economy' In developing the notion of 'the good economy', we draw from a combination of the emerging field of valuation studies and social studies of markets, critical bioeconomy studies and Foucauldian governmentality studies, as well as now classical works on early capitalism in history and sociology. In contrast to the latter classical works, which sought to analyse the concrete and distinct transition to a capitalist economy in a given historical period, the notion we suggest is meant to analytically address the fact that economies are very often also about good economies. In putting this upfront as a heuristic tool, we do not only address the explicit or implicit ethical and normative dimensions of the economy. We seek to address more broadly the versions of economy that good economy relations enact-including how these versions of economies do valuations-and by what means-what we here label 'tools of valuation'. The paper argues that we need to critically engage with and scrutinize these sometimes troubling-and not always successful-versions of the good.
Yet we do not put this forward simply as a critical tool, but also as a tool that may enrich and open up empirical analyses of the economy for the struggles and the versions of economy at stake. Hence, our concern is with the tools of valuation, but also more broadly with how the economy 'adds up' to distinct versions of 'the good economy'. This article proceeds as follows: We end this introductory section with a paragraph on our empirical materials and methods. We then proceed to develop our analytical framework. First, we present the classic contributions of Max Weber and E. P. Thompson, who conceptualized the 'protestant ethic' and 'moral economy', respectively, in order to understand the emergence and effects of capitalism. We then discuss these concepts in relation to the works of Michel Foucault, thereby distinguishing what Foucault argued was a distinct dimension of the state-market formations of the neoliberal economy. From this expanded understanding of economy-normativity relations, we build the concept of 'the good economy' as a heuristic and analytical tool for investigating the ongoing discussions of the bioeconomy. This leads us to the article's empirical section, in which we employ and further develop our analytical framework within four key versions of economies: first, the obvious choice of the ongoing bioeconomy policy initiatives of the EU, the OECD and national governments in Europe; second, the related-but diverging-version of 'the blue bioeconomy', which is based upon ocean industries and marine resources; third, the establishment of aquaculture as an industry in Norway in the 1970s; and fourth, the establishment of a petroleum industry in Norway, also in the 1970s. After this empirical exposé, we draw together the different versions of the economy and conclude by discussing how the concept of 'the good economy' may better help us make sense of co-existing forms of bioeconomies and their diverging modes of enacting the good-and ultimately, how this also carries implications not only for bioeconomy relations but also for biosociety relations. The empirical material which forms the basis of this article is in-depth analysis of policy-related documents from the OECD, the EU and Norway. This entails a broad mapping of relevant documents within the bioeconomy field, a close analysis of a selection of these documents and the contrasting of these documents with historical examples from Norway. These historical contrasts serve to cast light on the contemporary in that they enable us to better see and grasp what we too easily take for granted. They also sensitize us to how the different economies enact economy-society relations, bioeconomy relations, values and normativities differently. The analyses of the early aquaculture economy and the petroleum economy are teased out as versions of 'the good economy' in their own right, thus also contributing to the paper's overarching ambition of developing versions of economies and their 'good economy' formations. This builds on a notion of 'versions of economics' or of economies developed in Asdal (2014a, b), which highlights that we are not approaching these versions as different perspectives only on one and the same reality. Instead, they are seen as realities differently enacted in various settings, where different versions can form a relational space and may contradict one another, interfere or align (cf. also Mol 1999, 2002).
Further, we pursue a 'practice-oriented' approach to documents (Asdal 2015; Asdal and Reinertsen, in press 2022) attuned towards analysing the relations that documents enact, the issues they produce, how documents work as tools of valuation (Asdal 2015) and how they provide space for different forms of expertise and valuation techniques. From a protestant ethic, a moral economy and market truths to 'the good economy' It is now more than a hundred years since the German scholar Max Weber published The Protestant Ethic and the Spirit of Capitalism, a work that has come to stand as a landmark in the analysis of modern capitalism. A key tenet of the book and his argument was, as the title indicates, the intimate interlinkages between ethics and capitalism (Weber 2001 [1904]). According to Weber, capitalism is not a non-moral enterprise; on the contrary, it is constituted upon a particular version of ethics: a work ethic inspired by Protestantism and oriented towards meticulous, diligent work, not for the purpose of showing off that the work paid off in expensive lifestyles or conspicuous consumption, but for the purpose of profit-and the re-investment of that profit-as an end in itself. Interestingly, it is not the economy itself which is good, but the surplus value which is constantly produced and re-invested-read as a sign that people pursuing profit are morally good subjects. Obviously, Max Weber is not the only scholar who has contributed influentially to how we understand economy-normativity relations. Even more so, perhaps, this is the case for E. P. Thompson who, in his seminal work on the English working class, suggested the notion of 'moral economy' to capture the tensions and conflicts emerging from the transition to a capitalist economy (Thompson 1971). Thompson's point was not to argue for a particular moral in-or integral to-the economy, nor, as in the case of Weber, an ethics spearheading the drive for surplus. On the contrary, Thompson's 'moral economy' was rather the moral of the English working class and the normativities embedded in their ways of living and doing their economy, a moral economy which was set aside and suppressed by the transformation to a capitalist economy. In this analysis, there is no space for a moral economy inside the capitalist logic, but a moral economy that can seek to counter and act as a critical force against a capitalist logic. Hence, the moral is not spearheading the economy, but is rather the economy's outside (see also Fourcade 2017). Michel Foucault-who in his governmentality lectures (2008) included an analysis of the choreography of the neoliberal and classical liberal economy-may also be included in this list of scholars who dwell on the economy and its normativity relations. In fact, in some ways Foucault's take resembles that of Thompson: When analysing the turn to markets in liberal and neoliberal society, Foucault suggested that the market becomes its own truth, so to speak: a 'regime of veridiction', where the morals or normativities that used to be linked up with issues of exchange were replaced with market truths (Foucault 2008). In this turn to a market society there is no space for moral sentiments or economies beyond the market price, which does not distinguish between the correct price and that which is good. The market price is correct, thus, by implication also good. Foucault does not draw upon this notion of the good himself, yet it makes sense, we suggest, to understand Foucault in this way.
But what happens if we start to investigate and ask more explicitly about the good and the economy entanglements that might be involved? Before moving there, let us dwell a little bit longer on a few more relevant scholarly contributions. Processes of economization versus 'versions of the economy' In part indebted to Foucault, in part to the pragmatism of John Dewey (1939), but perhaps most notably to actor-network theory, scholars in the field of social studies of markets have analysed the processes and practices by which entities, devices and practices become economic (Callon 1998; Callon et al. 2007). Hence, rather than taking for granted that the economy is already a sphere in society, the focus is more pragmatic and concerned with how the economy is being realized in practice-in its becoming, so to speak. The notion of 'economization' has been put forward to capture this as a step-by-step process (cf. Çaliskan and Callon 2009; see also Chiapello 2015 and her work on what she captures as 'financialization'). This is a more analytically and empirically open approach than analyses that state, for instance, that the economy is neoliberal, or that there is a pre-given capitalist logic, or that capitalism is an 'ism'. The term 'economization' nevertheless leaves little space for exploring if and how there might be other dimensions at stake than the economical. This is despite the fact that the approach interestingly addresses how the economy is made to perform not only by way of quantifications, but also by qualifications-or rather 'qualculations' (Cochoy 2008). How may such approaches be extended towards including the qualitative in more encompassing and broadened ways? And moreover, how may such approaches also be extended to include economy-society and economy-state relations? If such extensions are not done, the approach risks-despite its agnosticism on behalf of the economy and capitalism-putting too much weight on that which is seen and conceptualized as economic. We risk not providing space for the possibly extra-economical, for concerns that are collective and must be treated collectively or, for instance, politically. While this approach has proved most fruitful in studying markets and processes of economization, the concepts developed for studying the economic in the making may leave us poorly equipped to investigate relations between what is economic and what remains external to this economy. How, for instance, can we study the relations to a 'society' or a state in which a process of economization occurs-not simply as society economized, but in its constitutive interrelations? A key dimension to Foucault's analyses was indeed that of always including such interrelations: The governmentalities and governmental technologies that he traced and analysed were profoundly linked up with state formations-not the state in a given definition though, but as an 'entity' being profoundly shaped and remade in relations, for instance, to the market. Hence, policy programmes and the art of government are key, in his analyses, to grasping the formation of the economy-and again, not in a straightforward fashion, as that of proclaiming, for instance, the existence of a neoliberal economy or a given definition of the state. The investigative method Foucault worked by does not allow for predefined grand or universal categories in this way (Asdal 2020). Instead, Foucault was constantly seeking to grasp how practices take part in transforming the categories we thought we knew the composition of.
In Foucault's analysis of the neoliberal, his constant focus is on the market-not the market in and of itself, however, but rather how (in the case of the neoliberal) the state becomes modelled, as he argues, upon the market: The market becomes the model for the state (Foucault 2008). Foucault's innovative analyses and conceptual work-especially his thesis of 'governmentalities' and his analyses of the turn to markets (that of how the markets became 'organized for collective concerns', to put it with Frankel et al. 2019)-have made tremendous impact on scholarly debates. When Foucault lectured on what he coined governmentality and biopolitics in the late 1970s, it was as if he already sensed the larger market turn which was to 'fall upon' so many national contexts during the ensuing years. Thus, his analyses gave almost immediate meaning and became a key for interpreting the new economic and societal conditions of the market society (Barry et al. 1996;Burchell et al. 1991). Also regarding studies of how life and the new life sciences are made part of the economy in the pursuit of surplus, Foucault's analyses have been a major source of inspiration (see, for example, Cooper 2008;Yoxen 1981;Rose 2007;Rajan 2006). Regarding the latter, the inspiration does not only derive from Foucault's governmentality studies, but also and perhaps just as much from his historical analyses of the emergence of biology and classical political economy (Foucault 1970). Something may nevertheless have escaped Foucault's attention-perhaps because this 'something' was not so easily detectable at the time? What we will suggest is that the bioeconomy-as we can trace it in the policy visions and programmes across the EU, European nation states and the OECD from the mid-1990s-is an entry point for alerting us to something new. This 'new' is precisely about the economy's entanglements with 'the good'. The issue, then, is not simply a market turn and the market as the regime of veridiction, a truth of its own. What we are witnessing, we suggest, is another and indeed more complex configuration, or set of configurations: The new economy is in need of justification beyond its contribution to surplus and on top of its eventual success at the market. And it is this economy's relation with 'the good' that we need to trace and trouble. In pursuing this task, Foucault's practice-oriented, profoundly relational and conceptually inventive investigative method may serve as the inspiration just as well as his substantive claims in his lectures in the late 1970s. As we will show, the bioeconomy carries a series of distinct and unique configurations between 'the good' and the economy. But what is evident for the bioeconomy also serves as an entry point for exploring how former and other versions of the economy have also carried distinct configurations of good economy relations. Searching for and exploring such configurations in former versions of the economy is one way of grasping both the promises and troubles of the bioeconomy. In analysing how the economy is essentially also about the extra-economical, the major contributions by Weber and Thompson have been and still are vital. Weber's protestant ethic is arguably a key to the spirit of capitalism, and to the drive for surplus and its re-investments, hence, to the very establishing of capitalism in the first place. Thompson's moral economy, however, is one less entangled with the economy-or with capitalism, to be precise. The moral economy is that which lies outside capitalism. 
And it is in this way that the term has been taken further in other and broader societal contexts; that of detecting and arguing for how various communities are morally ordered (for example, Kohler 1994). In contrast, what we suggest is a conceptual move that neither limits itself to ethics or 'morals', nor leaves questions of the normative or 'the good' to that which resides only outside the economic or capitalist realms. The concept of 'the good economy' is intended to work in precisely this way. So let us address this suggested conceptual move and tool more directly and in more detail. Charis Thompson, in her book Good Science (2013), points to how contemporary science is entrenched in ethics and, thus, needs to be analysed and managed accordingly. Ethics can no longer be treated alongside science; it must be seen as integral to it. Thompson does not expand much on the notion of 'good science', nor does she deal with economics or the economy. Our notion of 'the good economy' still takes inspiration from her term, while also addressing normativities beyond ethics. Other academic moves from within the broader field of science and technology studies may help us move in such extended directions. Care studies is one such key move that actively addresses the qualitative and the normative in medicine, health and food practices (see, for example, Mol et al. 2010;Druglitrø 2018). "What is a good tomato?" Heuts and Mol (2013) ask and address how quality can be engaged with and cared for in widely different registers of valuing. In related, but not similar, ways, contributions emerging from the field of valuation studies (Dussauge et al. 2015;Muniesa 2011) are concerned with how practices of valuation are integral to the economy. So far, these contributions have not sought to analyse the economy 'at large'-that is, not analysed the more comprehensive or encompassing versions of economies that might emerge from such diverse forms of valuation practices. Furthermore, these have not addressed the economy in relation to the explicit issue of 'the good', but rather pursued the valuation approach as more of a value-neutral term, so to speak. Rather than asking which forms of the good are at stake, contributions have so far been more concerned with the forms and means of valuation that perform the economy (Muniesa 2014; see also Callon et al. 2007;Callon 1998). The concept of 'the good economy' draws from these contributions, but extends them towards analysing the choreographies and compositions of the 'versions of economies' in question (cf. Asdal 2014a) and the entanglements between the economy and 'the good'. In doing this, the question of what 'the good' is thought to consist of may be addressed directly and not only the means by which 'the good' is enacted, but also which objects are taken to perform the good, including those in need of being cared for (Asdal 2014a). The critical bioeconomy literature (for example, Goven and Pavone 2015; Birch 2017a; Birch and Tyfield 2013) has, to a much larger degree than valuation studies and contributions within the field of social studies of markets, addressed the economy. In putting forward 'the good economy' concept, we share the ambition of delineating the composition, choreography and formation of the bioeconomy, yet we do not want to be bound up with already too familiar notions of what this formation consists of, such as 'the neoliberal' (see Birch 2019 for a problematization of the notion of 'neoliberal' bioeconomies). 
Rather than a label of what the economy is, the 'good economy' concept is intended as an investigative tool to question and trouble the composition of the so-called bioeconomy and its entanglements with 'the good', including 'the good' that it performs and invests in. The good green bioeconomy: troubling the inherent good of the 'bio' The bioeconomy has, during the past decade, become the object of major policy initiatives across Europe and beyond. Of key importance in driving this momentum are the supranational institutions of the OECD and the EU, epitomized in a set of comprehensive policy documents. In these documents, both institutions envision the bioeconomy as the desired future towards which our societies should strive, and they express urgency for national policymakers and governmental agencies to help realize this vision (OECD 2009; EU 2012a, 2013, 2018a). As asserted by the European Commission in the document Innovating for Sustainable Growth: A Bioeconomy for Europe: A strong bioeconomy will help Europe to live within its limits. The sustainable production and exploitation of biological resources will allow the production of more from less, including from waste, while limiting negative impacts on the environment and reducing the heavy dependency on fossil resources, mitigating climate change and moving Europe towards a post-petroleum society (EU 2012a, p. 4). Here, the Commission envisions the bioeconomy as the means to tackle the major challenges of unsustainable resource use, fossil fuel dependency and climate change, and to move away from a society built upon petroleum. Interestingly, the bioeconomy stands out as inherently good because of its 'bio' component-since the economic activities will be grounded in the biological, they will by definition be good. Indeed, this makes the bioeconomy a most blatant example of how 'the bio' is made to stand out as 'the good', thus, ensuring by its very bio 'the good economy'. Yet, as we will now demonstrate in more detail, this assumption not only works to present a specific notion of the bioeconomy, but also to downplay deep tensions and contradictions currently embedded in the concept. In order to highlight this, we will introduce how 'the bio' is in fact radically different things in different versions of bioeconomies, thus, enabling quite radically different economies, including their transition narratives. This in turn leads us to observe that the notion of society is curiously absent in the current initiatives, making the bioeconomy a quite different policy project than were its historical predecessors. What, exactly, is the 'bio' component of the bioeconomy, and how is 'it' expected to be and to do good? Other scholars have already noted that there exist multiple coexisting definitions of the bioeconomy (Bugge et al. 2016; Pavone and Goven 2017). The European Commission, in the above-mentioned document Innovating for Sustainable Growth: A Bioeconomy for Europe (EU 2012a), delineates the bioeconomy as follows: "The bioeconomy encompasses the production of renewable biological resources and their conversion into food, feed, bio-based products and bioenergy" (EU 2012a, p. 16). In contrast, the OECD document entitled The Bioeconomy to 2030: Designing a Policy Agenda (OECD 2009) is surprisingly different. Here, the bioeconomy is discussed interchangeably with 'biotechnology', and the document provides an influential argument for new biotechnology as an engine of economic growth (on this initiative, see Hilgartner 2007; Parry 2007).
Its vision of the bioeconomy is one where "biotechnology contributes to a significant share of economic output" (OECD 2009, p. 22). Economic growth and innovation on the basis of biotechnology and the life sciences is, thus, what drives OECD's version of the bioeconomy. In other words, what distinguishes this economy as 'bio' is not a shared resource base from biological and non-fossil inputs, but rather a base of biological knowledge, biotechnology and the life sciences. Despite proposing overlapping policy prescriptions, the two institutions hence define and operationalize the 'bioeconomy' in most different ways. This in turn enables actors to adopt diverging versions and to modify these further within their specific context. The Norwegian government's bioeconomy strategy (NFD 2016) may serve as a case in point. By understanding the bioeconomy as "value creation based on production and exploitation of renewable biological resources in contrast to non-renewable carbon" (NFD 2016, p. 13, our translation), the Norwegian strategy is clearly based in the biomass definition rather than the biotech definition. The biomass definition carries strong implications for how natural resources are viewed-as biomass and a substitution for fossil inputs. This narrative of transition by substitution away from fossil inputs entails that increased utilization of biomass in itself is desirable because it represents a step towards realizing the bioeconomy-thus, 'the good economy' by implication. The biotechnology version, on the other hand, relies on a different logic: Biotechnology promises to be good because it decouples economic growth from constraining limits by harnessing the regenerative or reproductive capabilities of life itself. Thus, the biomass and biotech versions of the bioeconomy value 'the bio' that is to take part in the future bioeconomy very differently-yet they both take for granted 'the bio' as the good. Similarly, while all the bioeconomy policy documents emphasize the need to expand the 'bio' component of the economy-whether biomass or biotech-they diverge in how they position themselves vis-à-vis fossil fuels and the petroleum economy. As noted above, the EU documents explicitly envision a post-petroleum society: "Greater use of renewable resources is no longer just an option, it is a necessity. We must drive the transition from a fossil-based to a bio-based society, with research and innovation as the motor" (EU 2013, p. 4). The bioeconomy strategy is in itself a response to this, since, as it itself states, "[a] strategy is also needed to ensure that fossil fuels are replaced with sustainable, natural alternatives as part of the shift to a post-petroleum society" (EU 2013, p. 2). The Norwegian bioeconomy strategy (NFD 2016) similarly adopts a clear transition narrative, but rather than highlighting the need to move away from fossil fuels, what is emphasized is rather what the bioeconomy moves toward. The strategy opens by stating that it realizes that the bioeconomy is "central for the transition toward a low-emission economy" (NFD 2016, p. 5). This economy is further characterized as having a strong potential for value creation, more efficient use of renewable biological resources, new growth, a 'green shift' within the economy, increased competitiveness for Norwegian industries and firms, cross-sectoral policy initiatives and interdisciplinary research and innovation (NFD 2016, p. 5). 
The OECD, in its attention to the potential of biotechnology, stresses the importance of incentives to "reward environmentally sustainable technologies" and the "use of renewable biomass" (OECD 2009, pp. 6, 8), but does not explicitly adopt a transition narrative. Rather, it indicates some potentially more controversial sides of realizing the bioeconomy: "disruptive and radical technologies … may lead to the demise of firms and industrial structures, creating greater policy challenges, but they can also result in large improvements in productivity" (OECD 2009, p. 16). In these diverging versions of the bioeconomy, there are clearly great inherent tensions. Yet in the documents themselves, these are not addressed or acknowledged-neither in the founding documents from the supranational institutions, such as the OECD and the EU, nor in the national documents in which these versions are bound together, such as the Norwegian bioeconomy strategy. They all take for granted that making the economy 'bio' is inherently good. Furthermore, these documents stay within the bioeconomy relation and do not address how these tensions might play out in society at large. It is precisely this feature of the bioeconomy documents that we need to trouble. In order to better grasp this dimension, it is necessary to expand our empirical scope and contrast the bioeconomy documents with other similar policy initiatives that conceive of 'the good' in other ways. The good blue economy: enabling a fully economized nature by bringing in economics and its 'tools of valuation' Alongside what we may summarize as the green bioeconomy discussed above, there exists a notion of a blue bioeconomy concerning the ocean industries. These policy initiatives are interchangeably termed 'blue growth' (EU 2012b), 'the blue economy' (EU 2014), 'the ocean economy' (OECD 2016, 2019), or simply 'the blue bioeconomy' (EU 2018b). Although the green and blue bioeconomies initially developed as parallel strands of policy, they are increasingly merging, as illustrated by the EU's revised bioeconomy strategy from 2018, in which "unlocking the potentials of oceans and seas" is put forth as one of three main dimensions of the European bioeconomy (EU 2018a, p. 1). Yet the notion of the blue bioeconomy offers a different conception of 'the good' than that of the green versions. A first key difference is the role of fossil fuels and petroleum production. While we showed above that the 'transition by substitution' argument is a strong narrative within the green bioeconomy, the opposite is the case in the blue bioeconomy. Both the EU's Blue Bioeconomy initiative (2018b) and the OECD's Ocean Economy project (2016, 2019) include all existing ocean-based industries, of which offshore petroleum is a key contributor. In the case of Norway, where the petroleum industry is indeed the nation's largest offshore industry, the government's Ocean Strategy explicitly states that "[t]he ocean industries are part of the bioeconomy" (NFD and OED 2017, p. 73, our translation). The second key difference is also the most striking feature of the blue bioeconomy: the efforts made herein to delineate and calculate its potential economic value. As expressed by the OECD, if the oceans are explored, monitored, governed and exploited in the best possible way, we may, in 2030, have realized a 'trillion-dollar ocean' (Jolly and Stevens 2016).
This numerical growth potential has set the agenda in both national and international contexts (for the national example of Norway, see the above-mentioned 'Ocean Strategy' (NFD 2016); for an international example, see the United Nations' High-level Panel on Building a Sustainable Ocean Economy (The Ocean Panel 2020); for detailed analysis of this point, cf. Reinertsen and Asdal 2018). While fostering growth has indeed been a key preoccupation of all the 'blue documents' from the EU's Blue Growth strategy onwards (EU 2012b), the OECD's Ocean Economy project went further than conventional estimates of potential industrial growth. Being based in the OECD's Futures Project and employing policy foresight techniques, it sought to advance the methods for assessing potential industrial growth by integrating the value of the ocean's ecosystems directly into the calculation. More specifically, what the OECD sought to calculate was the value of 'ecosystem services', understood as the contributions by nature to human life and society. Hence, economic capital and natural capital were made commensurable and integrated into the same economic model. The method promises to demonstrate how environmental degradation of the ocean will negatively impact not only nature as such, but also the growth potential of the ocean economy-its very size. Unsustainable growth will, thus, diminish not only the ocean's 'bio value', but also its financial value. Hence, co-calculating the value of natural capital and financial capital, the OECD asserts, will enable us to take better care of the ocean and its resources, since overexploitation will affect the value of ecosystems negatively and thereby in effect reduce the ocean's economic potential. The means to do so is to make the so far noneconomized nature integral to the economy quantifiable and calculable in economic terms. In short, nature is taken into account by bringing it into a regime of accounting (Asdal 2008); it is being economized (Çaliskan and Callon 2009). Although seemingly new and innovative, this version of economization is based upon relatively well-established neoclassical economics, of which the model for taking nature into account has historically proven highly difficult to realize in practice (Asdal 1998; for discussions of related practices of valuation of nature, see Asdal 2008;Fourcade 2011;Chiapello 2015). Moreover, when reading the OECD report more closely, it becomes apparent that even the effort of calculating the ecosystem services in the first place-hence the very procedure of economizing-creates trouble and represents unresolved challenges (Nebdal 2019). The OECD's proposed new calculative tool is nevertheless put forth as the key to a transformed and more sustainable economy (OECD 2016; EU 2018b). The hopes, then, that are being invested in these 'tools of valuation' are that economic growth will be balanced up correctly against the long-term gains of conservation due to the deployment of a complex series of price-setting mechanisms. The promise then is that the tools of valuation of economics will secure that nature is being ascribed the correct price. Hence, nature is brought into economics and, simultaneously, is becoming an issue for economics and economists. Thus, the ocean economy is also becoming an expert issue. In fact, the ocean economy will become good only insofar as these 'tools of valuation' of economics are put to use. Moreover, the economy is good only insofar as nature is being economized. 
'The good economy' is a fully economized nature. The OECD approach to the blue bioeconomy is characterized by this effort to take nature into account in a manner that, in theory, discourages unsustainable overexploitation and mismanagement of the oceans. That of not preserving the services provided by nature's ecosystems is turned into a risk-for the economy. As such, the ocean economy of the OECD is an economy in which the issue of pursuing economic growth is extended from that of engaging in industrial activities for short-term gain, to also include the conservation of ecosystems for long-term gain. Another blue economy report, this time from the Norwegian context, offers a particularly strong example of how the promise of economizing future bio-values may be used, not to preserve ecosystems, but to leverage political will for rapid, large-scale industrial growth. In 2012, a report titled Value Created from Productive Oceans in 2050 calculated the Norwegian marine industries to have the potential of at least a fivefold growth in value creation by the year 2050 (SINTEF 2012). The report was commissioned by two Norwegian scientific academies and was prepared by a working group with members from academia, industry and the Research Council of Norway, while the research organization SINTEF-one of Norway's largest-functioned as its secretariat. We highlight this report because its promise of grand growth proceeded to act almost immediately upon governmental policy: It was quoted on the opening page of a white paper on Norway's ambitions for expanding the seafood industry (MFC 2013, p. 8), and subsequently endorsed by Parliament (PNP 2013, p. 18). Several high-level politicians have since repeatedly affirmed their commitment to fulfilling this potential (for detailed analysis, see Reinertsen and Asdal 2019). Similar to the ocean economy report of the OECD, this report also works by a set of 'tools of valuation'. Yet the precise tools of valuation at work are different, as they rely not on neoclassical economics, but rather on a toolbox of business school models and strategies. In short, this entails, in practice, a loose combination of value chain analysis and SWOT analysis ('SWOT' being the acronym of 'strengths, weaknesses, opportunities, threats', which is what this tool seeks to identify for the project or enterprise in question). Furthermore, the very definition of 'value creation' enacts the blue economy as an enterprise, rather than a part of the national economy. Concretely, the report explicitly defines value creation "synonymously with turnaround or revenue from sales generated by the marine sector", rather than "contribution to GDP" (SINTEF 2012, p. 14), which is the definition used by the Ministry of Finance. What these business strategy tools in combination enable is to imagine a formidable future growth that also incorporates in its very calculative process the challenges and preconditions that might serve as barriers to its realization. This fascinating feature rests on an intricate calculative manoeuvre on the textual level. Quite concretely, what the report does is to first explicate future 'opportunities' and 'threats' for the marine industries (including the opportunity of expanding global markets and the threats of climate change, pollution from the aquaculture industry itself and lack of enough feedstuff), and then imagine what the growth might be like in the year 2050 if the threats are overcome. This is done by reformulating the threats into 'criteria for growth'. 
In the case of aquaculture, the report states that, "provided the aforementioned criteria are met, it may be possible to achieve production levels for salmon and trout of 5 million tons in 2050" (SINTEF 2012, p. 46). This effectively amounts to a fivefold growth, yet with a built-in assumption that the major threats identified by the same report have already been resolved. In effect, then, qualitative challenges and risks are built into the calculation, yet subsequently decoupled from the quantified growth potential. In effect this means that high risk and grave problems do not reduce the growth potential-quite the contrary. Paradoxically, the potential financial values are expanding when the bio-values are conceived as under pressure. However, not only are the distinct tools of valuation different here from the OECD version of the blue economy. It also invokes a mode of policymaking and governmental action characterized by 'will and determination' to act: [W]e believe that now is the time that politicians in Norway, to a greater extent than in the past, and much as they did for the oil and gas industry, commit themselves to the development of future marine-based industries so that we can generate greater value from the resources to which we already have access. We are seeing only the start of what can actually be exploited from the oceans by means of value generating activities, and we need political will and determination if we are to grasp the opportunities before us (SINTEF 2012, p. 12). In appealing to politicians to 'commit themselves' and help 'grasp the opportunities' of the ocean, the report here echoes longstanding tropes from business school curricula that leaders should be bold, take risks and dare to act in the face of uncertainty (Dewing 1930, in Muniesa 2011Doganova and Eyquem-Reynault 2009). Again, the blue economy is here enacted as a business issue and politicians are cast as investors in the enterprise (Reinertsen and Asdal 2019). This, in sum, enables policies for massive growth. Hence, in the context of the Norwegian contemporary blue economy, 'the good economy' is not simply an economy fully economized, but one that acts like an enterprise and has the politicians act as its investors as well as its managers. Interestingly, neither the OECD's ocean economy nor the Norwegian version provides space for the political in a conventional sense, instead turning the challenges to be either economics (finding the right price) or business (knowing the value and acting bravely). Neither of them provide space for political procedure, politics or societal concerns. The good aquaculture economy: justifying expansion by staying small and local The bioeconomy is often portrayed as something new, as an economy that will follow the economy of the present and the longstanding carbon economy. Obviously, this story can be told very differently. For centuries, 'the bio' has been key to growth and prosperity. In Norway, for instance, the export of timber and fish has been vital to the national economy. For the Norwegian case, aquaculture in the form of salmon farming was taking shape in the 1970s-precisely the enterprise that is today often presented as a flagship in the new, blue bioeconomy. In the early 1970s, the optimism and eagerness to invest in this new industry was 'second only to oil' along the coast, as stated in a 1973 parliamentary debate about how the authorities might aid, or reign in, its further development (PNP 1973a, p. 443). 
However, the growth of this new sector of the economy was, for all its promise, not simply regarded as good news pure and simple. Different models of aquaculture development-in terms of scale, geographical location or ownership-were judged very differently. Indeed, the first decisive action taken by (a unanimous) Parliament in 1973 was to limit the possibilities for an industrial path of growth in fish farming, by making any further expansion contingent upon production licences, with a maximum volume per fish farm (PNP 1973b). A Norwegian Official Report (NOU) on aquaculture was completed in 1977, a report that is widely regarded as foundational for Norwegian fish farming and its regulation and support by the state. In continuation of the initial limitations on large, centralized units, the report devoted a short chapter to explicate the aims of Norwegian aquaculture: 1. to make use of the country's resources in order to increase food production, 2. to maintain existing jobs and habitation, increase employment and provide opportunities for a more diverse basis for livelihoods [naeringsgrunnlag] in districts with weak productive or commercial activities [naeringsliv], 3. to build a rational trade, providing practitioners with income comparable to what is achieved in other trades (Ministry of Fisheries 1977, p. 24). Making 'rational use' of the possibilities, the report states, was a 'matter for society as a whole'. The aims of the new venture, then, were articulated to fit within overall societal aims, reflecting a politically sanctioned version of a good society. Thus, this growing, new sector of the economy is justified by reference to the overall aims it should help fulfil, and the ways it might fit together with and complement already existing activities: increased food production, rural development and new opportunities specifically directed to vulnerable, coastal districts, and lastly, employment and wages comparable to fisheries and agriculture. For aquaculture to help achieve such aims, it needed to be developed in a certain way: industrial growth-understood as large, mechanized units and 'external' capital-was curtailed in favour of smaller, geographically dispersed labour-intensive fish farms with local ownership and locally available labour and natural resources. Growth itself was a potential problem to be controlled-first, in terms of the size of individual fish farms, but also in terms of the total national production volume, which risked outpacing demand, with price collapse. The first clause in the legislation ensuing from the public report states controlled growth to be a key purpose for the granting of licences: "that expansion takes place such that production stands in a reasonable relation to possible sales [omsetningsmuligheter]" (PNP 1981, p. 7). 'Reasonable' expectations dictated a steady, controlled expansion, to not overwhelm export markets abroad and to fit a new and profitable venture into remote, coastal communities at home. Hence, the growth of aquaculture in this period needed to be adjusted toand justified in terms of-an overall politically sanctioned vision of societal aims. The claim to be good was anchored in the ability to be brought into harmony with such aims. Before drawing these indeed quite different versions of 'the good economy' together, let us first move towards the version that has come to be seen as the bad and the ugly of economies, namely the oil economy. 
And let us again draw on the Norwegian example, demonstrating how this version of the economy was indeed also deeply and explicitly about the good-however, in a radically different way than the current bioeconomy policy documents and strategies. The good oil economy: fostering 'a qualitatively better society' In 1969, international oil companies struck oil on the Norwegian continental shelf. This major reservoir inaugurated the 'oil age' and what is often referred to as the 'oil adventure' of Norway. Yet, at the time of discovering oil, Norwegian politicians were cautious that the potential new wealth would not automatically be positive for Norwegian society and its economy. In other words, the oil economy was not necessarily a good economy: the good economy had to be crafted. In 1970, a unanimous Norwegian Parliament articulated "Ten Oil Commandments", in which core principles of Norwegian oil policy were established (cf. Table 1). Most important of these were 'national government and control' over the exploitation and distribution of petroleum resources and revenues. The commandments, thus, explicitly state a strong moral stance: these resources belong to the nation and should be governed in the nation's best interests. The strong moral connotations of the Ten Oil Commandments were operationalized and further explicated in the following years. In 1973, the Norwegian government issued a white paper which explicitly warned that uncontrolled use of the oil revenues would have severe consequences for Norwegian society and its economy. Thus, interestingly, the oil economy could turn into a bad economy, if not dealt with in a proper way. In order to avoid this, the government asserted that petroleum activities must contribute to building a qualitatively better society: The petroleum discoveries in the Northern Sea make us richer as a nation. The Government holds that one first and foremost must use the new possibilities to develop a qualitatively better society. One should avoid that the result only entails a quick and uncontrolled expansion in the use of material resources without society at large being considerably changed. The guidelines that are drawn up for the petroleum activities and the use of revenues must therefore be part of a planned transformation of Norwegian society. […] Democratic institutions must gain actual government [herredømme] of the development in increasingly more areas. The economic possibilities must be used to create increased equality in standards of living and in other ways to prevent social problems and to develop a more environmentally and resource-friendly production. The welfare society must be further expanded and the composition of private consumption must be influenced through an active consumer policy. Local communities must be strengthened and developed with a view to a better environment as a whole (Ministry of Finance 1974, p. 6, italics in original, our translation).

Table 1 The Norwegian Parliament's "Ten Oil Commandments" (PNP 1970), translated from the Norwegian original in Reinertsen 2016, p. 154. The committee wishes to express: 1. that national governance and control must be secured for all activities on the Norwegian continental shelf; 2. that the petroleum discoveries are exploited such that Norway becomes as independent as possible of others with regards to the supply of crude oil; 3. that there are developed new business activities with a basis in petroleum; 4. that the development of an oil industry must take place with the necessary concern for existing business activities and the protection of nature and the environment; 5. that the burning of exploitable gas on the Norwegian continental shelf may not be accepted, with the exception of shorter periods of testing; 6. that petroleum from the Norwegian continental shelf should as a general rule be brought ashore in Norway, with the exception of single instances where sociopolitical concerns serve as a foundation for a different solution; 7. that the state will be engaged on every purposeful level and contributes to a coordination of Norwegian interests within the Norwegian petroleum industry and to the build-up of a Norwegian integrated oil community with a national as well as international perspective; 8. that there will be established a state oil company that may maintain the state's commercial interests and have a purposeful cooperation with domestic and foreign oil interests; 9. that there north of the 62nd latitude will be chosen a pattern of activities that accommodates the special sociopolitical concerns facing this region; 10. that Norwegian petroleum discoveries may to a larger extent expose the Norwegian foreign policy to new tasks.

Integral to the white paper's vision of a good society-one of social equality, strong local communities and paid work for all-is an active government. In practice, the white paper envisioned the newfound petroleum riches to enable shorter working days, expanded welfare services and the inclusion of new groups into the workforce. The petroleum revenues would not ensure this on their own. Quite the contrary, the white paper repeatedly asserts that the infusion of oil into the economy might produce multiple "problems of transition" (Ministry of Finance 1974, p. 6). The oil might easily become a negative force, a bad economy, if the industry were allowed to grow unchecked and too quickly, and if the revenues were spent through unleashed consumer spending. In our own words: if the oil economy was to become a good economy, on the contrary, it had to be built wisely and determinately into society. The white paper's key concern was to outline how a smooth and beneficial transition into an oil economy might be done. The key measures for doing so were to ensure a 'moderate tempo' of exploration and production, to commission Norwegian companies and train Norwegian employees in the new sector, to build local communities around the new installations and to channel revenues abroad through investments and sales (Ministry of Finance 1974, pp. 8-10, 15). In this way, the economy would be spared multiple pressures-upon prices, salaries, employment and domestic mobility. The white paper, thus, expressed a deeply ambivalent view of the expected growth and repeatedly underlined its potential problems, should it be allowed to unfold too quickly.
It is cautious and tempered with respect to the potential benefits, and urges restraint. The policies must be in place to ensure that the coming growth and transition is becoming a positive force. What the white paper never doubts, however, is the growth itself. It will come: the question is how to control it in order to ensure that it fosters a "qualitatively better society" (Ministry of Finance 1974, p. 6). If the coming oil economy is governed in an active, deliberate manner, it might become 'a good economy'. It will never automatically be good-it must be made into a good economy, through the determined policy and government of a social democratic nation state. Drawing versions of 'the good economy' together As we now know, the oil economy turned into a critically bad economy, but in a quite different way than what was envisioned in its infancy: today, the oil economy stands out as essential to move away from in order to establish an economy that is more environmentally friendly and, most importantly, not destroy the world's climate. The bioeconomy, as we saw, presents itself as an economy that will do precisely this: provide an alternative to the carbon economy. However, the contrasts with these other versions of the economy-such as the oil economy and the early phases of the aquaculture economy-also help to highlight other differences between these economies and policy initiatives. One of these is how the bioeconomy differs not only in its 'bio'-'economy' relation, but just as importantly in its 'economy'-'society' relation. Whereas the early oil and aquaculture economies were problematized in terms of how these new economic activities were being inserted into society, society does not exist in the same way in the policy documents on the bioeconomy. Rather than an economy inserting itself into society, the bioeconomy is oriented towards inserting 'the bio' into the economy. Interestingly and importantly, this insertion of 'the bio' into the economy is not envisioned to happen by societal means, but by way of the tools of valuation of economics, either neoclassical economics, such as ecosystem services, or business school models and strategies. This is in itself an important feature of the bioeconomy in both its green and blue versions. Even more so, the bioeconomy is presented as if such insertions come without frictions and dilemmas. The bioeconomy is enacted as if it is unproblematic in itself, and as if the bioeconomy relation is without tension, as the 'bio' is made to stand out as the essentially good. This stands in sharp contrast to the oil economy of the 1970s which actively pointed at and discussed problems and conflicts of interest that were likely to arise as this new economy was making its impact on society. The new oil economy could become a good economy-a 'qualitatively better society' could be the result of it-yet this economy-society relation had to be actively steered and governed since the opposite could just as well be the consequence, if society did not have its hand on the process. Similarly, the early aquaculture enterprise had to justify itself according to the societal gains it could eventually contribute to fulfil. The active governing of this new economy was key. Modesty and staying small was enacted as a precondition to its becoming a good economy. We have already pointed out how 'the good' of the bioeconomy is conceived as given in the 'bio' itself. 
While comparing and contrasting with other versions of the economy, what becomes apparent is how society is curiously lacking in the bioeconomy policy programmes. But so also is an explicit discussion of dilemmas, tensions and conflicts that arise as old and new versions of 'the bio' are to be made integral to the economy-and indeed also society more broadly. How to explain this absence of the political and the societal? One way of explaining it is precisely by addressing how realizing 'the good economy' is turned into a calculative endeavour, thus, essentially also an expert challenge. Hence, rather than an issue of how to insert the economy into society and by which tools and means, the issue becomes simply that of employing the right tools of valuation-of economics. Moreover, this version of the good economy is one where 'the bio'-the nature upon which the bioeconomy rests-is fully economized, thus, transformed into an economic object. As we saw, the blue bioeconomy of the OECD was a prominent example of this. Also, the Norwegian version of the blue economy rests upon tools of valuation from economics-yet, in this case, business methods and calculative strategies, including narrative strategies. Interestingly, whereas society is apparently lacking in the OECD version, this is a version in which the state is modelled upon the enterprise and politicians are cast as investors encouraged to be bold on behalf of the enterprise. Hence, whereas Foucault, in his governmentality lectures, identified how the state in the neoliberal society was modelled upon the market, this is something new and different: rather than being modelled upon the market, the state is being modelled upon the enterprise. 'The good economy' is, thus, not so much a straightforward market economy, but rather an enterprise society. And interestingly, if this observation is correct, the point is not so much that there is no society in which the economy is to be inserted, or that the moral economy resides outside of the economy, but that there is no moral economy outside of the enterprise: society and the enterprise are merging into one. Conclusion The overriding objective of this paper is to suggest a recasting of the bioeconomy, most notably by putting forward the notion of 'the good economy' as an analytical working tool made for opening up and investigating this economy further. Here we return to Max Weber as a source of inspiration: working conceptually was precisely Weber's suggested method. In The Protestant Ethic and the Spirit of Capitalism, the argument was that even if a concept was already put forward at the very beginning of a study (in Weber's case, the notion of a distinct 'spirit of capitalism'), an eventual precise definition of the proposed concept could only be substantiated step-by-step as the study was moving forward, and thus, only be defined at the study's very end (Weber 2001(Weber [1904). Importantly then, methodswise, Weber's aim was not "to grasp historical reality in abstract general formulae but in concrete genetic sets of relations which are inevitably of a specifically unique and individual character" (Weber 2001(Weber [1904, pp. 13-14). Weber noted that such conceptual moves would always have an individual tone; analysing the relevant phenomena from the chosen viewpoint would never be the only possible alternative, as others could always see these phenomena differently. 
And rather than working from a precise definition at the start of the study, the conceptual innovation is more a way of envisioning the relevant phenomenon-what Weber formulated as "a provisional description" (Weber 2001(Weber [1904, and which may also be linked to his famous concept of 'ideal types' (Weber 2012). 'The good economy' is our point of departure in a quest for grasping the emerging bioeconomy and the normativities involved. The bioeconomy is often presented as something radically new-an economy that will come and must come after our current economy. Ironically, however, one could argue the other way around-that bioeconomies are the oldest of all economies, the economy that we have and currently also still heavily rely on. So the newness, we suspect, is not so much in itself an economy that relies on the biological, but how the relation between the 'bio' and the 'economy' is envisioned and enacted. Furthermore, and as a more overriding point, the newness seems related to the normativities that are involved: the bioeconomy steps forward as a new form of good economy-as the 'good economy'. This is also why, we suggest, we need this conceptual innovation as our tool to work with. However, the concept of the good economy is not put forward in order to analyse the bioeconomy exclusively. This conceptual move is done in order to work as a heuristic tool for analysing good economy relations more broadly, and also in other versions of the economy. The economy has always also been about interrelations with the good, and we need approaches and tools to also investigate that which is 'extra-economical', as well as 'the good' that is being performed as integral to the distinct versions of economy at stake. In doing so, we also need to include the 'tools of valuation' that are involved in doing such good economy relations. Hence, we need to continue studying practices and processes of economization and financialization. Yet we also need to expand such endeavours with the study of how the good resides in such practices, which versions of economies are performed by these, how the good economy can also take radically different shapes and be more of an issue about collective concerns, and how the economy can be inserted into society in good ways. Tone Huse Associate professor at TIK and UiT The Arctic University of Norway with a project on the politics and practices in the bioeconomy and the economic life of the Atlantic cod (grant agreement no 637760). Silje R. Morsman PhD candidate at TIK working with responsible research and innovation in the life sciences in the COMPARE project, funded by UiO:Life Science. Tommas Måløy PhD candidate at TIK with a project on publishing and data practices in the life sciences.
14,403.2
2021-09-20T00:00:00.000
[ "Economics", "Environmental Science", "Philosophy" ]
Lithium-Ion Battery Remaining Useful Life Prediction Based on Hybrid Model : Accurate prediction of the remaining useful life (RUL) is a key function for ensuring the safety and stability of lithium-ion batteries. To address capacity regeneration and improve model adaptability under different working conditions, a hybrid RUL prediction model based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and a bi-directional gated recurrent unit (BiGRU) is proposed. CEEMDAN is used to divide the capacity into intrinsic mode functions (IMFs) to reduce the impact of capacity regeneration. In addition, an improved grey wolf optimizer (IGWO) is proposed to maintain the reliability of the BiGRU network. The diversity of the initial population in the GWO algorithm was improved using chaotic tent mapping. An improved control factor and dynamic population weight are adopted to accelerate the convergence speed of the algorithm. Finally, capacity and RUL prediction experiments are conducted to verify the battery prediction performance under different training data and working conditions. The results indicate that the proposed method can achieve an MAE of less than 4% with only 30% of the training set, which is verified using the CALCE and NASA battery data. Introduction Lithium-ion batteries are widely used in new-energy vehicles, communication equipment, and aerospace electronics owing to their fast-charging rate and long life [1]. However, with the accumulation of cycles, the performance of lithium-ion batteries inevitably deteriorates, leading to system failure and possibly even safety accidents [2,3]. Therefore, it is important to efficiently and accurately predict the remaining useful life (RUL) of Li-ion batteries. During the long-term use of lithium-ion batteries, along with the increasing number of charge and discharge cycles, some irreversible chemical reactions occur inside the battery, resulting in increased internal resistance and performance degradation [4][5][6]. Reliable life prediction techniques not only allow for more efficient use of the battery but also reduce the incidence of failure [7]. RUL is the number of charge and discharge cycles from new to end of life (EOL) under certain operating conditions, with a typical 20% capacity degradation upon reaching EOL [8][9][10]. Currently, life prediction methods for lithium-ion batteries are mainly divided into model-based and data-driven methods [11,12]. Although the prediction accuracy of model-based methods is high, it depends significantly on the modeling of the internal physical and chemical properties of the battery [13]. The data-driven method can explore the relationship between the external parameters and the internal state of the battery without building a complex battery model. Babaeiyazdi et al. [14] designed a model with electrochemical impedance spectroscopy (EIS) and Gaussian process regression (GPR) to predict the state of charge. Ren et al. proposed the use of a convolutional neural network (CNN) and long short-term memory (LSTM) to improve the prediction accuracy of data-driven methods with insufficient degradation data [15]. Zhao et al. combined a broad learning system algorithm and LSTM to increase the size of the training data. The results demonstrated that the training data can be reduced to only 25% using a data-driven method [16]. Yao et al.
[17] used particle swarm optimization to optimize (PSO) the parameters of an extreme learning machine (ELM) and effectively predict the RUL of lithium-ion batteries.However, it is difficult to accurately predict battery capacity using a data-driven method owing to the capacity regeneration phenomenon [18,19]. With the aim of tackling the problem of battery capacity regeneration, this study aims to eliminate the influence of regeneration capacity on global prediction.Cheng et al. [20] used Empirical Mode Decomposition (EMD) to decompose the original capacity into a series of intrinsic mode functions from high to low frequencies, thus reducing the impact of capacity regeneration.However, mode aliasing occurs during EMD decomposition, which interferes with the signal decomposition.Chen et al. [21] adopted an integrated empirical mode decomposition to add Gaussian white noise to the original signal to reduce modal mixing.However, EEMD may produce some false components in the process of adding noise to the signal, which affects subsequent signal analysis.CEEMDAN [22] was improved based on EEMD by adding pairs of Gaussian white noise in each decomposition.Averaging was then used to solve the problem of white noise transmission from high frequency to low frequency and to suppress the generation of modal aliasing and false components, which is more suitable for time-frequency analysis of nonlinear and non-stationary signals. The nonlinear and non-stationary characteristics of the battery capacity curve match the application range of the CEEMDAN algorithm [23][24][25].Therefore, CEEMDAN was selected to extract the signal characteristics of battery capacity. The judgment accuracy of a neural network depends excessively on the selection of weights and thresholds, which requires a large amount of training data.The operation is complex, and its stability is insufficient.The optimal threshold can easily be changed [26,27].Ding et al. [28] proposed cuckoo search (CS) optimization to optimize the decomposition layer of variational model decomposition as the input of a gated recurrent unit (GRU).However, the IMF component decomposed by CEEMDAN has different signal frequencies and requirements for universality of neural networks [29].Therefore, the grey wolf optimization (GWO) algorithm was used to optimize the network parameters and improve the adaptability of the network.The GWO algorithm was proposed by Mirjalili [30], based on the observation of predator hunting in nature.The grey wolf optimization algorithm model is simple and realizable, and its optimization performance in many fields is no less than that of other meta-heuristic swarm intelligence algorithms [31][32][33].Owing to the shortcomings of the GWO algorithm in large-scale optimization problems, such as premature convergence, easy local optimization, and low convergence accuracy, many researchers have proposed various solutions from different aspects.Nadimi-Shahraki et al. [34] proposed an improved GWO based on dimension-learning-based hunting (DLH) to maintain the diversity of the wolf population.Long et al. [35] proposed a hybrid algorithm using GWO and CS to balance exploration and exploitation.Zhao et al. [36] used chaos-enhanced GWO for the overall search and the initial population of the wolf can be restricted to a certain range.However, this method only considers the diversity of the initial population and ignores the weight and position relationships of different wolf groups. 
This study proposed a comprehensive RUL prediction method. First, the original capacity of the battery was decomposed using CEEMDAN to eliminate interference noise with low correlation. IGWO was used to optimize the parameters of the neural network. Finally, CALCE and NASA data were tested to verify the effectiveness and stability of this method. The following were achieved: (1) The regeneration of battery capacity was eliminated through CEEMDAN, and the accuracy of the prediction model was improved. (2) The weights of the initial population, control factor, and wolf group in the traditional GWO were improved to enhance the diversity and iteration speed of the population. (3) IGWO was used to improve the parameters of the neural network, such as the number of neurons, the dropout rate, and the batch size. The universality of RUL prediction was improved for IMF components at different frequencies. Battery Remaining Useful Life The remaining life of a battery represents the number of battery cycles from its rated capacity to the end of its life [37]. The formula is as follows: RUL = N_eol − N, where N_eol is the maximum number of battery cycles, and N represents the current number of cycles of the battery. Complete Ensemble Empirical Mode Decomposition with Adaptive Noise CEEMDAN is an improved algorithm based on EEMD and EMD. It has the advantages of a good mode spectrum separation effect, fewer shielding iterations, and low calculation cost, and is often used to process non-stationary and nonlinear signals [38,39]. The specific steps of CEEMDAN are as follows: Step 1: Add Gaussian white noise to the original capacity signal r_0. EMD is used to decompose the new signal to obtain the first IMF component, and the first residual is then calculated. Step 2: Add adaptive white noise to the remaining signal and decompose it with EMD to obtain the second modal component. Step 3: The previous steps are repeated on the remaining signal, yielding the kth residual and the (k+1)th modal component, where n denotes the total number of modal components. The final residual is the difference between the original signal and the sum of all modal components.
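The decomposition step above can be prototyped in a few lines. The sketch below assumes the PyEMD package (installed as EMD-signal) and a one-dimensional NumPy array of per-cycle capacity values; the package choice, the file name, and the array names are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the CEEMDAN step described above (assumed PyEMD backend).
import numpy as np
from PyEMD import CEEMDAN

# Hypothetical input: one capacity value per charge/discharge cycle.
capacity = np.loadtxt("capacity_cs2_35.csv")

ceemdan = CEEMDAN()                        # adds adaptive Gaussian white noise internally
imfs = ceemdan(capacity)                   # rows: IMF_1 ... IMF_n, high to low frequency
residue = capacity - imfs.sum(axis=0)      # long-term degradation trend (Res)

print(f"{imfs.shape[0]} IMFs extracted, residue length {residue.size}")
```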
Bidirectional Gate Recurrent Unit A gated recurrent unit (GRU) regulates the flow of information through a reset gate and an update gate. Its outputs are computed as follows, where r_t is the reset gate, z_t is the update gate, x_t denotes the current input, h_t−1 is the last-moment output, h_t is the current output, σ and tanh are the activation functions, W_r and W_z are weight matrices, "•" is the dot product, and "*" is the matrix product. The classical GRU structure uses one-way propagation along the sequence transfer direction, and time is only related to the past time [40]. However, in some cases, feedback on the future sequence value at a certain time is considered when building the model. This information can be used to modify a model [41]. Therefore, a BiGRU model was constructed, as shown in Figure 1. The basic concepts are as follows. For each training sequence, two GRU models are established in the forward and reverse directions, and the hidden layer nodes of the two models are connected to the same output layer. This data-processing method can provide complete historical and future information for each time point in the input sequence of the output layer [42]. Therefore, the BiGRU network can learn the relationship between past and future load-influencing factors and the current load, which helps extract the characteristics of capacity data. The output is shown in Equations (12)-(14), where →h_t is the output of the forward hidden layer at time t; ←h_t is the output of the reverse hidden layer at time t; w_t and v_t represent the weights corresponding to the forward hidden state →h_t and the reverse hidden state ←h_t of the bidirectional GRU at time t, respectively; and b_t represents the offset corresponding to the hidden layer state at time t. In the forward layer, the forward direction from time step 1 to time step t is calculated, and the output h of each forward hidden layer is obtained and saved in →h_t. In the reverse layer, a reverse calculation is performed from the current time step t to the previous time step t − 1, and the output h of each reverse hidden layer is obtained and saved in ←h_t. Finally, the final output is obtained by combining the output results of the forward and reverse layers.
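For a concrete picture of the network just described, the following is a minimal BiGRU regressor in tf.keras. The two bidirectional layers, 128 units, and 0.15 dropout mirror the defaults quoted later in the paper; the ten-cycle input window is an illustrative assumption.

```python
# Sketch of a two-layer BiGRU capacity regressor (assumed window length of 10 cycles).
import tensorflow as tf

window = 10  # number of past capacity (or IMF) values fed to the network (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    # forward and backward GRUs whose per-step outputs are concatenated
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True)),
    tf.keras.layers.Dropout(0.15),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128)),
    tf.keras.layers.Dense(1),  # predicted capacity or one IMF component
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```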
The Grey Wolf Optimizer Swarm intelligence is a powerful branch of computational intelligence for solving optimization problems. The SI algorithm simulates and imitates natural social behaviors, such as those of fish schools, birds, and other animals [43]. The GWO algorithm is inspired by the hunting behavior of grey wolves and is considered one of the fastest swarm intelligence algorithms. In each group of grey wolves, there is a common social class that represents the power and dominance of each wolf, as shown in Figure 2. In the implementation process of the GWO algorithm, grey wolves attack prey based on fitness values and social levels. The main process of grey wolf hunting is to round up and attack [44]. The hunting technique and social level of grey wolves can be used to design mathematical models as follows: Step 1: Surround the prey. The social class of the grey wolf is modeled, and α is adopted as the best solution. The second and third best solutions are β and δ, respectively. The remaining candidate solutions are considered as ω, where D is the distance between the prey and the wolf pack, A and C are the coefficient vectors, X_p is the position vector of the prey, X is the position vector of the grey wolf, and n is the current iteration number. Vectors A and C can be calculated separately, where r_1 and r_2 are random vectors in [0, 1]. The parameter a is linear, as shown in Equation (19), and decreases from 2 to 0 during the iterations, where I is the maximum number of iterations. Step 2: Attack prey. The grey wolf can identify and surround the prey's position. The hunt is typically guided by α, with β and δ participating occasionally. However, in the virtual search space, the location of the best prey is unknown. To simulate the hunting behavior of grey wolves mathematically, it is assumed that α is the best candidate solution, while β and δ indicate the potential locations of the prey. Therefore, each grey wolf can be updated according to the best locations, as shown in Equations (20)-(22).
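Since the equations themselves are not reproduced in this extraction, the sketch below spells out one iteration of the textbook GWO update that the symbols above (D, A, C, a, X_p) refer to. The fitness function and bounds are placeholders, and only the equation numbers confirmed by the surrounding text are cited in the comments.

```python
# One iteration of the standard GWO position update (textbook form, not the
# paper's exact implementation).
import numpy as np

def gwo_step(wolves, alpha, beta, delta, a):
    """Move every wolf towards the alpha, beta and delta leaders."""
    new_positions = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2.0 * a * r1 - a           # coefficient vector A (cf. Equation (17))
            C = 2.0 * r2                   # coefficient vector C
            D = np.abs(C * leader - x)     # distance to this leader
            candidates.append(leader - A * D)   # candidate position guided by the leader
        # final position: average of the three guided moves (cf. Equations (20)-(22))
        new_positions[i] = np.mean(candidates, axis=0)
    return new_positions
```

In the plain GWO, `a` decreases linearly from 2 to 0 over the iterations (Equation (19)); the improved control of this factor is described in the next section.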
Improved Grey Wolf Optimizer In the GWO, α, β, and δ guide the movement of the ω wolves towards the best direction. This behavior may lead to a locally optimal solution, and the distribution of the initial wolves can also trap the algorithm in a local optimum. To overcome these problems, an improved grey wolf optimizer (IGWO) is proposed in this section. The improvements include a new search strategy associated with the selection and update steps. IGWO includes four stages: initialization, movement, selection, and update. Population Initialization Because the initial grey wolf population determines whether the optimal path can be found, as well as the convergence speed, the diversity of the initial population helps improve the performance of the algorithm in finding the optimal path. The traditional GWO randomly initializes the locations of the wolf group, which primarily affects the search efficiency of the algorithm. The initialized population needs to be distributed as evenly as possible in the initial space to improve the diversity of the initial wolf group and to prevent the algorithm from falling into a local optimum. GWO usually generates the initial population randomly; the more uniformly the initial population is distributed in the search space, the better the optimization efficiency and accuracy of the algorithm. To improve population diversity, tent mapping was used to initialize the population. The tent map has a simple structure, a uniform density distribution, and good ergodicity. The tent mapping is expressed as in Equation (23), and the initial wolf pack is then generated within the specified search range, as shown in Equation (24), where x_t is a random value in the range [0, 1]; U_b and L_b are the upper and lower bounds of the component, respectively; X_ij is the position of the jth wolf in the ith iteration; N is the number of wolves; and D is the search dimension. The improved chaotic mapping of the wolf positions can ensure a uniform distribution of the initial population. Optimize Control Parameters α In Equation (17), the value of A in the traditional GWO algorithm is governed by the control factor a, whose influence decreases linearly from 2 to 0. For such complex nonlinear problems as battery capacity decline, a linear factor cannot effectively balance the local search and the global search. Therefore, this paper proposes a method for nonlinear control of the additional variable A, where i is the current iteration number of the α wolf and I denotes the maximum number of iterations. The improved iteration curve for A is shown in Figure 3.
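A rough sketch of the two IGWO ingredients just described follows. The exact tent-map formulation (Equation (23)) and the nonlinear decay law for the control factor are not reproduced in this extraction, so the concrete forms below (a tent map with parameter slightly below 2, and a cosine-shaped decay from 2 to 0) are assumptions made for illustration only.

```python
# Illustrative tent-map initialization and a nonlinear control-factor schedule.
import numpy as np

def tent_map_init(n_wolves, dim, lb, ub, mu=1.99):
    """Spread the initial population over [lb, ub] using a tent-map sequence.

    mu is kept slightly below 2 so that repeated iteration does not degenerate
    in floating-point arithmetic.
    """
    seq = np.empty(n_wolves * dim)
    x = np.random.uniform(0.01, 0.49)          # avoid the fixed points 0 and 0.5
    for k in range(seq.size):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq[k] = x
    return lb + seq.reshape(n_wolves, dim) * (ub - lb)

def nonlinear_a(i, max_iter):
    """Control factor decreasing nonlinearly from 2 to 0 (assumed cosine form)."""
    return 2.0 * np.cos(np.pi / 2.0 * i / max_iter)
```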
Dynamic Weight The traditional grey wolf algorithm performs a one-way local search, and the direction of the position update cannot be reversed. In this study, a two-way search was introduced, in which a random number controls the search direction. The search formula is as follows, where w is the weight, w_max is the maximum weight, and w_min is the minimum weight. Generally, when the maximum weight is 0.9 and the minimum weight is 0.4, the algorithm maintains its best performance [45]. On this basis, the inertia weight can be introduced into the original GWO formula for updating the grey wolf position vector, where W_α, W_β, and W_δ represent the weights of the wolf population classes. Data Sets To verify the effectiveness and generalization of the method in this study, the NASA and CALCE datasets were used [46,47]. For the CALCE battery dataset, CS2_35, CS2_36, CS2_37, and CS2_38 were selected. The charging and discharging experimental data of the four batteries are used as the input data for the model. All batteries experienced the same charging and discharging modes. At a room temperature of 25 °C, the samples were charged at a constant current of 0.5 C; when the voltage reached 4.2 V, they were charged at a constant voltage, and charging stopped when the charging current dropped to 0.05 A. The discharge process used a 1 C constant current and stopped when the voltage dropped to 2.7 V. NASA battery data were selected from B0005, B0006, B0007, and B0018. At a room temperature of 24 °C, these were charged at a constant current of 1.5 A until the battery voltage reached 4.2 V, and then charged in constant voltage (CV) mode until the charging current dropped to 20 mA. The discharge process was conducted in constant current (CC) mode at 2 A until the voltages of batteries 5, 6, 7, and 18 dropped to 2.7 V, 2.5 V, 2.2 V, and 2.5 V, respectively. Considering 70% of capacity as the end of life (EOL), NASA's battery capacity was reduced from 2 Ah to 1.4 Ah, and the experimental capacity of the CALCE cells was reduced from 1.1 Ah to 0.77 Ah as EOL. The experimental hardware is configured with an Intel Core i5-11320H processor, 16 GB RAM, an NVIDIA 2070 GPU, Windows 11, Python 10.10, and TensorFlow 2.10. First, 50% of the data was used as the training set and the remaining 50% as the test set. After that, 30% of the data was used as the training set and 70% as the test set to inspect the performance of the model.
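Given the thresholds above (for example, 1.4 Ah for the 2 Ah NASA cells and 0.77 Ah for the 1.1 Ah CALCE cells), the EOL cycle and the RUL at any point in the test can be read directly off a capacity curve. A small helper of the following kind, with illustrative names, captures the definition RUL = N_eol − N.

```python
# Reading EOL and RUL from a per-cycle capacity array (illustrative helper).
import numpy as np

def remaining_useful_life(capacity, rated_capacity, eol_fraction, current_cycle):
    """Cycles left before capacity first falls below eol_fraction * rated_capacity."""
    threshold = eol_fraction * rated_capacity
    below = np.where(capacity < threshold)[0]
    if below.size == 0:
        raise ValueError("EOL not reached within the measured cycles")
    n_eol = int(below[0])                   # first cycle at/after end of life
    return max(n_eol - current_cycle, 0)    # RUL = N_eol - N

# e.g. for a NASA cell: remaining_useful_life(cap, 2.0, 0.7, current_cycle=60)
```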
Evaluation Criteria To evaluate the performance of the prediction model, this paper introduces the following evaluation indicators: the mean absolute error (MAE), the absolute correlation coefficient (R 2 ), and the root mean square error (RMSE). MAE can effectively measure the prediction error of the model, and R 2 reflects the fitting effect of the model; the closer it is to 1, the better the fit [48]. RMSE measures the prediction accuracy of the model. In these formulas, n is the total number of cycles, and y and ŷ are the actual and predicted values of the battery capacity during the cycle. In addition, error indices for the battery RUL are introduced: e 1 and e 2 are used to measure the life prediction accuracy, where e 1 and e 2 represent the error and the relative error between the actual and predicted values of the battery RUL, respectively; R_rul represents the actual RUL value, and P_rul represents the predicted RUL value.
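The indicators above have standard forms, and a compact implementation consistent with the descriptions in this section might look as follows; the variable names are illustrative.

```python
# Standard forms of the evaluation indicators described above.
import numpy as np

def capacity_metrics(y_true, y_pred):
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))                        # root mean square error
    mae = np.mean(np.abs(err))                               # mean absolute error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot                               # R^2, closer to 1 is better
    return rmse, mae, r2

def rul_errors(rul_true, rul_pred):
    e1 = abs(rul_true - rul_pred)      # absolute RUL error, in cycles
    e2 = e1 / rul_true                 # relative RUL error
    return e1, e2
```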
Structure of the RUL Prediction Model Combining the advantages and disadvantages of CEEMDAN and the BiGRU, the improved grey wolf algorithm is used to optimize the parameters of the BiGRU network, and an RUL prediction method based on the CEEMDAN-IGWO-BiGRU model is proposed. The model can reduce the probability of local optimization and over-fitting, and the design has good robustness. The framework diagram of the RUL estimation is presented in Figure 4. The specific implementation steps are as follows: Step 1: Collect the battery capacity data and use CEEMDAN to decompose the capacity data into multiple components according to Equations (2)-(7). Divide the training set and test set, and then normalize the data. Step 2: Determine the number of inputs, outputs, and hidden layers of the BiGRU, and set parameters such as the number of IGWO populations and the number of iterations. Step 3: The number of neurons in the input layer of the BiGRU model, the number of training samples, and the forgetting rate are taken as the individual searchers in the IGWO, and the root mean square error between the expected output of the training samples of the BiGRU model and the actual output is taken as the fitness value. Combined with the improved grey wolf optimization algorithm, the positions are continuously updated and the fitness value is calculated until the stopping conditions are met. Step 4: Input the obtained optimal weights and thresholds into the BiGRU model and obtain the IMF prediction results for the capacity after training. Results of CEEMDAN From Figure 5, CEEMDAN can be used to analyze the capacity change trend and process. Except for CS2_36, the data of the other batteries are decomposed into six intrinsic mode function (IMF) components and one residual signal (Res) component using the CEEMDAN method. CS2_36 is decomposed into five IMF components, as shown in Figure 5b. The components are arranged from high to low frequency and can be compared with the original degraded capacity sequence; some modal components with low correlation can be discarded as noise. The correlation between the decomposed variables and the battery capacity was analyzed. As shown in Figure 6, the correlation between the original battery capacity and Res is higher than 90%, and Res can effectively reflect the long-term trend of the capacity decline. In addition, the correlations of IMF1, IMF2, and IMF3 in the CALCE dataset were less than 1%, so they can be removed as noise, and the remaining IMFs and Res were used as prediction objects together. In the NASA dataset, IMF components with a correlation greater than 12% are selected as prediction objects.
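The correlation screening described in this section can be expressed as a short filter over the decomposed components. The 0.12 threshold below mirrors the NASA case quoted above; for the CALCE cells the paper instead discards components whose correlation is below roughly 0.01.

```python
# Pearson-correlation screening of IMF components (illustrative threshold).
import numpy as np

def select_imfs(capacity, imfs, threshold=0.12):
    """Return the indices of IMFs whose correlation with capacity is strong enough."""
    keep = []
    for k, imf in enumerate(imfs):
        r = np.corrcoef(capacity, imf)[0, 1]   # Pearson correlation coefficient
        if abs(r) >= threshold:
            keep.append(k)
    return keep

# After per-component prediction, the retained IMFs and the residue can be
# summed again to reconstruct the capacity curve:
#   capacity_hat = predicted_imfs[keep].sum(axis=0) + predicted_residue
```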
Decomposing each IMF component shows that it is difficult to accurately predict the trend of IMF components with low correlation: the signal fluctuation range is within 0.05 and there is no periodicity. After removing the components with low correlation according to the Pearson correlation coefficient, the short-term regeneration trend of the capacity is eliminated, but the overall prediction accuracy of the model can be significantly improved. Results of IGWO-BiGRU In this study, the IGWO algorithm was used to optimize the parameters of the BiGRU network model. The BiGRU in this model uses a sigmoid activation function and includes two bidirectional hidden layers. The default number of neurons is 128, the dropout rate is 0.15, the batch size is 128, and the number of iterations is 500; the optimizer is Adam. In the IGWO algorithm, the size of the grey wolf population is defined as 30, and the number of iterations is 5. During training, 50% of the data was used as the training set. The final prediction results are shown in Figure 7. Under the default parameters, the LSTM model predicts the short-term capacity data accurately. However, with an increase in the number of cycles, the prediction curve gradually deviates. Taking Figure 7b (CS2_38) as an example, the capacity decline of this battery accelerated after 800 cycles. Although the fitting degree of the BiGRU is better than that of LSTM under the same default parameters, there are also deviations in the long-term capacity prediction. After using IGWO to optimize the initial parameters, the predicted declining trend of the capacity is consistent with the original values, and the CEEMDAN-decomposed and recombined curve fits the original capacity better. Take Figure 7c,d as examples. Owing to the small number of cycles in the NASA dataset, the prediction results of LSTM and the BiGRU deviate laterally from the original capacity, indicating that the prediction method has a certain lag. After using CEEMDAN decomposition to eliminate noise, the reconstructed forecast data were consistent with the actual capacity trend. The combined algorithm can effectively fit the declining trend of the original capacity and avoid premature convergence of the network. NASA has fewer data, and it was necessary to set the EOL to 80% of the capacity. The volume of CALCE data was sufficient, and the EOL was set as 70% of the capacity for comparison between the models. Table 1 lists the evaluation indicators of the prediction results for the eight lithium-ion batteries under different algorithms; the smaller the RMSE and MAE, the higher the prediction accuracy of the model. Table 1 shows that the performance indicators of the combined prediction model CEEMDAN-IGWO-BiGRU are superior to those of the other methods, with high accuracy. In the CALCE dataset, the maximum RMSE was less than 2.6%, and the maximum MAE was controlled within 1.6% in the NASA dataset. Owing to the lack of data, the model training was insufficient. Taking B0018 as an example, R 2 was lower than 73% in all the models. It can be observed from the original data that the sampling time of this battery was relatively discrete, and the number of cycles is 132, which is nearly a quarter less than the sampling data of the other batteries, resulting in poor prediction accuracy of the model. In the prediction results of all batteries, the error e 2 of the combined model was controlled under 6%. This proves that CEEMDAN decomposition and the IGWO algorithm can effectively improve the universality of the model in different application scenarios.
data of other batteries, resulting in poor prediction accuracy of the model. In the prediction results of all batteries, the error e2 of the combined model was controlled under 6%. This shows that CEEMDAN decomposition and the IGWO algorithm can effectively improve the universality of the model in different application scenarios. To predict the capacity decline trend in early life, the remaining useful life of the lithium-ion battery was estimated with the prediction model trained on only 30% of the dataset. The prediction results for the four models are shown in Figure 8. Because the model training was insufficient, the long-term capacity prediction on the CALCE dataset deviated considerably. As shown in Figure 8a, when the number of forecast cycles reached 700, the forecast models deviated from the original capacity curve. However, the CEEMDAN-IGWO-BiGRU model maintained the same declining trend as the original capacity. As shown in Figure 8c, the model proposed in this study still follows the declining capacity trend with 30% training data. It can be seen that the prediction model can still fit the degradation trend of the battery well with less training data, and its stability is better than that of the other algorithms. Table 2 summarizes the prediction performance indicators of all lithium-ion batteries with the 30% training set. It can be observed from Table 2 that the prediction effect of LSTM and the BiGRU is poor owing to the influence of the limited training data. The CALCE dataset exhibits capacity regeneration when reaching EOL, and the amplitude range is large. In CS2_37, the RUL error was significantly greater than those of the other three batteries, although the RMSE of the hybrid method was 4.53%. In addition, CEEMDAN-IGWO-BiGRU maintained a high accuracy rate, with a maximum MAE of less than 4%.
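The EOL thresholds used here (80% of rated capacity for NASA, 70% for CALCE) and the RUL error translate directly into code: the RUL error is the difference between the cycle at which the predicted capacity and the measured capacity first cross the threshold. A minimal sketch under those assumptions follows; function and variable names are illustrative, not taken from the paper.

import numpy as np

def eol_cycle(capacity, rated_capacity, eol_fraction):
    """Return the first cycle index at which capacity drops to the EOL threshold.

    capacity: 1-D array of capacity per cycle; eol_fraction: 0.8 (NASA) or 0.7 (CALCE) here.
    """
    threshold = eol_fraction * rated_capacity
    below = np.where(capacity <= threshold)[0]
    return int(below[0]) if below.size else None   # None: EOL never reached in the data

def rul_error(true_cap, pred_cap, rated_capacity, eol_fraction, start_cycle):
    """Absolute and relative RUL error, counting remaining cycles from start_cycle."""
    t_eol = eol_cycle(true_cap, rated_capacity, eol_fraction)
    p_eol = eol_cycle(pred_cap, rated_capacity, eol_fraction)
    if t_eol is None or p_eol is None:
        return None
    true_rul = t_eol - start_cycle
    pred_rul = p_eol - start_cycle
    abs_err = abs(true_rul - pred_rul)
    return abs_err, abs_err / true_rul             # relative error with respect to the true RUL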
In the NASA battery set, the MAE for B0005 and B0007 was up to 20%, whereas the MAE of CEEMDAN-IGWO-BiGRU could still be kept below 3.5%. Owing to the small size of the datasets, the relative error of the RUL could be controlled within 3.5%, showing strong robustness. The IGWO significantly improves the stability of the network prediction. The combined prediction algorithm fully trained on the data, improved the prediction accuracy, and reduced the strong dependence on the amount of data. In addition, for CS2_38 and B0018, CEEMDAN provided only a limited improvement in model accuracy. The main reason is that some IMF components with weak correlations were removed; although the predicted capacity curve maintains the global declining trend after noise elimination, it cannot predict the short-term, high-frequency capacity-regeneration changes.

Conclusions

In this study, a CEEMDAN-IGWO-BiGRU model was proposed to predict the capacity and RUL of lithium batteries. The following conclusions were drawn from this work. The CEEMDAN algorithm effectively solves the mode-aliasing problem in the decomposition of the raw data and reduces the error caused by data instability during prediction. The CEEMDAN-BiGRU model, which uses the IMF components as the model input and capacity values as the output, is suitable for RUL prediction under various working conditions. The GWO algorithm was improved by nonlinear control of the variable A; additionally, tent mapping improved the global search ability and convergence speed of the GWO algorithm. The improved GWO algorithm was used to optimize the weights and thresholds of the BiGRU prediction model to obtain the best parameters. Through simulation and comparison with LSTM and the BiGRU, the improved GWO algorithm can further explore the relationship between the IMF components of capacity and the RUL. Both the MAE and RMSE were less than 6.3% when only 30% of the dataset was used. Moreover, the accuracy and stability of the RUL prediction of the battery were improved using the hybrid model. The relative error of the RUL was less than 7.13%.

Figure 4. The framework diagram of the RUL estimation.

Table 1. Lithium battery life prediction error under 50% training set. Table 2.
Lithium battery life prediction error under 30% training set.
9,387
2023-04-06T00:00:00.000
[ "Computer Science" ]
Computational cloning of drug target genes of a parasitic nematode, Oesophagostomum dentatum Background Gene identification and sequence determination are critical requirements for many biological, genomic, and bioinformatic studies. With the advent of next generation sequencing (NGS) technologies, such determinations are predominantly accomplished in silico for organisms for which the genome is known or for which there exists substantial gene sequence information. Without detailed genomic/gene information, in silico sequence determination is not straightforward, and full coding sequence determination typically involves time- and labor-intensive PCR-based amplification and cloning methods. Results An improved method was developed with which to determine full length gene coding sequences in silico using de novo assembly of RNA-Seq data. The scheme improves upon initial contigs through contig-to-gene identification by BLAST nearest–neighbor comparison, and through single-contig refinement by iterative-binning and -assembly of reads. Application of the iterative method produced the gene identification and full coding sequence for 9 of 12 genes and improved the sequence of 3 of the 12 genes targeted by benzimidazole, macrocyclic lactone, and nicotinic agonist classes of anthelminthic drugs in the swine nodular parasite Oesophagostomum dentatum. The approach improved upon the initial optimized assembly with Velvet that only identified full coding sequences for 2 genes. Conclusions Our reiterative methodology represents a simplified pipeline with which to determine longer gene sequences in silico from next generation sequence data for any nematode for which detailed genetic/gene information is lacking. The method significantly improved upon an initial Velvet assembly of RNA-Seq data that yielded only 2 full length sequences. The identified coding sequences for the 11 target genes enables further future examinations including: (i) the use of recombinant target protein in functional assays seeking a better understanding of the mechanism of drug resistance, and (ii) seeking comparative genomic and transcriptomic assessments between parasite isolates that exhibit varied drug sensitivities. Background Helminth infection of the gut of humans and domestic animals is a global concern with tremendous social and economic costs [1]. Treatment of either humans or animals may involve administering anthelminthic drugs, typically from 1 of 3 main drug classes: the benzimidazoles (which selectively bind nematode beta-tubulin), macrocyclic lactones (which allosterically activate glutamate-gated chloride channels present in nematodes), or nicotinic agonists (which selectively activate subtypes of nematode nicotinic acetylcholine receptors; nAChRs) [2,3]. Not yet in widespread use are two more recently developed drugs monepantel [4] and derquantel [5] that represent aminoacetonitrile and spiroindole drug classes, respectively; both of these compounds have sites of action on subtypes of nematode nicotinic receptors other than the nematode levamisole receptor subtype [3] and have been promoted as 'resistance-busting'. Of particular concern is that resistance has developed to members of each of the typically used drug classes [6,7]. 
Studies of anthelminthic drugs, many performed in the non-parasitic nematode Caenorhabditis elegans, have identified a number of genes/ proteins that are drug targets of benzimidazoles (ben-1), macrocyclic lactones (avr-14, avr-15, glc-1, glc-2, glc-3, glc-4), and the nicotinic agonist levamisole (lev-1, lev-8, unc-29, unc-38, unc-63) (reviewed in [7,8]). From studies like these it appears that the molecular basis of susceptibility to macrocyclic lactones and levamisole is polygenic. Although studies in C. elegans have provided important insights about drug-targets and drug-sensitivity, they are free-living bacteriovores and not parasitic worms. Therefore, there is great interest in extending drug sensitivity studies to the parasitic nematodes. One major hindrance to such efforts has been the lack of genome/ gene information that is available for the parasitic nematodes. Consequently, studies requiring sequence determination typically include the time-and laborintensive steps of: (i) identifying the target gene from among known genes of genetically close organisms, (ii) aligning target gene sequences to determine regions of high similarity, (iii) designing DNA primers to those regions, and finally, (iv) amplifying the target using PCR and a single pair of primers, cloning and sequencing the cloned product. A modification of this approach, one having the potential to reduce the time and expense required to identify multiple gene sequences, is to use RNA-Seq data to build the target nematode gene sequences in silico, i.e., by computational methods. The next-generation sequencing (NGS) technique of RNA-Seq economically produces large amounts of sequence data, albeit comprised of very short (50-150 base) reads. When applied to an organism having a known genome, RNA-Seq sequence data can be computationally analyzed using software packages such as the commonly used Velvet package [9] to produce entire gene sequences in silico. Unfortunately, when applied to organisms for which little genome information is known, the output is more typically comprised of short contigs and contigs of lower quality [10]. An example of a parasitic nematode lacking a described genome is Oesophagostomum dentatum, a soiltransmitted helminth (STH) with a classic fecal-oral transmission route that causes minor nodules in the large intestine of swine and that is used as a parasite model for STH [11,12]. This report describes a reiterative method that builds upon assembly using the existing Velvet analysis software to allow greatly improved in silico determination of gene sequences from O. dentatum. The method was applied towards a determination of the sequence of 12 O. dentatum genes that, predominantly in C. elegans, have been identified as targets of the major classes of anthelminthic drugs (benzimidazoles, macrocyclic lactones, and nicotinic agonists). Establishing these sequences in O. dentatum facilitates downstream physiological and molecular studies. Generation of mRNA-seq read libraries Separate mRNA-seq libraries were constructed from 5 μg of high quality total RNA (RNA Integrity Number ≥ 7.3) isolated from 58 adult male and 141 adult female worms. Given the absence of a sequenced genome of O. dentatum for use as a scaffold during subsequent assembly and analysis, paired-end sequencing was performed to facilitate contig-building steps. Similar numbers of 75-cycle paired reads were obtained for both libraries, (2.144 × 10 7 for male, 2.159 × 10 7 for female) for a combined total of more than 40 million reads. 
The lack of information about the exome size and complexity of O. dentatum, together with the fact that the RNA-seq libraries were not normalized, precludes estimation of the depth of coverage of the libraries; however, in broad approximation, the 3,100 × 10^6 DNA base calls within the 40 × 10^6 reads represent considerable coverage of mRNAs transcribed from the estimated 53-59 Mb genome [13]. Reads were trimmed of read tags (added during library building to allow multiplexing during the sequencing run) and deposited at the NCBI sequence read archive: male library [GenBank:SRR393668] and female library [GenBank:SRR393669]. Velvet assembly The NGS assembler Velvet, executed in paired-end mode, was used to assemble the 40 million reads from the combined male and female libraries into contigs (Figure 1, Step 1). Figure 1 outlines the workflow: (1) individual data sets were assembled into contigs using Velvet; (2) BLAST searches for genes of the nAChR pathway were carried out with a high cutoff (expect value = 1E-10) to identify contigs highly similar to the target genes; (3) reads were individually mapped (using Bowtie) to high-similarity contigs; (4) all paired reads for which at least one read mapped to a contig (in Step 3) were identified and binned using a custom Java program; (5) de novo assembly of the Step-4 sequences was performed using Velvet; and (6) Steps 3-5 were iterated until an iteration resulted in no additional reads being mapped to the contig of interest. Combining the male and female libraries maximized the number of reads that were identified for each target sequence while maintaining compliance with NCBI guidelines requiring that submitted cDNA sequences be derived from a single strain. The paired-end mode was used for optimal de novo assembly, given the lack of a reference genome. Hashing, the early Velvet step in which individual reads are indexed into overlapping sequences (k-mers) of some length "k", has a significant impact on the length and quality of the contigs built during an assembly, with longer k-mers generally corresponding to greater specificity (i.e., that a detected alignment is actually correct) but lower sensitivity to detect alignments [9]. Because the optimal k-mer length cannot be known a priori, multiple assemblies were performed using k-mers ranging from length 17 to 49. The 12 genes of C. elegans shown to produce anthelminthic targets, for which the homologous genes and sequences of O. dentatum were sought, are shown in Table 1; the drug classes for which they are targets are shown in Table 2. To identify contigs exhibiting high similarity to the C. elegans target sequences, separate BLAST databases, each comprised of the set of contigs produced by a single k-mer assembly, were built, and each database was then queried using tblastn and the protein sequences of the C. elegans target genes (Figure 1, Step 2). A stringent E-value threshold of 1 × 10^-10 was used within the tblastn search to limit false-positive identifications; the E-value corresponds to the number of times a match of the same quality would be found by chance within the database. A custom Java script (Additional file 1) was used to collect those contigs whose corresponding high scoring pair (HSP) E-values were ≤ 1 × 10^-10 and, for further stringency, to retain only those contigs whose region of alignment with the target protein exhibited ≥ 60% identity across ≥ 50% of the contig length. Some contigs that passed these filters exhibited high similarity to more than one target gene (data not shown), as would be expected given that some of the target genes are paralogous to one another.
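The stringency filter described above (HSP E-value ≤ 1 × 10^-10, plus ≥ 60% identity across ≥ 50% of the contig length) was implemented by the authors as a custom Java script; the sketch below reproduces equivalent filtering logic in Python for tabular BLAST output, assuming the standard -outfmt 6 column order, and is an illustration rather than the original code.

import csv

def filter_hsps(blast_tsv, contig_lengths, max_evalue=1e-10,
                min_identity=60.0, min_coverage=0.5):
    """Keep contig-to-target assignments that pass the stringency filters.

    blast_tsv: tblastn output in tabular format (-outfmt 6), columns assumed to be
    qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore.
    contig_lengths: dict mapping contig id to contig length in nucleotides.
    """
    kept = {}
    with open(blast_tsv) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            target, contig = row[0], row[1]
            pident, evalue = float(row[2]), float(row[10])
            sstart, send = int(row[8]), int(row[9])
            span = abs(send - sstart) + 1                  # aligned span on the contig (nt)
            coverage = span / contig_lengths[contig]
            if evalue <= max_evalue and pident >= min_identity and coverage >= min_coverage:
                # keep the best (lowest E-value) target assignment seen for each contig
                if contig not in kept or evalue < kept[contig][1]:
                    kept[contig] = (target, evalue)
    return kept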
To unambiguously assign the filtered contigs to single target genes, contigs were queried (using blastx) against the C. elegans genome database and then gene identity was assigned based upon the match exhibiting the lowest E-value. For each gene of interest, Table 1 indicates C. elegans target gene information including coding sequence length, and summarizes the BLAST results profile for each k-mer assembly from k-value 21 to 47; profiles for k-values of 17, 19, and 49 were unremarkable and are not shown. For each k-mer evaluated and each target sequence, the table indicates the number of HSPs identified, their mean length, and their range from shortest to longest. A best/longest HSP was identified for each target contig (as is indicated by bold font for R value), and no contig yielded more than 1 HSP for a target gene (data not shown). Relative to the length of the target gene coding sequences, the length of the corresponding longest HSP was quite short for 9 genes, and was near to completely full length for 3 other sequences (lev-1, unc-38, and glc-4). Table 1 demonstrates the impact that k-mer can have on the quality of assembly; for a given length k-mer, the quality of the assembly varied greatly, with (i) some k-mers producing many but short HSPs, (ii) other k-mers yielding few, but longer HSPs, and (iii) no single k-mer yielding the longest HSP (contig) for all target genes. Also as shown in Table 1, for some targets only a single k-mer produced the best HSP (e.g. glc-3 at 39-mer), whereas for other targets a range of k-mers yielded equivalent best HSPs (e.g. unc-38 at k-mers 29-45). Iterative-binning and iterative-assembly to optimize sequence determination To improve the overall quality of the contigs (i.e., increase contig lengths by extension and by gap filling) additional computational steps of assembly were developed ( Figure 1, steps 3-6). In outline, this involved identifying and binning all RNA-Seq library read-pairs for which at least 1 read matched a single contig for each target gene, and then using Velvet to reassemble those binned reads into contigs; this process was repeated until the output contigs exhibited no relative improvement. In detail, high-identity contigs that were identified in the initial Velvet assembly (at k-mer length 31 for all gene targets excepting lev-8, glc-1, and glc-3,which used k-mer lengths of 23, 25, and 39 respectively) were used in Step 3 ( Figure 1) to identify and bin all library reads that mapped to those contigs; this process utilized the mapping program Bowtie set to increase the sensitivity of read identification by using a low quality threshold value (of 150) and by running it in unpaired-read mode. The unpaired-read mapping mode allowed inclusion of those reads that did not contribute to the prior contig (Table 1) for reason their pair read failed to map to that prior contig. Consequently, the reads binned in Step 3 included the paired reads along with a number of single reads for which their pair did not map; a custom Javascript collected into a single bin those reads that mapped as well as reads that did not map but whose read-pair did map (Figure 1, Step 4: Additional file 2). 
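The binning rule of Steps 3-4, namely keep every read pair for which at least one mate maps to the contig of interest, can be sketched as follows. The read-naming convention (/1 and /2 mate suffixes) and the data structures are assumptions made for the illustration, not a description of the authors' Java program.

def bin_read_pairs(mapped_read_names, library_index):
    """Collect all reads whose own name, or whose mate's name, mapped to the contig.

    mapped_read_names: set of read names reported by Bowtie for one contig.
    library_index: dict mapping read name to sequence for the whole paired library,
                   with mates named like "read123/1" and "read123/2".
    """
    def mate_of(name):
        stem, end = name.rsplit("/", 1)
        return f"{stem}/2" if end == "1" else f"{stem}/1"

    binned = {}
    for name in mapped_read_names:
        for member in (name, mate_of(name)):
            if member in library_index:
                binned[member] = library_index[member]
    return binned   # these sequences are fed back into Velvet for reassembly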
The Velvet program was then used to assemble contigs from the collected paired-end reads combined from both libraries (Figure 1, Step 5) using multiple length k-mers and coverage cutoffs to identify assembly conditions that produced a maximum contig length; this step involved minimal computational time given the small number of reads (< 2 * 10 5 ) present among all bins. Bowtie mapping, read set collection and assembly were reiterated ( Figure 1, Step 6) until a maximum contig length and maximum coverage (relative to the target genes) were achieved. C. elegans target genes (name (ID), GenBank accession number (Acc #) and coding sequence length (CDS len)) used to BLASTx-query a database comprised of the initial de novo library assembly. For each k-mer (e.g. "21 mer HSPs") are listed in columns the number of high scoring pairs identified (HSP; "#"), the mean HSP length in DNA bases (" X "), and the range of HSP-lengths ("R") with minimum and maximum length shown. Bold HSP-length values indicates the longest HSP identified among all k-mers for a given target gene. "nd" indicate no high-similarity HSPs were identified at that k-mer. The dramatic effectiveness of the reiterative method is evidenced by comparison of contigs and corresponding best-HSPs of the final (iterative-derived) contigs to the initial contigs and to the C. elegans target sequence, as is shown in Table 2. Whereas the initial Velvet assembly yielded 3 sequences corresponding to nearly 100% of the target gene coding sequences (lev-1, unc-38, glc-4), reiteration yielded the full coding sequence of an additional 7 target sequences including a second isotype of ben-1, and yielded significant improvement to all other sequences excepting that of lev-8. Interestingly, whereas the initial assembly yielded contigs that best-mapped by BLAST analysis to glc-1 (Table 1), reiteration yielded an extended contig unambiguously identified by BLAST as the paralogous target gene avr-15. A closer examination of the initial contigs that were identified as glc-1 revealed that their region of similarity to glc-1 was quite small and exhibited almostas-high BLAST similarity to the paralog avr-15. Consequently, reads within the libraries provide evidence for transcripts for 2 avr-15 variants and no glc-1. The quality/accuracy of the in silico derived sequences can be inferred from the shared identity to target genes of C. elegans (Table 2, column "% ID"). Additionally, a direct indication of in silico sequence quality for 2 of the target genes (unc-38 and -63) is possible because their sequence in O. dentatum had previously been determined (from PCR-amplicons). BLAST nucleotide comparison of the previously determined O. dentatum unc-63 mRNA sequence [GenBank:HQ162136] to the corresponding 1770 nucleotide in silico sequence ( Table 2, GACS01000005) identified a single alignment comprised of 1768 bases (including 30 non-identical bases) and no gaps. Of the non-identical bases, 20 were located within the coding sequence but only 1 of the 20 resulted in a change in the deduced protein sequence, a G 1411 A/ R 433 Q (numbering relative to HQ162136 DNA and corresponding protein sequences). Within the RNAseq library reads this base call was invariant, i.e. of the 5 reads that mapped over the G 1411 A position, all were unique and all contained the "A" base call. A similar comparison of the 1681 base O. 
dentatum unc-38 mRNA sequence [GenBank:GU256648.1] to the corresponding 1631 nucleotide in silico sequence (Table 2, GACS01000004) identified a single alignment comprised of 1607 bases and no gaps, which, when the comparison was limited to the coding sequence, exhibited 3 base changes (i.e., 1521 of the 1524 coding sequence bases were identical). All 3 base changes result in amino acid changes: A179G/Y35C, T434C/Y120H, and T1103C/F343S (numbering relative to GACS01000004 DNA and corresponding protein sequences). The other differences determined by alignment of the full sequences were that the GACS01000004 3' UTR was shorter by 43 bases, and that the first 25 bases of the 75 base GACS01000004 5' UTR were not contained within the 82 base GU256648.1 5' UTR (suggesting the in silico sequence may represent a 5' splice variant).

Table 2. Best initial high scoring pair (HSP) and contig lengths correspond to the longest HSP from the initial read assemblies (see bold values from Table 1) and the contig from which that HSP derives. "% full length" represents the percent of the comparator C. elegans protein that is represented within the final contig. "% ID" represents the percent identity as determined by pairwise alignment.

To validate these differences within the 5' UTR and the coding sequence of GACS01000004, reverse transcription polymerase chain reaction (RT-PCR) with a 5' SL1 splice leader forward primer (see Methods) and an unc-38-specific internal reverse primer was used to amplify an approximately 1600 base RNA segment (spliced leaders represent a set of invariant nucleotide sequences that are post-transcriptionally added to the 5' end of many nematode mRNAs). The sequence of individual PCR clones fully recapitulated the GACS01000004 5' UTR sequence as well as the 3 base changes within the coding sequence. Further evidence for the validity of the in silico derived sequences was shown by RT-PCR amplification of ben-1 and avr-15 from RNA template using gene-specific primers designed from the in silico sequences (Table 2, GACS01000008 and GACS01000007, respectively). The sequences of 9 ben-1 PCR clones compared against that of GACS01000008 showed a single 1349 base no-gap alignment with 1307 identities and 42 variant base calls, of which 6 resulted in a change in the deduced protein sequence: G165A/V49I, G512A/M164I, A646G/D209G, T840C/S274P, G996A/V326M, C1111T/A364V (numbering relative to GACS01000008 DNA and corresponding protein sequences). Interestingly, at each of the 42 sites of base variance, the majority of clone sequences called for a base identical to that of the in silico GACS01000008 sequence. In addition, at 21 of the 42 sites of base variance, only a single PCR clone contained the variant base call. Thus the 42 base variants likely represent single nucleotide polymorphisms present within the population of ben-1 RNAs examined. In the equivalent analysis of avr-15, the sequences of 2 PCR clones compared against that of GACS01000007 showed a single 1363 base no-gap alignment with 1320 identities and 43 variant base calls, of which 6 resulted in a change in the deduced protein sequence: A1089G/K345R, C1103T/H350Y, C1173T/V373A, T1185C/V377A, A1284G/K410R, C1292G/L413V (numbering relative to GACS01000007 DNA and corresponding protein sequences).
Among the 2 PCR clones, one contained only the V 373 A deduced change and otherwise was identical in translation to the protein deduced from GACS01000007, and the other clone contained all 6 deduced amino acid changes. Discussion The data shown in Results demonstrate the efficient and successful use of an iterative de novo assembly of RNA-Seq data to determine in silico the sequence of 12 O. dentatum anthelminthic drug target genes. Selection of these particular genes was based upon their general importance to a range of studies investigating helminth drug targets (reviewed in [7,8]). The iterative assembly produced full length coding sequences for 9 target genes, whereas the Velvet assembly yielded full length (or nearly full-length) sequences for only a 3-gene subset of those 9 genes. A major utility of this process is that, as a computational process, it is scalable and should fit well to a variety of gene-characterization situations in O. dentatum or in any other nematode lacking a known genome sequence. Related computational processes have been described with utility for producing output other than full length coding sequences. In one method, remapping of reads by identity to contigs within an initial assembly, and then reassembling contigs from those remapped reads, improved transcriptome assembly [14]; the desired output from that work was production of a general gene ontology. In another method, genome assemblies were variably improved via gap closure achieved by mapping paired-end reads and collecting pairs for which only one of the ends aligned to a contig [15]. As shown in Table 1, a number of k-mer lengths were used for initial library assemblies, demonstrating the dramatic effect of k-mer length on contigs. That said, as noted in Results, k-mer length 31 was used to build the contigs used in all target-gene resampling excepting that for lev-8 (for which k-mer 31 returned no contigs; see Table 1) and glc-3 (for which k-mer 31 contigs failed to support generation of resampled contigs representing the full target sequence). This suggests there is little need to use the best or longest contig as input to the resampling process. As a further test of this concept we were able to successfully build avr-15 from a single approximately 300 base initial contig. Thus, there seems to be a reduced need to conduct full library assemblies over a wide range of k-mers when attempting to derive in silico the sequences for a set of target genes; instead, one stops when a limited set of k-mers that have been run produce a quality contig for each target. The reiterative approach presented here may have utility on a larger scale to facilitate transcriptome projects in nematodes and other organisms that lack genomic information but for which more complete gene-transcript sequences are desired. A dynamic programming approach will likely be required in such extended application to accommodate the conditional filters used during reiterative binning and assembly, and to accommodate a broader range of possible contig-to-target sequence similarity scores (i.e., blast E-values). 
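The variant notation used in the Results above (e.g., G165A/V49I: a G-to-A substitution at a given nucleotide position producing a Val-to-Ile change at the corresponding residue) can be derived mechanically from a coding sequence. The sketch below shows that bookkeeping, assuming Biopython is available and that positions are counted from the start codon (the Results number positions relative to the full deposited sequences); the toy sequence is a placeholder, not O. dentatum data.

from Bio.Seq import Seq

def describe_variant(cds, position, new_base):
    """Report a nucleotide substitution and its deduced amino-acid effect.

    cds: coding sequence as a string, 1-based numbering starting at the ATG.
    position: 1-based nucleotide position of the substitution; new_base: the variant base.
    """
    old_base = cds[position - 1]
    mutated = cds[:position - 1] + new_base + cds[position:]
    codon_index = (position - 1) // 3                     # 0-based codon containing the change
    old_aa = str(Seq(cds).translate())[codon_index]
    new_aa = str(Seq(mutated).translate())[codon_index]
    nt_change = f"{old_base}{position}{new_base}"
    aa_change = f"{old_aa}{codon_index + 1}{new_aa}" if old_aa != new_aa else "synonymous"
    return nt_change, aa_change

# Placeholder example: a T-to-A change at nucleotide 8 of a toy CDS converts Val3 to Asp.
print(describe_variant("ATGAAAGTTTGC", 8, "A"))           # ('T8A', 'V3D')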
We note that for a gene family represented by many members within a single organism there can arise ambiguities in contig identification, something that was seen in the present study for some initially assembled nAChR subunit contigs (data not shown); that such ambiguities were not observed for any reiteratively assembled contigs logically suggests that the longer the contig sequence-lengths, the less the chance that ambiguities of identity assignment will occur. Of interest to nematologists is the identification of a lev-8-like sequence in O. dentatum, since it has also been found in C. elegans [16] but not in H. contortus [17], a Clade V nematode that is considered more closely related to O. dentatum than is C. elegans [18]. This identification is validated by its 85% amino acid identity ( Table 2) and 94% similarity to C. elegans lev-8; by comparison, it exhibited only 67% identity and 82% similarity to the highest BLAST-identified matching gene/protein in H. contortus, acr-8 [GenBank:ABV68891]. These data suggest a loss of lev-8 in H. contortus but not O. dentatum. Because data reported here identified only a partial in silico sequence for lev-8 (<300 bp), it is uncertain whether O. dentatum contains a full length (functional) lev-8 or only a vestigial (partial) lev-8 sequence; if the latter, then O. dentatum may be close behind H. contortus in losing the gene; if the former, then given that the lev-8 nAChR subunit has been shown in C. elegans to confer sensitivity to levamisole [19], one might predict a difference in levamisole sensitivity as a function of the presence, or absence of lev-8. While differences in the nAChR properties and drug sensitivities of closely related nematode species have been observed, the genetic basis for these differences are unknown. Conclusions The reiterative approach presented here was effective in determining in silico longer sequence reads for 11 genes of a 12-gene set of drug target genes of O. dentatum, a nematode for which exists very little genomic or gene information; an initial Velvet assembly that yielded 3 full/nearly-full length sequences can be improved by reiteration to yield full coding sequences for 9 (or 10, including ben-1 isotype 2) target genes and improved sequences for the remaining genes. The reiterative approach is expected to have general application for the in silico gene identification/sequencing of any nematode for which detailed genetic/gene information is lacking. The identification of full coding sequences for the target genes enables further examinations including studies like (i) seeking to reconstitute functional proteins/systems for assessment in vitro (similar to [17]), (ii) seeking comparative genomic and transcriptomic assessments between parasites isolates that exhibit varied drug sensitivities; such studies are ongoing in our labs and those of collaborators. RNA isolation Parasite samples resuspended in 1.0 ml TRI reagent (Molecular Research Center, Cincinnati, Ohio) were ground by mortar and pestle under liquid nitrogen then brought to a total volume of 2 to 3 ml TRI reagent. Total RNA was extracted from the TRI reagent according to the manufacturer's instructions, including an additional centrifugation step for clearing insoluble material. Extracted RNA was treated with DNase I (New England BioLabs, Ipswich, Massachusetts) (10 min at 37°C, 10 min at 75°C), then re-extracted with TRI reagent and resuspended in diethylpyrocarbonate-treated water. 
RNA concentration, purity, and quality (RNA Integrity Number) were assessed on a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, California). mRNA-Seq The building of indexed, non-normalized, paired-end mRNA-seq libraries, and subsequent 75-cycle pyrosequencing on an Illumina GAIIx platform, were performed as a service by the DNA Facility (Office of Biotechnology, Iowa State University) using 5 μg total RNA (per sample). Male and female libraries were duplexed in a single sequencing lane. Genomics and bioinformatics Assembly Velvet version 1.1.06 [9] was used for contig assembly. Similarity searching BLAST algorithms [21] were used to compare contigs with sequences available in public databases including the National Center for Biotechnology Information (NCBI) to identify homologues from other nematodes, i.e., sequences returning BLAST expect values ≤ 1E -10 . Read mapping 64 bit Bowtie [22] version 0.12.7 was used to map reads for contig building. Pairwise comparison The Needle algorithm [23] was used for pairwise comparison. Custom codes Java 1: Java-script code to read BLAST output and collect contigs that pass identity thresholds from a contig
6,151.6
2013-06-18T00:00:00.000
[ "Biology", "Computer Science" ]
The Einstein–Podolsky–Rosen Steering and Its Certification The Einstein–Podolsky–Rosen (EPR) steering is a subtle intermediate correlation between entanglement and Bell nonlocality. It not only theoretically completes the whole picture of non-local effects but also practically inspires novel quantum protocols in specific scenarios. However, a verification of EPR steering is still challenging due to difficulties in bounding unsteerable correlations. In this survey, the basic framework to study the bipartite EPR steering is discussed, and general techniques to certify EPR steering correlations are reviewed. Introduction The Einstein-Podolsky-Rosen (EPR) steering [1] depicts one of the most striking features in quantum mechanics: With local measurements, one can steer or prepare a certain state on a remote physical system without even accessing it [2,3]. This feature challenges one's intuition in a way that the set of prepared states in the EPR steering fashion cannot be produced by any local operations. Therefore, a genuine nonlocal phenomenon happens in this procedure. Whilst EPR steering requires entanglement as the basic resource to complete the remote state preparation task, the correlation implied by EPR steering is not always enough to violate any Bell inequality. In this sense, EPR steering can be seen as a subtle quantum correlation or quantum resource in between entanglement and nonlocality. The discussion of EPR steering dated back to the emergence of quantum theory, when Einstein, Podolsky, and Rosen questioned the completeness of quantum theory in their famous 1935's paper [4]. According to their argument on local realism, quantum theory allows a curious phenomenon: the so-called "spooky action at a distance". In the next year 1936, Schrödinger firstly introduced the terminology "entanglement" and "steering" to describe such quantum "spooky action". Debates on whether quantum theory is complete and how to understand quantum entanglement lasted for the following 20 years and were finally concluded by Bohm [5] and Bell [6,7]. The celebrated Bell inequality [8] was provided in 1955 as a practical verification of such "spooky action" or equivalent "non-locality". Noteworthily, the experimental tests of nonlocality without loopholes due to the real devices have been only carried out in recent years [9][10][11][12]. Strictly speaking, Bell inequalities test nonlocal correlations of general physical theories, not necessarily the quantum theory [8]. This can be understood by that Bell inequalities are functions of general probabilities and are independent of how to realize such probabilities. Thus, it is still a question fashions. This survey is organized as follows. In Section 2, the basic notations and the box framework combined with trust/untrust scenarios will be introduced. After a brief discussion of entanglement and nonlocality in such a framework, EPR steering as well as other equivalent descriptions will be introduced in Section 3. In Section 4, the systematic method to formulate the criteria for certifying EPR steering will be discussed. Two types of criteria, (a) linear EPR steering inequality and (b) criterion based on uncertainty relations, will be studied in detail. Their performances on some typical states will also be given. Finally, a summary will be given in Section 5. 
Preliminaries and Notations In this paper, we will focus on the bipartite correlation P (ab|xy) with input parameters x, y and output parameters a, b and discuss, under certain assumptions, whether the correlation can be certified as EPR steerable. Before the discussion, we firstly introduce the basic terminology and the notations that will be used throughout the paper. The Box Framework A typical experiment of testing a bipartite correlation can be described by the box framework, as shown in Figure 1. Suppose two parties, Alice and Bob, are in their closed labs to do the experiment. The lab is sketched as the doted rectangle, inside which there is an experimental device sketched as the solid rectangle. In each run of the experiment, Alice and Bob are distributed with a bipartite state W from a source, which may be unknown. In their own labs, combined with the subsystem they received, Alice and Bob can input x and y to the device and obtain outputs a and b, respectively. Such a run is repeated enough times so that, after the experiment, Alice and Bob can obtain the correlation P (ab|xy) by announcing their input and output results. Depending on different descriptions and mechanics of the source and device, the correlation may have different structures and properties. The aim of the box framework is then to characterize the dependence of the correlation on descriptions of sources and devices. In general, there is no restrictions on the source, inputs, and outputs. For instance, the source W; inputs x, y; and outputs a, b can all be quantum states, with the devices being quantum instruments. In this case, the box framework characterizes general local quantum operations on bipartite quantum states. In this paper, we will restrict the device to be the typical measurement device in labs. That is, the inputs x, y represent different measurement settings on the received subsystem and the outputs a, b represent different outcomes. Physically, x, y, a, b can be described by natural numbers 0, 1, 2, . . . and corresponding sets are denoted as X , Y, A, B, respectively. In the scenario of steering and nonlocality, there are some common assumptions. The No-Signaling Principle Roughly speaking, the no-signaling principle describes that Alice and Bob cannot communicate with each other during the test [55,56]. In the above box framework, this principle guarantees the independence between Alice and Bob such that the correlation P (ab|xy) is faithfully generated by the state W and measurements but not any other statistics shared before or during the test. Mathematically, the no-signaling principle has the following form, ∑ a P (ab|xy; W) = P (b|xy; W) = P (b|y; W) , ∀x ∈ X , Therefore, the no-signaling principle denies the possibility that Alice and Bob can guess each other's measurement setting y or x based on their local statistics P (a|x; W) or P (b|y; W), respectively. Experimentally, this principle is guaranteed by Alice and Bob being separated far away (space-like separation) and by both of them choosing measurement settings independently and randomly. The no-signaling principle is then guaranteed by two hypotheses. Firstly, two parties in the space-like separation cannot communicate with each other. Secondly, the random number generators [57] in Alice's and Bob's labs should be truly independent and random. In the test of nonlocality and EPR steering, we suppose that the no-signaling principle has been guaranteed. 
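In a finite experiment, the no-signaling conditions can be checked directly on the estimated probability table P(ab|xy): Alice's marginal must be independent of y and Bob's marginal independent of x. A minimal numerical check is sketched below; the array layout [a, b, x, y] is an assumption made for the illustration.

import numpy as np

def is_no_signaling(P, tol=1e-9):
    """Check no-signaling for a correlation array P with indices [a, b, x, y].

    Requires sum_b P(ab|xy) to be independent of y (Alice's marginal) and
    sum_a P(ab|xy) to be independent of x (Bob's marginal).
    """
    alice = P.sum(axis=1)   # shape (A, X, Y): P(a|xy)
    bob = P.sum(axis=0)     # shape (B, X, Y): P(b|xy)
    alice_ok = np.allclose(alice, alice[:, :, :1], atol=tol)   # same for every y
    bob_ok = np.allclose(bob, bob[:, :1, :], atol=tol)         # same for every x
    return alice_ok and bob_ok

# A uniformly random box with two settings and two outcomes per side is trivially no-signaling.
P = np.full((2, 2, 2, 2), 0.25)
print(is_no_signaling(P))   # True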
Trust and Untrust If the description of boxes is restricted as quantum or classical, we can further define if a device is trusted or not for the sake of practice. A device is said to be trusted if it is believed that the function of the device is exactly what we expect. This definition comes from the sense that, without the assistance of other resources, it is, in principle, impossible to verify how an unknown device really functions based solely on statistics of measurement results. Particularly, in the rest of the paper, the device is trusted if it is a quantum device and the accurate quantum mechanical description is known. Therefore, if we say some devices are trusted, we actually make additional assumptions. For instance, we say a measurement device is trusted if its measurement can be exactly described by a known set of POVMs E y b , where y is the measurement settings and b is the measurement outcome. On the contrary, we say a measurement device is untrusted if we can, at most, describe the measurement results by a probability distribution P (b|y). The scenario is device-independent if all devices and the source are untrusted. Particularly, the scenario is measurement-device-independent if all measurement devices are untrusted. If some but not all measurement devices are untrusted, we say the corresponding scenario as semi-measurement-device-independent. Entanglement and Nonlocality In the box framework, we can discuss entanglement and nonlocality in an operational manner. Let λ label different hidden states in W and p λ be its probability such that dλp λ = 1. The correlation can be written as P (ab|xy; W) = dλp λ P (ab|xy, λ) . The local realism argues that, for any hidden variable λ, P (ab|xy, λ) can be localized such that P (ab|xy, λ) = P (a|x, λ) P (b|y, λ). We say the correlation P (ab|xy; W) is a local correlation if all hidden states in Equation (3) can be localized. The nonlocality is defined as the failure of local realism, usually modeled by local hidden variable (LHV) models. The main property of LHV models is that, if two parties are no longer interacting (guaranteed by space-like separation), their measurements should be local, i.e., a should be independent on y and b (similarly for b). Thus, for each hidden variable λ, the LHV models produce a localized correlation P (ab|xy, λ) = P (a|x, λ) P (b|y, λ). The nonlocal correlation is defined as correlations that cannot explained by the local correlation where P (a|x, λ) and P (b|y, λ) are arbitrary probabilities. If the statistic of the experimental results cannot be explained by Equation (4), then the correlation is nonlocal and we say the source W is nonlocal. The Bell inequality is indeed a linear constraint on all local correlations. This is based on the fact that all local correlations from Equation (4) form a convex subset. There are some correlations produced by quantum mechanics outside this subset. Precisely, in the probability space, points of local correlations form a polytope, while all probabilities produced by quantum mechanics form a superset of the polytope [8]. Thus, one can distinguish a specific nonlocal correlation from all local correlations by a linear equation. Additionally, since Alice's and Bob's measurement results are described by general probabilities, the problem of nonlocality corresponds to the device-independent scenario. The entanglement is defined as the failure of description in the form of separable states. 
The separable states have a clear definition that ρ SEP is separable if ρ SEP = ∑ k p k ρ A k ⊗ ρ B k with ρ A k and ρ B k being some local quantum states and ∑ k p k = 1. Usually, the decomposition of a separable state is not unified and the verification of a separable is not a easy task. However, if the source W distributes separable states in the box framework, then the correlation is in the form of P SEP (ab|xy; W) = dλp λ P Q (a|x, λ) P Q (b|y, λ) , where P Q (a|x, λ) = tr E x a ρ A λ and P Q (b|y, λ) = tr F y b ρ B λ are probabilities yielded by quantum measurements. Here ρ A λ and ρ B λ are local hidden quantum states which may be unknown to Alice and Bob, while E x a and F y b are POVMs that Alice and Bob know well. If the statistic of experimental results cannot be explained by Equation (5), then the correlation is non-separable, i.e., entangled, and we say the source W is entangled. Like the Bell inequality, one can use a linear constraint, the so-called entanglement witness, to bound all separable correlations to certify an entangled correlation. Similar to the case of local correlations, correlations produced by all separable states also form a convex set. Since all devices are assumed to be quantum, here, the entanglement corresponds to the scenario where all measurement devices are trusted. Definition From the above introduction, it is easy to see that definitions of nonlocality and entanglement have two similarities. Firstly, both of them are defined by the failure of corresponding local models in their own contexts, i.e., LHV models and separable quantum states, respectively. Secondly, as for the two local models, the descriptions on Alice's and Bob's systems are symmetric, i.e., general probabilities P (a|x, λ) and P (b|y, λ) in LHV models and quantum probabilities P Q (a|x, λ) and P Q (b|y, λ) in separable states. The only difference between the two definitions is whether the local descriptions are both quantum. A natural equation would be "What if the local descriptions are asymmetric?" and "Can this asymmetric property lead to novel correlations?". The answer is yes. The corresponding local model is called the local hidden state (LHS) model and its failure implies the main objective of this paper, the correlation of EPR steering [1]. Definition 1 (EPR steering). In a box frame test, the experimental result statistics exhibits EPR steering property, if it cannot be explained by the correlation of LHS models, i.e., the correlation cannot be written as where p λ is a probability distribution satisfying dλp λ = 1, P (a|x, λ) is an arbitrary probability distribution, and P Q (b|y, λ) = tr F y b σ λ is a probability distribution generated by POVM F y b on quantum state σ λ . It is said that the corresponding quantum state is EPR steerable if Equation (6) is violated. The relationship among EPR steerable states, entangled states, and nonlocal states are sketched out in Figure 2. The set of quantum states: All quantum states form a convex set, with the boundary being the pure state. The region I represents the convex subset of separable states. The complement set, i.e., regions II, III, and IV, represent entangled states. Particularly, regions III and IV represent Einstein-Podolsky-Rosen (EPR) steerable states, and the region IV represents nonlocal states. Region II are entangled states which is neither EPR steerable nor nonlocal. One-Sided Measurement Device Independence The understanding of EPR steering can be more clear if we discuss it in the trust and untrust scenarios. 
As has been discussed before, nonlocality defies a local correlation in the device-independent scenario, while entanglement defies local correlations in the measurement-dependent scenario. Since EPR steering is defined as the failure of LHS models, where only one party is assumed to be quantum, we have the following claim. Remark 1. EPR steering defies all local correlations in the one-sided measurement-device-independent scenario. This scenario corresponds to the real situation when users in the communication task need different levels of security. For instance, in the communication task between banks and individuals, obviously it is easier for banks to prepare their devices to be trustworthy. For individuals, however, due to limits of costs and environments, their devices are hard to be guaranteed as trustworthy ones. In this case, let individuals be Alice and banks be Bob, such that if EPR steering correlation is certified by the violation steering inequality, then the secure quantum communications can be achieved [18]. Different scenarios corresponding to nonlocality, entanglement, and EPR steering are shown in Figure 3. Schrödinger's Steering Theorem As an equivalent definition, one can consider the assemblage. The assemblage is defined as the collection of ensembles, denoted by ρ a|x a,x , whereρ a|x are unnormalized quantum states satisfying ∑ aρa|x = σ, ∀x. The definition of EPR steering can be applied on the assemblage ρ a|x a,x instead of correlations P(ab|xy). This equivalence is guaranteed by the Schrödinger's steering theorem [2,3]. Proof. For the first statement, it is straightforward to verity that, for all x, For the second statement, write ρ B in its diagonal form POVMs and the quantum state, respectively. Then, an assemblage ρ a|x a,x is said to be unsteerable if it can be produced by rearrangement on for two-qubit states, the steered statesρ a|x B form an ellipsoid in the Bloch sphere on Bob's side [58]. The volume of such ellipsoid indicates the steerability of the bipartite state. If the assemblage cannot be written in this manner, it is said to be EPR steerable. This EPR steering definition is equivalent to Definition 1 on the condition that Bob is allowed to do the state tomography for each conditional state ρ a|x . Furthermore, in Reference [54] post-quantum steering is well-studied using no-signaling assemblages. If Bob's measurements are not sufficient to do the tomography, then it is hard for him to obtain each ρ a|x yet to verify the EPR steerability. In this case, however, the statistic of measurement results P (ab|xy) is still useful. In the following discussion, Definition 1 will be mainly considered. There is an interesting analog of the assemblage [59], from the perspective of the state-channel duality [60]. If the set of local hidden state σ λ is replaced with a set of POVMs {G λ }, then the assemblage of {G λ } can be defined as jointly measurable observables. That is, a set of with p x,λ (a) being probabilities. It has been proved that a given assemblage ρ a|x a,x is unsteerable if and only if Alice's measurements {E x a } a,x is jointly measurable [45,46], which can be checked from the Proof of Theorem 1. Criteria of EPR Steering A natural question arises on how to certify the EPR steering correlation. It can be shown that unsteerable correlations, i.e., correlations produced by LHS models, form a convex subset. 
According to the hyperplane separate theorem, there always exists a linear constraint of all unsteerable correlations, such that steerable ones can be witnessed [22]. Suppose that the box framework is fixed, i.e., X , Y, A, and B are all fixed. Then, the set of probability distributions {P (ab|xy) |x ∈ X , y ∈ Y, a ∈ A, b ∈ B} can be seen as a point in the probability space. All correlations yielded by LHS models in Equation (6) {p λ σ λ } form a subset {P LHS (ab|xy) |x ∈ X , y ∈ Y, a ∈ A, b ∈ B}. This subset of usteerable correlations is convex. LHS (ab|xy) = dp respectively. Then, any linear combination of these two, i.e., tP LHS (ab|xy) with 0 t 1, can always be written as the correlation yielded by another LHS model {q ν τ ν }, where It is easy to verify that =t dp Therefore, the subset of all unsteerable correlation is convex. Any convex subset can be bounded by a linear equation, which is guaranteed by the hyperplane separation theorem [61]. The proof can be found in many Linear Algebra textbooks (like Reference [61]) and is skipped here. Based on these two lemmas, one can certify EPR correlations by linear inequalities [22]. Theorem 2. Any EPR steerable correlation can be verified by an inequality. Proof. According to Lemma 2, let the set A be the set of all unsteerable correlations, which is proved by Lemma 1. For any EPR steerable correlation P STE (ab|xy), let B be a sufficient open ball containing P STE (ab|xy), such that the open ball is disjoint with the subset A. Then, there always exists a hyperplane v (P (ab|xy)) = ∑ abxy v xy ab P (ab|xy) = c, such that v (P LHS (ab|xy)) c holds for all unsteerable correlations P LHS (ab|xy) while v (P STE (ab|xy)) < c holds for the certain EPR steerable correlation P STE (ab|xy). Linear EPR Steering Inequality Perhaps the most straightforward criteria to verify EPR steering is the linear steering inequality. The linear steering inequality to certify EPR steering is like the Bell inequality to nonlocality and the entanglement witness to entanglement. From the Proof of Theorem 2, the linear steering inequality has a general from, i.e., for all unsteerable correlations, the following inequality holds: where P = {P (ab|xy)} denotes the correlation {P (ab|xy) |a ∈ A, b ∈ B, x ∈ X , y ∈ Y }, V xy ab ∈ R are some coefficients, and B LHS is the bound of all unsteerable correlations. Then, if for a certain correlation Q = {Q (ab|xy)} satisfies I (Q) > B LHS , i.e., the linear steering inequality in Equation (15) is violated, then it can be conclude that Q cannot be explained by any LHS correlations, i.e., Q is EPR steerable. In practice, the expectation value of the measurement results is usually considered for convenience and clarity. Combined with the scenario of EPR steering where Alice's and Bob's measurement devices are untrusted and trusted, respectively, denote A x = {a x ∈ R} as the random variable corresponding to Alice's measurements and B y = ∑ b b y F y b as the general quantum measurement for Bob's measurements, with F y b being the POVM corresponding to the result b y . Suppose that, in an EPR steering test experiment, Alice and Bob randomly and independently choose n pairs of measurements A k and B k , respectively, labeled by k = 1, 2, . . . , n. After the experiment, the value of each pair of measurements is Then, the following linear steering inequality holds for all unsteerable correlations [19,20]. Theorem 3 (The linear EPR steering inequality). 
If the result of an EPR steering test violates the following inequality where g k are real numbers and C n satisfies with λ max (·) the maximal eigenvalue of the matrix, then the correlation of the test shows EPR steering. The corresponding quantum state ρ AB is EPR steerable, and more precisely, Alice can steer Bob. Proof. By definition, S n C n is an EPR steering inequality when it holds for all unsteerable correlation P LHS . P LHS has a general form as defined by Equation (6), i.e., P LHS (ab|xy) = dp λ P (a|x, λ) tr F y b σ λ . It is straightforward to verify that Here, the second line comes from ∑ a k a k P (a k |A k , λ) tr [B k σ λ ] ≤ max a k ∈A k a k tr [B k σ λ ], and the third line comes from 1 n n ∑ k=1 g k a k dp λ tr [B k σ λ ] = tr 1 n n ∑ k=1 g k a k B k dp λ σ λ λ max 1 n n ∑ k=1 g k a k B k . Here, g k are flexible coefficients to help to form efficient inequalities. Example 1. The 2-qubit Werner state [13]. As a simple example, one can consider the 2-qubit Werner state, which is an often-used bipartite quantum states in quantum information processes. It can be constructed as the mixture of the maximally entangled state |Ψ − = (|01 − |10 ) / √ 2 and the white noise I/4, i.e., where µ ∈ [0, 1]. It can be theoretically proved that W µ is entangled when µ > 1/3 and is separable when µ ≤ 1/3 [13]. When µ > 1/ √ 2, there exists certain observables such that the CHSH inequality is violated [62], i.e., W µ is nonlocal when µ > 1/ √ 2. When µ 0.66, any measurement results of W µ can be explained by some LHV models, i.e., W µ never exhibits a nonlocality when µ 0.66 [63]. It is an open question of whether W µ is nonlocal when 0.66 µ ≤ 1/ √ 2. It has been proved that µ > 1 2 is the critical bound for the EPR steerability of W µ [1], i.e., any measurement results of W µ can be explained by LHS models when µ ≤ 1 2 . It is easy to see that the performance of linear EPR steering inequality (Theorem 3) depends on the number of Alice and Bob's measurement pairs and Bob's observables. Furthermore, from the symmetric property of the 2-qubit Werner state, when Bob's k'th observable is B k = n k · σ, where n k = n (k) x , n (k) y , n (k) z is a unit vector and σ = σ x , σ y , σ z is the set of Pauli matrices, i.e., Alice can always choose her observable as A k = −n k · σ, such that the expectation value of the measurement pair tr A k ⊗ B k W µ = tr −n k · σ ⊗ n k · σW µ = µ. If we further let g k = 1, S n = µ always holds independent of the number of measurements. The bound C n , however, depends on n and the form of B k . More precisely, when n = 2, let B 1 = σ x and B 2 = σ y . The corresponding C 2 = 1/ √ 2 and, thus, W µ is steerable when µ > 1/ √ 2 ≈ 0.707. When n = 3, let B 1 = σ x , B 2 = σ y , and B 3 = σ z . The corresponding C 3 = 1/ √ 3 and, thus, W µ is steerable when It can be proved that, for n = 2, 3, the above Bob's observables are optimal [15,19]. When n = 4, it is a little complicated, but one can let B 1 = σ x , B 2 = σ y , B 3 = σ y + √ 3σ z /2, and B 4 = σ y − √ 3σ z /2. The corresponding C 4 = √ 5/4 and W µ is steerable when µ > √ 5/4 ≈ 0.559. In this case, the observables {B 1 , B 2 , B 3 , B 4 } may not be optimal. It can be concluded that the larger the number of measurement pairs, the lower bound of µ can be detected by the linear inequality. In principle, when n → ∞, which can be understood as the state tomography, one can image that the critical bound for the EPR steerability can be finally found, i.e., µ > 1/2 [15,19]. 
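For the Werner-state example, both the unsteerable bound C_n of Theorem 3 and the measured value S_n can be reproduced numerically. The sketch below brute-forces the maximization over Alice's outcomes a_k = ±1 with g_k = 1 and recovers the values C_2 = 1/√2 and C_3 = 1/√3 quoted above; it is an illustration under those assumptions rather than a general-purpose steering test.

import itertools
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lhs_bound(bobs, g=None):
    """C_n = max over a_k = +/-1 of lambda_max((1/n) * sum_k g_k a_k B_k)."""
    n = len(bobs)
    g = np.ones(n) if g is None else np.asarray(g)
    best = -np.inf
    for signs in itertools.product([1, -1], repeat=n):
        op = sum(gk * ak * Bk for gk, ak, Bk in zip(g, signs, bobs)) / n
        best = max(best, np.linalg.eigvalsh(op)[-1])
    return best

def werner(mu):
    """2-qubit Werner state: mu * singlet + (1 - mu) * I/4."""
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
    return mu * np.outer(psi, psi.conj()) + (1 - mu) * np.eye(4) / 4

print(lhs_bound([sx, sy]))       # ~0.7071 = 1/sqrt(2)
print(lhs_bound([sx, sy, sz]))   # ~0.5774 = 1/sqrt(3)

# S_n for the Werner state with Alice measuring A_k = -n_k.sigma against Bob's B_k = n_k.sigma:
mu = 0.8
S3 = np.mean([np.trace(np.kron(-B, B) @ werner(mu)).real for B in (sx, sy, sz)])
print(S3)                        # equals mu, so W_mu violates the n = 3 bound whenever mu > 1/sqrt(3)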
This example shows the application of the linear EPR steering inequality, as well as its limitations. Firstly, the linear inequality (Equation (18)) may not give the critical bound of the EPR steerability when testing some kinds of quantum states. This makes sense as the linear inequality represents only one hyperplane in the probability space, while the sufficient and necessary condition for the EPR steerability usually requires numerous such hyperplanes. Secondly, the linear inequality (Equation (18)) closely relies on observables that would be chosen. Thus, in practice, a natural question is how to choose Alice's and Bob's observables such that the detection of EPR steering is efficient. Thirdly, as seen from the example, the more measurements, the better the performance of the linear inequality. However, the complexity to compute C n is also increasing when n becomes large. In fact, the method in Equation (19) to calculate C n needs to maximize all a k ∈ A k for all k, which leads the complexity of C n exponentially increasing with n. Therefore, it is motivated to specify systematic techniques of choosing proper observables and obtaining C n more efficiently. Optimal Observables for Alice Usually, Bob's observables {B k } are fixed due to the measurement devices are trusted in his lab. Here, the problem of how Alice chooses proper measurement settings according to Bob's observables is discussed. The main idea is that, to violate the linear inequality (Equation (18)) more obviously, Alice should choose observables such that A k B k is larger when g k > 0 and A k B k is smaller when g k < 0. In this sense, the value of S n can be made as large as possible so as to violate the unsteerable bound. This technique can be formulated based on the following lemma [64]. Lemma 3. For any two n × n-dimensional Hermite matrices A and B, the following equation holds, where U is an arbitrary unitary matrix and α 1 ≥ α 2 ≥ · · · ≥ α n and β 1 ≥ β 2 ≥ · · · ≥ β n are the eigenvalues of A and B, respectively. Proof. Write A = ∑ α i e i and B = ∑ β j f j in the diagonal form, where {e i } and f j are specific bases of the operator space, respectively satisfying tr e i e † j = δ ij = tr f i f † j and ∑ i e i = I = ∑ j f j . Then, Here, ẽ i = Ue i U † is another bases of the operator space, and it is straightforward to verify that the transition matrix D ij = tr ẽ i e j is a doubly stochastic matrix, i.e., ∑ i D ij = 1 and ∑ j D ij = 1. As the doubly stochastic matrix can always been written as the convex combination of permutation matrices [61], the following equation holds: where σ is a certain permutation. Then, the following technique to choose Alice's observables {A k } can be specified [20]. Theorem 4. When the quantum state ρ AB is to be tested and Bob's observables are fixed as {B k }, A k ⊗ B k is maximal if Alice's observables satisfy the following conditions. 1. A k andρ k = tr B [(I A ⊗ B k ) ρ AB ] are diagonalized in the same bases e A i . 2. Eigenvalues of A k and eigenvalues ofρ k = tr B [(I A ⊗ B k ) ρ AB ] have the same order. Then, k are eigenvalues of A k andρ k , respectively. Proof. For any observables A k and B k on a quantum state ρ AB , the expectation value of A k ⊗ B k is where U k is a unitary matrix, D k is a diagonal matrix, and A k = U k D k U † k holds. From Lemma 3, tr U k D k U † kρ k is maximized when U k can diagonalizeρ k simultaneously, i.e., U † kρ k U k is a diagonal matrix, and D k has the same order of diagonal values with U † kρ k U k . 
In this case, the expectation value tr[A_k ⊗ B_k ρ_AB] — the sum of the products of the matched eigenvalues of A_k and ρ̃_k — is maximal over all of Alice's observables. Note that, when ρ̃_k contains degenerate eigenvalues, the optimal A_k given by this method is not unique. As an example, we consider the 3 × 3-dimensional isotropic state [23], ρ_iso = η|Φ⁺⟩⟨Φ⁺| + (1 − η) I/9, with |Φ⁺⟩ = (|00⟩ + |11⟩ + |22⟩)/√3 and η ∈ [0, 1], and let Bob's observables be chosen from the eight Gell-Mann matrices G_1, . . . , G_8. For the LHS bound C_n, we have the following results. When n = 3 and Bob chooses G_1, G_2, and G_3, the state is steerable if η > 0.8660. When n = 4 and Bob chooses G_1, G_2, G_4, and G_8, the state is steerable if η > 0.7318. When n = 5 and Bob chooses G_3, G_4, G_5, G_6, and G_7, the state is steerable if η > 0.6708. When n = 6 and Bob chooses G_1, G_2, G_3, G_4, G_5, and G_8, the state is steerable if η > 0.6424. When n = 7 and Bob chooses observables from G_1 to G_7, the state is steerable if η > 0.6204. Finally, when n = 8 and Bob chooses all Gell-Mann matrices, the state is steerable if η > 0.5748. Note that, in this case, when Bob chooses only two observables from the Gell-Mann matrices, the corresponding linear inequality will not detect any steerability of the state.

A Flexible Bound on Unsteerable Correlations

As discussed above, the unsteerable bound C_n in the linear inequality from Equation (19) contains a maximization over all of Alice's measurement results. The complexity of computing C_n therefore increases exponentially with n. This property can also be seen from the above two examples. Therefore, when the number of measurements is large, a simpler bound is needed [66].

Theorem 5. If the result of an EPR steering test violates the inequality S_n ≤ C′_n = Λ_A Λ_B, where g_k are some real numbers and the factors Λ_A and Λ_B are determined by Alice's measured second moments and by Bob's observables, respectively, then the correlation of the test is EPR steering. The corresponding quantum state ρ_AB is EPR steerable, and more precisely, Alice can steer Bob.

Proof. For an unsteerable correlation, S_n can be written as an integral over the hidden variable, S_n = ∫dp_λ (1/n) ∑_k g_k ⟨A_k⟩_λ ⟨B_k⟩_{σ_λ}, and then bounded step by step up to C′_n = Λ_A Λ_B. Here, ⟨A_k⟩_λ = ∑_{a_k} a_k P(a_k|A_k, λ) is the expectation value of A_k under the probability distribution P(a_k|A_k, λ), and ⟨A²_k⟩_λ is the expectation value of A²_k under the same distribution. The proof uses, in turn, the Cauchy–Schwarz inequality u · v ≤ |u||v| with u = (. . . , g_k⟨A_k⟩_λ, . . .) and v = (. . . , ⟨B_k⟩_λ, . . .); the relations ⟨A_k⟩²_λ ≤ ⟨A²_k⟩_λ and ∑^n_{k=1} ⟨B_k⟩²_{σ_λ} ≤ max_ρ ∑^n_{k=1} ⟨B_k⟩²_ρ; and the concavity of the function y = x^{1/2}.

Compared with the bound (Equation (19)) in the linear EPR steering inequality (Equation (18)), the unsteerable bound C′_n is simpler to compute, and the complexity of obtaining Λ_A and Λ_B increases only linearly with n. However, C′_n may not be as tight as C_n, i.e., some steerable states may be detectable with the bound C_n but not with the bound C′_n.
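The display equations of Theorem 5 and its proof are lost in this extraction. A plausible reconstruction of the chain that the prose above describes, in our own notation (the exact normalization of Λ_A and Λ_B in the original may differ), is:

$$
S_n=\int \mathrm{d}p_\lambda\,\frac{1}{n}\sum_{k=1}^{n} g_k\langle A_k\rangle_\lambda\langle B_k\rangle_{\sigma_\lambda}
\;\le\;\int \mathrm{d}p_\lambda\,\frac{1}{n}\sqrt{\sum_k g_k^2\langle A_k\rangle_\lambda^2}\,\sqrt{\sum_k\langle B_k\rangle_{\sigma_\lambda}^2}
\;\le\;\int \mathrm{d}p_\lambda\,\sqrt{\frac{1}{n}\sum_k g_k^2\langle A_k^2\rangle_\lambda}\;\Lambda_B
\;\le\;\sqrt{\frac{1}{n}\sum_k g_k^2\langle A_k^2\rangle}\;\Lambda_B=\Lambda_A\Lambda_B,
$$

with $\Lambda_A=\sqrt{\tfrac{1}{n}\sum_k g_k^2\langle A_k^2\rangle}$ built from Alice's observed second moments and $\Lambda_B=\sqrt{\tfrac{1}{n}\max_\rho\sum_k\langle B_k\rangle_\rho^2}$ built from Bob's observables. The three inequalities correspond, in order, to Cauchy–Schwarz, to $\langle A_k\rangle_\lambda^2\le\langle A_k^2\rangle_\lambda$ together with $\sum_k\langle B_k\rangle_{\sigma_\lambda}^2\le\max_\rho\sum_k\langle B_k\rangle_\rho^2$, and to the concavity of $\sqrt{x}$.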
EPR Steering Inequality Based on Local Uncertainty Relations

For a random variable X = {x_i} with probability distribution {p(x_i)}, the variance is defined as δ²(X) = ⟨X²⟩ − ⟨X⟩², where ⟨X²⟩ = ∑_i p(x_i) x²_i is the mean of the square of X and ⟨X⟩² = (∑_i p(x_i) x_i)² is the square of the mean of X. For any random variable X, δ²(X) ≥ 0 always holds. In quantum mechanics, the variance describes the uncertainty of measurement results. For instance, consider the projective measurement M = ∑_k m_k Π_k, where the Π_k are projectors and the m_k are the corresponding outcomes. The variance of the measurement results {m_k} on a quantum state ρ is of the form δ²(M)_ρ = ⟨M²⟩_ρ − ⟨M⟩²_ρ, where ⟨M⟩_ρ = tr[Mρ] is the expectation value of the measurement M on ρ and ⟨M²⟩_ρ = tr[M²ρ] is the expectation value of the square of the measurement M on ρ. In the following, the subscript ρ is omitted for simplicity. The uncertainty relation can be described as follows: for a set of measurements {M_i | i = 1, . . . , n}, the sum of variances is larger than a certain value, i.e., ∑_i δ²(M_i) ≥ C_M. In a nontrivial case, where the {M_i} have no common eigenvectors, C_M is positive, i.e., C_M > 0 [67][68][69]. In the EPR steering test, only Bob's measurements are assumed to be quantum. Then, the local uncertainty relations (LUR) on Bob's side can help to certify EPR steering correlations [23].

Theorem 6 (Steering inequality based on LUR). If the result of an EPR steering test violates the following inequality, ∑_i δ²(α_i A_i + B_i) ≥ C_B (42), where the α_i are some real numbers and C_B = min_ρ ∑_i δ²(B_i)_ρ, then the correlation of the test is EPR steering. The corresponding quantum state ρ_AB is EPR steerable, and more precisely, Alice can steer Bob.

Proof. Generally, for any two random variables X and Y, let p(xy) be the joint probability distribution and p(y|x) = p(xy)/p(x) be the conditional probability distribution. Then, the variance of Y satisfies δ²(Y) ≥ ∑_x p(x) δ²(Y)_x, where δ²(Y)_x is the variance of Y under the distribution {p(y|x)}; the inequality follows from the convexity of the function f(t) = t². Now, consider the definition of unsteerable correlations in Equation (6). One has ∑_i δ²(α_i A_i + B_i) ≥ ∫dp_λ ∑_i δ²(α_i A_i + B_i)_λ = ∫dp_λ ∑_i [α²_i δ²(A_i)_λ + δ²(B_i)_{σ_λ}] ≥ ∫dp_λ ∑_i δ²(B_i)_{σ_λ} ≥ C_B, where the trivial result δ²(A_i)_λ ≥ 0 is used. Here, the {α_i} are some flexible real variables. For a certain probability distribution {P(a_k b_k|A_k B_k)} generated from an EPR steering test, the optimal {α_i} can be calculated such that the inequality from Equation (42) is maximally violated. Each term in Equation (42) can be seen as a quadratic polynomial in α_i, from which the optimal α_i can be obtained, i.e., α_i = −cov(A_i, B_i)/δ²(A_i), with cov(A_i, B_i) = ⟨A_i B_i⟩ − ⟨A_i⟩⟨B_i⟩. It is noteworthy that, here, as in the case of the linear inequality of Equation (34), the complexity of computing the unsteerable bound C_B also increases linearly with the number of measurements n, better than the case of inequality (Equation (18)), where the complexity increases exponentially with n.

Remark 2. The use of LUR in quantum correlations. In the case of EPR steering, the inequality from Equation (42) shows that, for unsteerable correlations, the uncertainty of the total system AB is always larger than that of one subsystem B. This conclusion is consistent with the definition of LHS models, where only Bob has a quantum description. One property of EPR steering is, thus, that the uncertainty of the correlated measurement results can be less than the uncertainty of one subsystem. In this sense, the violation of LUR indicates the amount of quantum correlations. Furthermore, if quantum entanglement is considered in this fashion, for any separable state σ^SEP_AB = ∑_k p_k σ^A_k ⊗ σ^B_k, it has been proved that ∑_i δ²(A_i + B_i) ≥ C_A + C_B, where C_A = min_ρ ∑_i δ²(A_i)_ρ [70]. That is, in the case of quantum separable states, where both Alice and Bob can be described as quantum but classically correlated, the uncertainty of the total system is always larger than the sum of the local uncertainties of all subsystems. However, for nonlocality, the probability distribution of LHV models always satisfies only the trivial bound ∑_i δ²(α_i A_i + B_i) ≥ 0, and no violation can be detected. In fact, formulating a nonlinear form of Bell inequalities is a difficult problem. It is noteworthy that in Reference [43], the violation of the CHSH inequality [62] can be restricted by the so-called fine-grained uncertainty relations combined with a properly defined steerability. Such a restriction holds only when a specific form of the Bell inequalities is selected [44].
Different from the variance-based uncertainties discussed here or entropies [24], the fine-grained uncertainty relation are described in a linear form of the set of measurement observables, which can also be used as the certification of EPR steering [25] . Example 3. Bell diagonal states Bell diagonal states has the following simple form, where σ j , j = x, y, z is the set of Pauli matrices. In another form, ρ c can be written in the diagonal form where |ψ ± = (|00 ± |11 ) / √ 2 and |φ ± = (|01 ± |10 ) / √ 2 are four Bell states and ∑ i t i = 1. If three Pauli matrices are selected as the observables, the linear EPR steering inequality (18) can be simplified as Here, the absolute value and binary ω i suggest that there are a set of linear inequalities. The violation implies that ρ c is steerable if c x ± c y ± c z > √ 3. Nevertheless, the EPR steering inequalities (42) based on LUR can be optimized as As a comparison, it can be verified that, in this example, the inequality based on LUR certifies a larger steerable region of Bell diagonal states than the linear inequality [23]. Realignment Method From the EPR steering inequality based on LUR, the realignment method for certifying entanglement also works for the EPR steering case. Generally, the realignment criterion [71] or the computable cross-norm criterion [72] are important techniques to certify bound quantum entanglement, i.e., entangled states with a positive partial transpose. Mathematically, the realignment is a map on a quantum state ρ AB such that R (ρ AB ) : ρ AB → m| µ| R (ρ AB ) |n |v = m| n| ρ AB |v |µ . If ρ AB is separable, then the trace norm of the matrix R (ρ) is not larger than 1. To obtain the norm of R (ρ), one can seek for the complete set of local orthogonal observables (LOOs). A complete set of LOOs is a collection of observables {G k } satisfying G † k = G k , tr [G k G l ] = δ kl , and ∑ k G 2 k = I. Indeed, {G k } forms a complete set of orthonormal bases for the corresponding operator space. Then, a state ρ can be written as ρ = ∑ k µ k G k , where µ k = tr [ρG k ]. For example, in the case of qubits, the identity matrix and three Pauli matrices form a complete set of LOOs, and in the case of qutrits, the identity matrix and eight Gell-Mann matrices form a complete set of LOOs. For any bipartite quantum state ρ AB , suppose that the maximal dimension of Alice's Hilbert space and Bob's Hilbert space is d. Let the complete sets of LOOs for Alice's operator space and Bob's operator space be G A k and G B k , respectively. Then, ρ AB can always be written as The singular value decomposition on the matrix µ = (µ kl ) yields µ = SλT T , where λ = diag {λ 1 , . . . , λ d 2 } is the diagonal matrix with λ k ≥ 0, S = s ij and T = t ij are two orthogonal matrices, i.e., SS T = TT T = I. Take µ = SλT T into the expression of ρ AB , and finally, the Hilbert-Schmidt decomposition of ρ AB can be obtained: where G A k = ∑ m s mkG A m and G B k = ∑ m t mkG B m . It can be verified that G A k and G B k are another two complete sets of LOOs, and λ k = tr ρ AB G A k ⊗ G B k . In a certifying entanglement, if ρ AB is separable, then the realignment [71,72] method guarantees that ∑ k λ k 1. In certifying EPR steering, a similar result can be concluded [23]. then ρ AB is EPR steerable. In this case, Alice can steer Bob and Bob can also steer Alice. Proof. From the EPR steering inequality based on LUR, for a bipartite quantum state ρ AB = ∑ k λ k G A k ⊗ G B k , let Alice's and Bob's observables be G A k and G B k and g k = −g. 
The violation of Equation (42) then yields an inequality in the free parameter g. A sufficient condition for this inequality, obtained by omitting the quadratic term, is g²d + d − 2g ∑_k λ_k < d − 1. Finally, let g = ∑_k λ_k/d; the condition becomes d − (∑_k λ_k)²/d < d − 1, i.e., ∑_k λ_k > √d, and the inequality (58) is concluded.

Different from the linear inequality and the inequality based on LUR, the realignment method does not require an EPR steering test. For any quantum state ρ_AB, it is possible to know whether this state is EPR steerable or not, regardless of how it would be certified in a test. A limitation is that, being a corollary of the inequality, the realignment method will not perform better than the inequality itself. In the entanglement case, the state is entangled if the value ∑_k λ_k is larger than 1; here, this quantity should be larger than √d to certify EPR steerability. Although the realignment method can certify positive partial transpose (PPT) entanglement, it remains an open question whether it can certify PPT EPR steering, i.e., EPR steerable states with PPT. Note that there have been numerical results proving the existence of such states [73,74].

Summary

In this survey, the basic techniques used to describe and certify EPR steering were discussed. In particular, the box framework and the trusted–untrusted scenario were adopted. The linear criterion and the local-uncertainty-relation-based criterion were summarized. Both criteria are constructed in an experimentally friendly manner, i.e., they can be directly applied in real experiments for arbitrary measurement settings and arbitrary outcomes, with a reduced complexity to obtain the unsteerable bound. Moreover, an analytical method for the optimization of EPR steering detection was also presented. Furthermore, from these criteria, LUR were shown to play an important role in exhibiting the correlations of bipartite quantum systems. There have also been other useful criteria, as listed in Section 1. Most of them are formulated in the same fashion as introduced in this survey, so the discussed techniques for finding a computable unsteerable bound and optimal observables can be directly applied. There still remains the open problem of how much entanglement is sufficient for EPR steering and how much EPR steering is sufficient for nonlocality. Solving this problem would technically advance the realization of nonlocality-based quantum protocols and ultimately contribute to the application of quantum information technologies.
10,523.8
2019-04-01T00:00:00.000
[ "Physics" ]
Evaluation and Comparison of the Effects of Persulfate Containing and Persulfate-free Denture Cleansers on Acrylic Resin Teeth Stained with Cigarette Smoke: An In Vitro Study Problem statement and aim The esthetics of the complete denture primarily depend upon the color of the denture teeth; however, there are situations where the teeth are subjected to extrinsic and intrinsic stains and discolor over time. The purpose of the study was to investigate the effects of smoking and two different denture cleansers on the color stability of denture teeth. Material and methods Commercially available maxillary anterior teeth made of acrylic resin were selected for the study and were divided into two groups (n=10): a persulfate-free denture cleanser group and a persulfate containing denture cleanser group. The acrylic teeth were set in the smoke chamber at equal distances so as to absorb the smoke equally from the cigarette. The smoke was released for 10 minutes, and the results were observed with a spectrophotometer. Results All the values were collected after the 21st day, and data were analyzed with the SPSS software. It was found that denture cleansers with persulfate are effective in maintaining color stability. Conclusions Even though persulfate containing denture cleansers are injurious to health, they can be recommended to smokers with clear instructions for use. However, for non-smokers, persulfate-free denture cleansers are preferred over persulfate containing denture cleansers. Introduction Staining of acrylic denture teeth has been a significant complaint from denture wearers. There are several reasons behind denture staining, for example, long-term consumption of coffee, tea, colored beverages, and smoking [1]. The staining of resin-based materials appears to be related to numerous factors and may be brought about by intrinsic and extrinsic factors. The intrinsic factors include the staining of the resin material itself; for example, the alteration of the resin matrix by physicochemical reactions causing degradation [2]. Since even high-quality plastic teeth made of hard resin are susceptible to staining with colorants, it has been observed that the esthetics of removable partial prostheses made using such plastic teeth slowly decrease in numerous patients. By contrast, porcelain teeth are relatively superior as a result of their high wear resistance, lower visibility of stains, and good esthetic appearance. Nonetheless, if removable partial prostheses are worn for a long period without adequate cleaning, there is a chance of staining in porcelain teeth too, due to the adsorption of colorants on the porcelain surface [3]. Consequently, dental specialists recommend denture cleansers for the cleaning of dentures, both those that have persulfate as the major component and persulfate-free denture cleansers [4]. In this respect, the aim of the study is to determine the effectiveness of persulfate-free denture cleansers on acrylic teeth exposed to cigarette smoke. Materials And Methods Twenty sets of maxillary anterior resin teeth were selected for the study and were grouped into group A and group B, with each group having 10 sets. Two denture cleansers were used for this study - one without persulfate (Dentasoak®; solution A) and the other with persulfate (Polident®; solution B).
Solution A (Dentasoak®) was prepared by blending the powder and solution in 50ml of distilled water; solution B (Polident®) was prepared by diluting one tablet in 50ml of distilled water as per the manufacturer's instructions ( Figure 1 A and B). Construction of smoke chamber The study utilized the smoking machine created by the standards of Wasilewski S et al. to recreate (in vitro) the smokers' mouth conditions [2]. The smoke chamber is utilized to expose the tobacco smoke to the acrylic teeth. It has a glass beaker with two openings that are linked with silicone tubes (Figure 2). One silicone tube is connected to the vacuum and the other tube is locked. The conical glass beaker is closed with a stopper (cork) and has a hole in the center; this hole is utilized to embed the cigarette. At the point when the vacuum is actuated, the tobacco smoke is discharged into the container because of negative pressure from the vacuum. Acrylic teeth were assembled in such a manner that the labial surface of the teeth was confronting the inlet to ingest the smoke when it is discharged from the opening. For equivalent retention of the smoke, all the samples were kept and balanced out at an equivalent separation. At that point, the acrylic dental replacement teeth are along these lines presented to tobacco smoke for 10 minutes. Each group (group A and group B) acrylic resin teeth are exposed to tobacco smoke independently five times each day for 10 minutes. The exposure of the smoke is completed five times each day since it's been anticipated that a normal smoker smokes at least five cigarettes every day. Figures 3 A and B show the samples after exposure to smoke and before exposure to denture cleanser on day 1 (groups A and B). Figures 4 A and B show the samples after the exposure of denture cleansers on day 1. Figures 5 A and B show the samples after the exposure of the smoke on day 21, and Figures 6 A and B represent the aftereffect on the acrylic teeth after subjecting to denture cleansers on day 21. Later the group A teeth are drenched into solution A, and group B teeth are exposed to solution B for overnight. All the samples were collected after 21 days and tested in a spectrophotometer for color intensity. Spectrometric analysis of color intensity We utilized a double-beam UV-visible spectrophotometer (Figure 7). Color changes can be assessed outwardly or by instrumental techniques. Instrumental valuations eliminate the subjective interpretation of visual coloring examination [5]. This spectrophotometer utilizes a reference beam and sample beam shaft that go through the sample solutions. The two denture cleansers absorb the various wavelengths of light. The measure of absorption is reliant on the convergence of the denture cleansers. The qualities recorded from the arrangements allude to the shading force of the arrangement. The acrylic teeth were presented to tobacco smoke for 21 days and were absorbed dental replacement chemical medium-term for 21 days. The color intensity values are recorded on first, third, fifth, seventh, ninth, 11th, 13th, 15th, and 21st days. Prior to every measurement session, the spectrophotometer was calibrated according to the manufacture's recommendations to avoid the wrong measurement. FIGURE 7: Double-beam UV-visible spectrophotometer The materials used in the study are described in Table 1. Results The spectrophotometric values on the exposure of the denture cleansers were observed (from day 1 to day 21) and tabulated in Table 2. 
These values are then subjected to statistical analysis with SPSS software 24 (IBM Inc., Armonk, USA). In the present study, two separate investigations were performed to compare the values corresponding to ΔE. Spectrophotometric values of group A and group B were compared by using an independent t-test. Table 2 shows that there is a significant difference between two groups (p-value < 0.001). The results of the photographic examination revealed a significant difference between group A and group B denture cleansers. Persulfate containing denture cleanser (group B) is more effective than persulfate-free denture cleanser (group A) in removing the cigarette smoke stain in acrylic resin teeth. Discussion Denture cleansers control the development of microorganisms on false teeth, remove stains and different flotsam and jetsam brought by diet, tobacco, smoking, espresso, and so on. A few denture cleansers come in cream and fluid forms; others come in powders, glue, or tablet forms. Commercial denture cleansers are classified into the following groups: unbiased peroxides with compounds, proteins, acids, hypochlorite's, peroxides, rough medications, and mouthwashes for dentures [6]. The decision of denture cleansers should be based on the chemistry of resin and cleanser, denture cleanser concentration, and duration of immersion. Regular utilization of denture cleansers is prescribed to prevent microbial colonization on the dentures and promote great oral wellbeing. Daily utilization of denture cleansers can influence the physical and mechanical properties of denture base material [7]. Most generally utilized denture cleansers are represented by the group of alkaline peroxides. Potassium monopersulfate removes staining and eliminate microbes on a denture surface [8]. Considering the cleaning strategies utilized by the patients on the dentures, resin materials were drenched in 50-degree centigrade warm solutions for 15 minutes every day for 20 days. Seventy-three reports of susceptible allergic responses, including one demise connected to denture cleanser, had been accounted for by the FDA (Food and Drug Administration) referring to persulfate as a reason [9]. Side effects of unfavorably susceptible responses to persulfates utilized in denture cleanser are bothering, tissue harm, rash, gum delicacy, breathing issues, low circulatory strain, and overall hypersensitive response. The outcomes showed that there were contrasts in the stain removal effectiveness of both the denture cleanser. Looking at two denture cleansers, we came to the outcome that the persulfate containing dental replacement chemical is progressively more viable in expelling the cigarette smoking stains from the dental replacement tar acrylic teeth than the persulfate-free dental replacement chemical. Considering the wellbeing variable of the patient's persulfate-free dental replacement chemical, it can be prescribed for non-smoker patients. For smokers, the persulfate containing dental replacement chemicals ought to be suggested yet with appropriate safety measures and directions to stay away from any complications. Within the restrictions of this investigation, we conclude that the most proficient dental replacement cleaning agents in expelling tobacco smoke stains are persulfate containing dental replacement cleaning agents. Further examinations with more samples and a more extended range of the investigations in vivo will give more exact results. 
In addition, checking the physical properties like the quality of the acrylic teeth after the introduction of the dental replacement chemicals could add to the study. Conclusions FDA warns of allergic reactions to persulfate containing denture cleansers with proper or improper use. The importance of persulfate-free denture cleansers has to be acknowledged. In the case of non-smokers, the use of persulfate-free denture cleansers should be encouraged for safety concerns. In the case of smokers and others who use persulfate containing denture cleansers, it is important to make sure they follow the instructions and use it with precautions. Additional Information Disclosures Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2,529.8
2020-03-01T00:00:00.000
[ "Medicine", "Materials Science" ]
DRIVERS OF FIRMS’ DEBT RATIOS: EVIDENCE FROM TAIWANESE AND TURKISH FIRMS . This study investigates the drivers of debt ratios of the firms listed on the stock markets of two different countries, namely Turkey, a developing country and Taiwan, a newly developed country. The factors impacting short-term, long-term, and total debts are selected as EBIT (Earnings before Interest and Tax), ROE (Return on Equity), sales, total assets, fixed assets-total assets ratio, and depreciation- total assets ratio. The findings indicate that there are differences between Turkish and Taiwanese firms in terms of the drivers’ impacts on the debt structures of the firms. The proposed regression models work better on the data collected from Taiwan as compared to the data from Turkey. Possible reasons are discussed in the final section. Journal 13(1): Introduction Firms frequently use external monetary sources to meet their fund needs. They need funds sometimes because of their limited amount of cash for daily operations and sometimes because of their investments for fixed assets. Especially long term debts contribute to the equity and earnings of the firms. As Dalmazzo and Marini (2000) mention, increase in the usage of external monetary sources can increase earnings of the investors. The critical point here is the necessity for the earnings to be greater than the costs of the external funds. Thus, because of leverage effect, the firm's earnings go up as the debts increase. However, as Mckenzie (2002) warns the decision makers of the firms listed on stock markets, that investors may refrain from buying the stocks of those firms whose debt / equity ratios are very high because of the over usage of this leverage effect. Moreover, the collapses are much more rapid and harsher than the recoveries in the stock exchange markets (Choudhry 2001;Alexander, Dimitriu 2005). The literature on corporate financial management of the firms listed on the stock markets begin to give special importance to the management of debt ratios which is a very critical issue for the owners, top managers and investors. In this concern, Singh and Nejadmalayeri (2004) have already developed a comprehensive indebtedness model consisting of the factors that possibly affect the variance in the debt ratios of the firms and tested it in the context of already developed countries' firms. The main motivation in this study is to develop and test an extended model to explain the factors affecting the capital structure of the firms. Inspired by the Singh and Nejadmalayeri's (2004) study, we use in our research a revised and extended version of this indebtedness model by including some additional drivers namely ROE, total assets, and the depreciation / total assets ratio. Therefore, in this study, we try to test an original indebtedness model consisting of the following independent variables: EBIT, ROE, sales, total assets, fixed assets-total assets ratio, depreciation-total assets ratio. In addition to the extension of the model, another originality of this study is to test the indebtedness model on two different country settings at the same time to produce comparative implications for the firms operating in a developing (Turkey) and a newly developed (Taiwan) country. Briefly, we desire to test the effects of the indebtedness drivers on the debt ratios in the context of a developing country and a newly developed country, and to make a comparison between them concerning not only these drivers but also the general capital structures of their firms. 
The paper proceeds in the following manner. In the second section next to the introduction, the indebtedness literature is reviewed. In the third section, the financial differences between the firms in Turkish and Taiwanese firms and the relations among the model's variables as the drivers of indebtedness are tested via ANOVA and regression analyses; and the final section is devoted to the discussion of the results. Modigliani and Miller (M & M) are highly referred researchers in the literature for their studies on the firms' capital structures, i.e. the combination of the long term debts and equity of the firms (Yukcu et al. 1999(Yukcu et al. ). M & M (1958 indicate that managers of those firms where internal monetary funds are not adequate may easily apply for external funds especially when we assume that markets function perfectly and taxes do not exist (Firatoglu 2005). Tough in the real markets, various types of tax exist. Therefore, M&M emphasize that firms can increase their earnings per share by emitting new bonds and accordingly paying less taxes (Yukcu et al. 1999). After M & M's remarks on the impact of tax concerns on the capital structure and debt ratios, the trend in the literature has turned towards determining the optimum combination or balance between the sizes of external and internal monetary funds. Theoretical framework and hypotheses development 2.1. Literature review It is obvious that the leverage effect and the tax shield -obtained via increasing amount of debts-are also risky. Over-indebtedness may cause bankruptcy, attachment, and agency costs that constitute the limits of rational indebtedness (Kraus, Litzenberger 1973;Jensen, Meckling 1976;Hovakimian et al. 2004). Because of these limits, every firm should consider the risks of their debt ratios, and pay attention to try to establish a healthy capital structure (Firatoglu 2005). Researchers in the recent literature concentrate on the ways of developing an optimal capital structure for those companies that face severe pressures and problems that occur in the marketplace. Agency problem leading to agency costs is among the most important ones of these problems. Campello (2006) includes all the related costs -such as manipulations of earnings, loss of profitable investment opportunities, excessive and inefficient investments done because of the conflicts of interests between owners and managers, etc. -in the context of this agency problem. The fundamental source of these conflicts of interests is the incongruity between the earnings expectations of the owners and the individual benefit or utility expectations of the managers. To balance these conflicting expectations, firms need to establish an optimum capital structure. Revenue / cost theories in the sphere of corporate finance are already based on the target capital structure to balance the various advantages and costs of both debts and equity (Cadenillas et al. 2004). According to the Agency Theory hypothesis which is related to the borrowing strategies and capital structure, those firms whose leverage ratios are high and equity / assets ratios are low, can decrease their agency costs and increase their market value by increasing their indebtedness (Berger, Di Patti 2006). The accelerating effect of the financial leverage does not only decrease the agency costs but also rationalize the decisions of the managers thanks to the increasing amount of audit and control done by the lenders. 
The financial leverage is thus positively associated to the market value of the firms (Harris, Raviv 1991). Still however, beyond the optimum level of borrowing, the leverage effect may lead to fatal consequences such as attachment, bankruptcy, and liquidation (Berger, Di Patti 2006). According to the Myers and Majluf's (1984) Pecking Order Model managers do not care about the partial capital structure, and they do not treat the debt structure independent from the equity structure (Howakimian et al. 2004). They rather consider the general variance in the total cost of the capital and its consequences. Determination of a target indebtedness ratio in advance may constitute a precaution for the harmful effects of over-indebtedness; until the pre-determined ceiling for borrowing, the leverage effect will bring extra returns (Yukcu et al. 1999;Nieh et al. 2008). Agency problem is also related to information asymmetry which is among the important problems present within the interactions among different actors, such as owners, stockholders, managers, lenders, etc. As Schieg (2008) points out asymmetric distribution of information among different parties is one of the critical factors to deal with especially during the design phase of contractual relations. Asymmetric information is the ability of some parties to access and possess a greater amount of naturally tacit knowledge, experience and information than the others, thereby, creating an information asymmetry problem. Accordingly, those who control this asymmetric information may exploit it by producing a risk premium to serve only their individual interests at the expense of the others. Even the efforts to share some knowledge with the stakeholders, such as announcements or declarations of the financial reports to the public about the activities and financial situation of the firm done by the firms themselves and / or by the audit firms, are not sufficient to uncover the detailed tacit knowledge beyond the information asymmetry (Karataş, Aren 2008). Therefore, the creditors prefer to implement higher risk premium for those firms which are considered to be very uncertain and accordingly risky; leading to unexpected credit-debt cost for the firms. Another important problem present in the marketplace is the globalization of the markets which not only increases the amount of threats coming from international competitors but also provides diverse borrowing sources and opportunities for especially those developing country firms that used to operate in their smaller but also safer national markets. Since 1980s, developing countries' governments and private firms have tried to access to the international markets to secure more external funds to finance their offensive and defensive strategies to tackle with both the opportunities and threats of the globalization process. The ability of the private firms of the developing nations to borrow in the international markets proved to be highly related to the indicators of their national economy, such as the value of the national currency, inflation rate, interest rate, maturity terms, etc. (Ozgen 1998). The decisions pertaining to the cost of borrowing and risk of lending are especially affected by the differences of these economic indicators between borrowing developing countries and lending developed countries. All these global concerns intervene in the determination of an optimum firm-level debt and equity structure (Schmukler, Vesperoni 2006). 
Moreover country-specific factors influence the roles of firm-specific determinants of leverage (De Jong et al. 2008). For instance, long-term debt decreases in economies with a large banking sector and developed domestic financial systems (e.g. Demirgüç-Kunt, Maksimovic 1999;Schmukler, Vesperoni 2006). While firm leverage at an aggregate level is fairly similar across the G-7 countries (Rajan, Zingales 1995), there are persistent differences among capital structures of firms from different developing countries (Booth et al. 2001). An important opportunity present in the global markets is the possibility of reducing financial risks by diversifying sources of foreign indebtment. Diversification of debts is an appropriate strategy, on the one hand, to cope with the currency fluctuations which are typical in the globalization process (Reeb et al. 2001;Singh, Nejadmalayeri 2004). However, on the other hand, the mismanagement of diversification strategies may also lead to the utilization of numerous external monetary sources and to end up with overindebtedness. Moreover, short-term capital movements and the under-developed legal and fiscal infrastructure within the developing nations have already limited the rational decision making at the firm level, especially at the first years of globalization (Kaminsky, Reinhart 1998). Local and international financial crises that occurred after 1980s indicated that the inability of the decision makers at the firm level was also the result of the inability of the nation level decision makers, i.e. governmental and / or autonomous institutions for economic regulations in the internal markets, especially in the fields of fiscal control and audit mechanisms, early warning systems, transparency of the financial markets, etc. (e.g. Diao et al. 1999;Yenturk 1999). Mismanagement of these issues complicated also the formation of an optimum level of indebtedness in the companies of the developing economies (Egilmez, Kumcu 2004). The opposite may also be true; in other words developed nations with stronger banking systems, experienced fiscal audit and control institutions and more stable and attractive national markets will provide suitable conditions to their national firms to establish healthier capital structures. Thus the national level of development (being whether a developed nation's or a developing nation's firm) may be a critical concern for borrowing limits and / or opportunities that shape the capital structure, debt ratios and their drivers. The hypotheses and the models Our empirical study's hypotheses developed based on the related literature about the drivers of debt ratios are discussed below. First, we group these drivers according to their common characteristics, namely profitability, firm size, tangibility, and depreciation. Then, these hypotheses all together form the relationship models to be tested. Impact of profitability on debt ratios Empirical investigations through relationship models on the factors that affect the debt ratios concentrate especially on the developed nations' financial markets (e.g. Singh, Nejadmalayeri 2004). In these models in the recent literature the amount of earnings is among the mostly used drivers of indebtedness. According to the Pecking Order Model, firms utilize first their retained earnings as an internal source of finance; then step by step they find other sources, namely borrowing debts and issuing stocks (Hovakimian et al. 2004). 
In their studies on Turkish firms, Arslan and Karan (2009) found a negative relation between the likelihood of corporate default and profit margins. In this respect, as the earnings and accordingly retained earnings increase internal sources of finance will be adequate and there will be no need to borrow. Thus, there should be a negative relationship between earnings and indebtedness. Moreover, beside the nominal value of the earnings, the earnings ratios should also be considered in this association. Therefore we also purport that a similar negative relationship should also exist between the returnsequity ratio and indebtedness. Accordingly we develop the following hypotheses: H1: Earnings before interest and tax payments (EBIT) decrease the debt ratios. H2: Returns on equity ratio (ROE) decreases the debt ratios. Impact of firm size on debt ratios Another driver of the indebtedness is the amount of sales which constitutes the primary source of cash in / outflows for the firms. As the sales and / or orders for sales increase, the need for funds to spend for the operations and investments increases. Recent empirical literature confirms this logic (e.g. Harford et al. 2009;Bauguess et al. 2009). Accordingly we develop the following hypothesis: H3: Sales increase the debt ratios. The amount of the assets is another factor that affects the debt ratios. Literature on the indebtedness indicates that there is a positive relationship between the leverage effect and assets' size. Titman and Wessels (1988) mention that as the firm size increases, its operations will be more diversified and balanced; and the fluctuations in the cash flows will be decreased. The firm will become safer and free from the destructive effects of cash bottlenecks. Accordingly, the firm size reduces the possibility of bankruptcy due to the fluctuations of cash in / outflows (Firatoglu 2005) which would increase the courage and credibility to borrow more. This positive association between the total assets and debt ratios is also confirmed by the recent literature (e.g. Cadenillas et al. 2004;Fattouh et al. 2005). Accordingly we develop the following hypothesis: H4: Assets increase the debt ratios. Impact of tangibility on debt ratios The value of the tangible fixed assets constitutes another driver for indebtedness. The fixed assets are both accepted by the creditors as a guarantee for debts and also as a valuable physical item to sell in case of insolvency (Firatoglu 2005). According to Fattouh et al. (2005) larger firm size and larger amount of fixed assets increase the credibility and bargaining power of the firms in the relations with the creditors leading to the betterment of the credit terms. Moreover, managers of those firms with larger amount of fixed assets may feel freer and safer to increase their indebtedness believing that their larger firm size and institutionalized and transparent structure will contribute to their ability to develop a healthier capital structure even with larger amounts of debts. The recent literature supports in general the positive relationship between the size of leverage and the amount of assets which are acceptable by the creditors to be collateralized (e.g. Huang, Song 2006). Therefore, an increase in the tangible fixed assets / total assets ratio -or tangibility-may lead to an increase in the debt ratios. Accordingly we develop the following hypothesis: H5: Tangibility increases the debt ratios. 
Impact of depreciation on debt ratios Depreciation-total assets ratio as a kind of non-debt tax shields is another determinant in the indebtedness decisions. Depreciation expenses are tax deductible like debt related tax shields and can be used for the same purpose (DeAngelo, Masulis 1980); and they are even more preferable because they do not involve the use of cash. Thus, firms with significant depreciation expenses (usually those that had made discretionary investments in property, plant and equipment recently) will not want or need to issue debt because they are already receiving tax benefits from depreciation (Dalbor, Upneja 2004). Previous research's findings support this logic (e.g. MacKie-Mason 1990; Chiarella et al. 1992;Wald 1999;Upneja, Dalbor 2001;Huang, Song 2006) by confirming a negative relationship between depreciation tax shields and the use of debt. Accordingly we develop the following hypothesis: H6: Depreciation-total assets ratio decreases the debt ratios. The models of debt ratios' drivers Based on the above developed six hypotheses we test the following models of indebtedness relations. The debt ratios to represent the indebtedness of the firms are categorized as short-term, long-term, and total debt ratios. These three debt ratios are the dependent variables and the six hypothesized drivers of indebtedness are the independent variables of our three research models. In every model, the independent variables are the same, but the dependent ones are different: Model 1: Total Debts / Total Assets = -ß 1 EBIT -ß 2 ROE + ß 3 Log (Sales) + ß 4 Log (Total Assets) + ß 5 Tangibility -ß 6 Depreciation / Total assets Model 2: Long-term Debts / Total Assets = -ß 1 EBIT -ß 2 ROE + ß 3 Log (Sales) + ß 4 Log (Total Assets) + ß 5 Tangibility -ß 6 Depreciation / Total assets Model 3: Short-term Debts / Total Assets = -ß 1 EBIT -ß 2 ROE + ß 3 Log (Sales) + ß 4 Log (Total Assets) + ß 5 Tangibility -ß 6 Depreciation / Total assets Measurement of the variables Variables of our research model and the explanations for the calculation of their measures are depicted on Table 1. Data collection Data on the dependent and independent variables of our study model are collected from Turkish and Taiwanese stock markets. The motivation behind collecting data on two different countries and to test the model on both of them was to compare the drivers of indebtedness in the newly developed and developing economies. Selection of the Turkish and Taiwanese national stock markets as the sources of data was based firstly to their different economical status-one is a developing economy, Turkey 1 ; and the other one, a newly developed economy, Taiwan 2 . Another reason for their selection is the eligibility to the longitudinal firm data from the publicized official web sources of their financial markets. Data collection is done by downloading online data, from the official web pages of the Turkish and Taiwanese stock markets, on the balance sheets and income statements of the firms listed on these stock markets, for the period of 1999-2005. Descriptive statistics and ANOVA Firstly, we report the descriptive statistics of the variables measured by data collected from both countries' stock markets, and secondly, the ANOVA results for the country comparisons. Table 2 depicts the means and standard deviations of the Turkish and Taiwanese companies separately, and also the ANOVA results on the differences between these two company categories. 
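As a rough sketch of how the country comparison in Table 2 and the regressions of Models 1-3 could be reproduced from firm-year panel data (illustrative only: the column names, the file name, and the scipy/statsmodels workflow are our assumptions, not taken from the paper), consider the following.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# One row per firm-year, with a "Country" column ("Turkey" / "Taiwan"); hypothetical file name.
df = pd.read_csv("firms_1999_2005.csv")

# Descriptive statistics and one-way ANOVA per variable, comparing the two country groups (Table 2 style).
drivers = ["EBIT", "ROE", "Sales", "TotalAssets", "Tangibility", "DepreciationToAssets"]
for var in drivers:
    groups = [g[var].dropna() for _, g in df.groupby("Country")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(var, df.groupby("Country")[var].agg(["mean", "std"]).round(3).values,
          f"F={f_stat:.2f} p={p_val:.3f}")

# Model 1: Total Debts / Total Assets regressed on the six hypothesized drivers, estimated per country.
df["LogSales"] = np.log(df["Sales"])
df["LogAssets"] = np.log(df["TotalAssets"])
for country in ["Turkey", "Taiwan"]:
    model1 = smf.ols(
        "TotalDebtRatio ~ EBIT + ROE + LogSales + LogAssets + Tangibility + DepreciationToAssets",
        data=df[df["Country"] == country],
    ).fit()
    print(country, model1.params.round(3).to_dict(), f"R2={model1.rsquared:.3f}")

# Models 2 and 3 keep the same right-hand side, with LongTermDebtRatio and ShortTermDebtRatio as dependents.
```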
(Chui, Wei 1998) and then the 12th largest financial market in 1999 (Barber et al. 2007;Wu et al. 2009). Means of the Taiwanese firms indicate the financial strength and balanced capital structure of them when compared to the Turkish firms considering especially the means of Log (Sales), Log (Total Assets), and Long-term Debts / Total Assets ratio. Descriptive statistics reveal also that Turkish firms listed on the stock exchange are heterogeneous concerning the large standard deviations of the variables especially that of ROE and Depreciation / Total Assets ratio. As for the Taiwanese firms, on the contrary, homogeneity among their indicators is a typical finding, especially for the following variables, Log (Sales), Log (Total Assets), and Total Debts / Total Assets ratio since their standard deviations are very low when compared to their means. ANOVA results on Table 1 indicate that the descriptive statistics of all the indebtedness drivers except ROE and all the debt ratios except Total Debts / Total Assets are significantly different when we compare the Turkish and Taiwanese companies. In Turkey, EBIT, Tangibility, Long-term and Short-term Debts ratios are significantly higher than in Taiwan; whereas in Taiwan, Sales, Total Assets, and Depreciation ratio are significantly higher than in Turkey. Thereby, we can infer that the capital structure of the Turkish firms in general is not only less healthy and significantly different than the Taiwanese firms but also heterogeneous within each other. Regression analyses Relationship models to test the effects of the indebtedness drivers on the debt ratios are tested using data from both Turkish and Taiwanese stock markets by six regression analyses, where the independent variables -EBIT, ROE, Log (Sales), Log (Total Assets), fixed assets-total assets ratio, depreciation-total assets ratio-are the same, and the dependent variables are different, namely Total Debts / Total Assets, Long-term Debts / Total Assets, Short-term Debts / Total Assets. Table 3 shows the results of the first two regression analyses about the effects of the independent variables on the Total Debts / Total Assets ratio on both markets. The regression analysis for the drivers of the Total Debts / Total Assets ratio on the Taiwanese Table 6 summarizes the findings of the regression analyses. It is shown that the model of relations between the debt ratios and its proposed drivers works in general in the Taiwanese firms, except for the ROE -Long-term Debts / Total Assets ratio and Sales -Short-term Debts / Total Assets ratio relations. As for the Turkish firms, it is found that Sales increase all the debt ratios, and that only ROE, Sales, and Tangibility increase Long-term Debts / Total Assets ratio. Thus, in general, the proposed model does not work for the Turkish data. Results and discussion Our empirical tests on the financial indicators of the firms indexed in the Taiwanese and Turkish stock markets reveal some significant variances between these two countries' firms concerning their financial indicators. Firstly, while Taiwanese firms, representing the newly developed country context, are more homogenous within each other especially concerning for their total sales, assets, and debts, Turkish firms, representing the developing country context, are rather heterogeneous especially concerning for their ROE and depreciations. 
Secondly, Turkish firms' EBIT, tangibility, and both long and short term debt ratios are significantly higher than Taiwanese firms; while Taiwanese firms' Sales, Total Assets, and Depreciation ratio are significantly higher. This implies that the relative size of indebtedness and profitability before interest payments is higher in Turkish firms, while the relative size of cash and liquid sources, total sales, total assets, and accumulated depreciations is larger in Taiwanese firms. Beside these financial variances, significant differences also exist concerning the test results of the hypothesized relations. Our model of the drivers of indebtedness that we develop based on the original model of Singh and Nejadmalayeri (2004) is generally confirmed in the newly developed country context -but not in the developing country context. Especially, all the six hypothesized drivers of the total debt ratios prove to be effective in the Taiwanese firms. However, a few exceptions of non-confirmed relations still exist for long and short term debt ratios. For instance, ROE -as one of the negative drivers of long-term debts, and Sales and Tangibility -as for the drivers of short-term debts-are found to be ineffective. The results of the tests on the Turkish firms, on the other hand, indicate that only a few hypothesized relationships are confirmed. For instance, Sales, ROE and depreciations are effective on the long-term debts, and again Sales on both total and short-term debts. This implies that the concept of sales is the most outstanding -if not the only one-driver of indebtedness for our study's developing country, Turkey; whereas in Taiwan all the six drivers are effective, but especially the size of the assets, depreciations, and earnings are significantly influential on all of the three types of debt ratios. These results can be interpreted as the existence of a clear distinction between developing and developed nations' firms considering the health of their financial structures, debt ratios, and their drivers. In brief, newly developed countries' firms are much healthier in financial concerns, and the model works much better in this context. Provision of plausible explanations for the non-confirmed hypotheses in the developed country context needs further consideration. As for the long-term debt ratios of the Taiwanese firms, the ratio of profits to the equity (ROE) is found to be positively effective -contradictory to the negatively hypothesized relation, while it is negatively effective on both short and total debt ratios. Thus, the time horizon of the debts intervenes in the utilization of the leverage, and turns this still significant effect from a negative sign to a positive one. In other words, the expectation of a relationship between increasing profits -contributing to the formation of an internal cash reserve alternative to indebtment-and decreasing need for debts is not confirmed for long-term debts, and just the opposite is found. We can explain this positive effect which is similar to that of sales or assets, and dissimilar to that of EBIT, by assuming that the expectation both in the minds of the borrowers and lenders that higher ROE will increase the accumulation of cash reserves, increases also the courage, ability, and credibility to borrow more longterm debts, but it still decreases the need for short-term debts. 
Moreover, the provision of long-term debts in developed nations is much more abundant and convenient when compared not only to the provision of short-term debts, but also to the conditions in the developing economies. Therefore, we can deduce that in newly developed and still stably growing financial systems, either profitable company decision makers are confident for their profitability to grow continuously in the long run to repay their increased long-term debts, or the lenders located in these markets are richer and more able to provide long-term debts with convenient conditions, or both. Another contradictory finding related to time horizon is the negative effect of tangibility on short-term debts. The logic behind the hypothesis that purports a positive relation between tangible fixed assets and the ability to collect long-term debts seems to be more related to the need for long-term investments and to existence of some valuable fixed assets for collateralization. But, if this long term external support for investments is already provided in a developed financial system, the need for short term and less convenient debts should be decreased. Moreover, in those richer firms both the nominal value of internal cash reserves and tangible fixed assets may be large at the same time. Thus, considering both the strategies of lenders and borrowers in the advanced economies, any further increase in the tangibility should not be leading necessarily to an increase in the short run need for external funds. Another finding again in the Taiwanese context, not-supporting the related indebtedness hypothesis is the ineffectiveness of sales -which is indeed positively effective on long-term and total debt ratios-on the short-term debts. A plausible explanation for this finding may be the existence of a time lag between the cash inflow to be provided by sales and the need for short-term cash outflow to pay the short term debts. Moreover, in the short run, the companies of the developed countries which have been already found to be much stronger in the size of sales and assets generally do neither rely on the short run debts to produce and sell the fluctuating orders of the customers, nor rely on the sales to pay their short term debts. Rather, they make long term mass production and sales plans not based on the daily and uncertain orders coming from the customers. Thus their need and/or ability to secure short term debts is not related to their size of total sales; as the sales and / or orders for sales increase, the need for funds to spend more for the operations and investments may be easily found from other sources i.e. already reserved internal cash sources or long-term external sources. Briefly, the test results for the developed country context show that the model works best for the explanation of the drivers of the total and long-term debt ratios. For the short-term indebtedness, although our tests confirm only four of the six proposed drivers further research may discover some other drivers. Provision of some plausible explanations for the non-confirmed hypotheses in the developing country context seems to be much more difficult since only the size of the sales affect all the three types of debt ratios. The amount of sales is also the sole driver -among the six drivers that we test-that affect significantly total and short-term debts, In addition to sales, only ROE and depreciations affect the long-term debts ratio positively as hypothesized. 
However, the size of total assets exerts a negative effect on the long-term debt ratio, contrary to the related hypothesis, and fixed assets and EBIT are not related to any of the debt ratios. These findings indicate that decision makers in the developing country context rely mostly on debt financing when their sales increase; when their assets, and accordingly their equity, are larger, they instead prefer equity financing over long-term borrowing, probably because convenient long-term borrowing costs and conditions are not available. In this respect, ROE, which contributes to retained earnings, and accumulated depreciation together create an internal fund that substitutes for long-term borrowing and reinforces the equity-financing approach. Further research implications Limitations of this study may open new avenues for further work on similar research questions. We sought to draw specific conclusions from the comparison of newly developed and developing country contexts; however, owing to the difficulty of accessing relevant financial data, we analyzed only one country representing each context, Turkey for the developing and Taiwan for the newly developed country, for which we could conveniently collect and analyze data. Future studies may increase the number of financial markets in order to generalize the results. Another line of research could collect data to retest indebtedness models in the crisis periods that companies have faced frequently during the globalization process since the 1980s. Finally, other financial indicators may be added to the model in order to uncover additional significant drivers of indebtedness, especially in the developing country context. Conclusions This study develops and tests an original model of the drivers of indebtedness, namely EBIT, ROE, sales, total assets, the fixed assets-to-total assets ratio, and the depreciation-to-total assets ratio, in the stock markets of two different countries: Turkey, a developing country, and Taiwan, a newly developed country. It reveals great differences between Turkish and Taiwanese firms in their capital structure and in the drivers' impacts on their debt ratios. The size of sales is the outstanding factor that leads firms to borrow more in the developing country context. In contrast, sales, profitability, asset size, and depreciation encourage firms in the same context to rely on equity financing, but only in the long run. Briefly, we can conclude that the proposed model, built on the recent literature, works better in Taiwan; in Turkey it works only to some extent, and only for the long-term debt ratios.
7,418.6
2012-02-21T00:00:00.000
[ "Business", "Economics" ]
Regulation of bi-color fluorescence changes of AIE supramolecular self-assembly gels by the interaction with Al 3+ and energy transfer Supramolecular fluorescent materials have attracted considerable attention in recent years since they endow materials with specific and unique properties. Nevertheless, the utilization of photo-responsive characteristics to modulate their fluorescence emission behaviors and functions is still rarely explored. Here, a facile fabrication strategy for producing dual-emissive materials based on supramolecular gels is proposed. A bi-acylhydrazone supramolecular gelator BD was designed and synthesized by a Schiff base reaction. Interestingly, the gelator BD could self-assemble into a stable supramolecular gel BDG with strong aggregation-induced emission (AIE) in DMF-H 2 O binary solutions via π-π stacking interactions. On the one hand, BDG could selectively recognize Al 3+ in the gel state: upon addition of Al 3+ , the AIE emission of BDG shows an obvious blue shift (85 nm, from yellow-green to sky-blue). On the other hand, artificial light-harvesting systems were successfully fabricated in the gel environment based on this supramolecular strategy. In these systems, efficient energy transfer occurs between the BD assembly and the loaded acceptors; for instance, the transition from yellow-green to red emission could be accomplished in the BDG/SR 101 system. On this basis, the manipulation of bi-color fluorescence emission has been realized through the interaction with Al 3+ and energy transfer. Introduction The phenomenon and concept of aggregation-induced emission (AIE) of organic compounds, proposed by Tang and co-workers in 2001, 1 has attracted considerable research interest and shows extensive application prospects in many fields, including chemical sensing, 2 fluorescent sensors, 3 bioimaging 4 and so on. In the AIE process, AIEgens exhibit strong fluorescence emission in the aggregated state, 5,6 in contrast to the aggregation-caused quenching (ACQ) effect. Supramolecular materials, 7 in which the component molecules are held together through non-covalent bonds, have also attracted great interest. In these systems, AIE contributes much to the academic interest in innovative practical applications. Supramolecular gels are constructed by a single noncovalent interaction or by the combination of multiple noncovalent interactions. 8 By comparison, the dynamic and reversible nature of multiple noncovalent interactions endows supramolecular gels with excellent responses to various external stimuli, such as heat, light, pH, metal cations and so forth. 8 Meanwhile, a gel is a soft substance between the liquid and solid states, in which a large amount of solvent molecules is fixed or wrapped by the gelators, so that the solvent can also contribute to the properties. These characteristics provide broad scope for the fabrication of smart materials and devices. Recognition and sensing of metal ions have become the focus of considerable attention in the biological, chemical, materials and environmental fields. [9][10] Aluminum is the third most abundant element (after oxygen and silicon) in the earth's crust, 11,12 accounting for 8.3% of total mineral components. High aluminum intake can damage the central nervous system and is implicated in Alzheimer's disease, Parkinson's disease, bone softening, chronic renal failure and smoking-related diseases.
13 Accordingly, the development of convenient and efficient methods for testing Al 3+ is of great significance for environmental protection and human health, and developing a fluorescent chemosensor for the selective recognition and monitoring of Al 3+ in the environment or in living cells 14 is therefore essential. Photosynthesis is vital to the survival of organisms; in it, a large number of closely packed antenna pigments (ca. 200) around the reaction center carry out the transfer and accumulation of solar energy. [15][16][17][18] Several scaffolds have achieved extraordinary results in mimicking the natural light-harvesting process by realizing efficient energy transfer from donors to acceptors through a Förster resonance energy transfer (FRET) process, 19,20 such as protein assemblies, 21 dendrimers, 22 metal complex polymers, and porphyrin assemblies. These artificial light-harvesting systems and materials are of significant importance for practical applications, yet they have received little emphasis in the gel state to date. 23 Fluorophores with aggregation-induced emission properties are good candidates for the construction of fluorescent supramolecular systems. However, supramolecular gels with tunable emission have rarely been reported, although some progress has recently been made on the construction of discrete fluorescent supramolecular assemblies. 24 Traditionally, more than one kind of fluorophore must be incorporated to construct such gels, which increases the complexity of the designed systems. Besides this method, the emission behavior can also be tuned by introducing interactions with ions, host-guest interactions or other forms of interaction, [9][10]23 resulting in a change in the emission wavelength. Herein, we report a bi-acylhydrazone compound that exhibits competitive guest-stimuli responsiveness in the gel state together with AIE properties. The stable organogel (BDG) shows a selective and sensitive stimuli-response toward Al 3+ , whereas BD in solution could not detect Al 3+ in the fluorescence analysis experiments. Moreover, efficient light-harvesting systems (LHSs) are successfully fabricated using the organogel BDG as an energy donor and a series of dyes, such as sulforhodamine 101, acridine red, rhodamine B and rhodamine 6G, as energy acceptors. Additionally, the fluorescence color and emission spectrum of BD can be effectively tuned by metal ion coordination or by fabricating light-harvesting systems. To the best of our knowledge, this is the first report that connects self-assembly, AIE activity, light-harvesting and ion recognition through a multifunctional bi-acylhydrazone derivative (Scheme 1). Scheme 1. Schematic representation for bi-color fluorescence changes of a bi-acylhydrazone compound Materials and general methods 2-(2-Hexyl-1H-benzimidazol-1-yl)acetohydrazide, terephthalaldehyde, organic solvents and metal salts were commercially available and were used without further purification. Deionized water was used throughout the experiments. Steady-state luminescence measurements were performed with a Shimadzu RF-5301PC spectrometer and a spectrofluorophotometer (F-4500, Japan). Ultraviolet-visible (UV-vis) spectra were recorded on a Shimadzu UV-1750 spectrometer. Infrared spectra were recorded on a Thermo Scientific Nicolet iS5 FT-IR spectrophotometer. Fluorescence optical micrographs (FOM) of the samples were obtained with an Olympus IX 71 microscope.
The 1 H and 13 C NMR spectra were recorded on a Bruker 400 MHz spectrometer; the internal standard was TMS and the solvent was dimethylsulfoxide (DMSO-d 6 ). Time-resolved photoluminescence decay measurements were carried out using a time-correlated single-photon counting (TCSPC) spectrometer (Edinburgh, FLS 980). Mass spectra were recorded on a Bruker solariX 70 FT-MS mass spectrometer equipped with an ESI interface and an ion trap analyzer. The morphologies of the as-synthesized samples were characterized with a SM-74190UEC SEM using an accelerating voltage of 10 kV. The quantum yield was measured on an FLS980 fluorescence spectrometer. X-ray diffraction patterns (XRD) were determined with an X'Pert PRO diffractometer using Cu-K α radiation over the 2θ range of 5-90°. Formation of the gel BDG First, to examine the assembly ability and gelation properties of the gelator BD, we dispersed BD at a proportion of 5% in a variety of polar and non-polar organic solvents and mixed solvents (organic solvent-H 2 O) and heated until fully dissolved. After cooling to room temperature, we found that BD could self-assemble into a stable gel only in DMF, DMF-H 2 O binary solution, DMSO and DMSO-H 2 O binary solution. The successful preparation of the gel BDG was verified via an inverted-tube ("stable to inversion") method and the results are summarized in Table S1. As a general procedure, the compound BD (0.015 mmol) was completely dissolved in DMF (0.4 mL), then H 2 O (1.6 mL) was added, the hot mixture was mixed until completely dissolved, and finally the mixture was cooled to ambient temperature. As a vital parameter of supramolecular gels, we investigated the effect of different water contents on the assembly ability of the gel (Fig. S4), which played an important role in choosing the solvent composition of the BD gel (V DMF : V water = 1 : 4, Fig. S5). In the DMF-H 2 O binary system, the BD gel cannot be obtained when the water content is less than 67.5%. We also conducted partial tests for the DMSO-H 2 O binary system (Table S1, Fig. S6); overall, the DMF-H 2 O (1 : 4, v/v) binary system is optimal for the formation of the gel BDG. As shown in Fig. 1a, the morphology of BDG was further investigated by scanning electron microscopy (SEM), which shows a homogeneous, uniform state. The large fibres exhibit some fine structure and appear to be multi-layered. 3D entangled network structures comprised of fibers with high aspect ratios were obtained, revealing well-ordered molecular packing. The fibres, lacking twisting and regular shape, have widths of 200-500 nm and lengths of tens of micrometers. As the FOM (fluorescent optical microscopy) images show, the morphology of BDG consists of densely aggregated bundles of fibres (Fig. 1b). Interestingly, a distinct yellow-green emission is observed from BDG. AIE behaviours As shown in Fig. 2, the solution of BD showed negligible fluorescence, whereas the emission at λ = 520 nm emerged rapidly and finally reached a steady state as the temperature of the heated DMF/water solution dropped below the T gel of BDG (T < T gel ), indicating that the fluorescence of the supramolecular gel BDG arises from aggregation-induced emission (AIE), accompanied by yellow-green fluorescence. To show that this was not a purely thermal effect, we performed another experiment as supplementary proof of the AIE properties of the BD gel (Fig. S8).
From this experiment we can see that BD powder (25 °C) emits yellow-green light under UV illumination, consistent with the description in Fig. 2, whereas no such emission is observed for BD powder (25 °C) under daylight or for BD in DMF (1 × 10 -4 M, 25 °C) under UV light. This means that the molecule BD displays the interesting aggregation-induced emission phenomenon. BD is insoluble in water, and the fluorescence of BD in the aggregated state was also studied in DMF/water mixtures with different water fractions to investigate its solvent-dependent aggregation behavior (Fig. S9). The emission peak of BD (3 × 10 -4 M) at 430 nm was very weak in DMF and remained almost constant until the water fraction reached 30%. When the water volume reached 40%, the fluorescence intensity of BD showed a remarkable enhancement caused by the intriguing AIE effect owing to the formation of aggregates. Meanwhile, it exhibited an apparent red shift (from λ = 430 nm to 520 nm) with increasing water content. At about 70% water, the maximal emission intensity was achieved, almost 7-fold greater than that in dilute solution. In DMSO/water mixtures with different water fractions, BD shows a similar phenomenon (Fig. S10). Interaction with Al 3+ The influence of metal ions on BDG in mixed solutions was investigated (Fig. S11). The addition and diffusion of 10.0 equiv. of various metal ions (Na + , K + , Mn 2+ , Ni 2+ , Zn 2+ , Pb 2+ , Cu 2+ , Cr 3+ , Fe 3+ , Hg 2+ , Cd 2+ , Co 2+ , Mg 2+ , and Al 3+ , added as solutions of their nitrate salts) to BD generated the corresponding metallogels (such as BDG-Al). Fig. 3: Fluorescence spectra (λ ex = 372 nm) of BDG (0.5%, V DMF : V water = 1 : 4) before and after addition of Al 3+ (10.0 equiv.); inset: photographs showing the fluorescence change of BDG and BDG-Al 3+ under illumination at 365 nm. In the fluorescence spectrum, upon the addition of 10.0 equiv. of Al 3+ to BDG during gel formation, the emission of BDG showed an obvious blue shift (Fig. 3), indicating that Al 3+ can be clearly detected using BDG. Considering this interesting blue shift, the effects of various concentrations of Al 3+ on the fluorescence spectrum of BDG were investigated in the gel state (Fig. S12). When the concentration of Al 3+ is increased further, a blue shift of 85 nm emerges while the original peak disappears; the fluorescence intensity of BDG at 435 nm then increases gradually with the continuing increase of Al 3+ concentration. The fluorescence selectivity of BDG towards Al 3+ may be attributed to its smaller ionic radius (0.5 Å), which allows a suitable coordination geometry with BD, and to its higher charge density, which makes Al 3+ coordinate strongly to BD. 25 In the presence of the Al 3+ ion, the carbonyl O and the acylhydrazone N can coordinate with the Al 3+ center, which increases the energies of the n-π* transitions relative to the corresponding π-π* transitions; the PET process is thereby interrupted and the fluorescence changes from yellow-green to blue emission. The Job's plot experiments show an obvious peak at a 3 : 7 ratio for BDG-Al 3+ , assignable to the best composition ratio of BDG-Al 3+ of 1 : 2 (Fig. 4a).
As an illustration of the performance in identifying Al 3+ , Fig. 4b shows an investigation in which various metal cations (including Na + , K + , Mn 2+ , Ni 2+ , Zn 2+ , Pb 2+ , Cu 2+ , Cr 3+ , Fe 3+ , Hg 2+ , Cd 2+ , Co 2+ , Mg 2+ and Al 3+ ) were placed onto dot arrays of the BD gel; only Al 3+ gave BDG a bright sky-blue color under UV light at 365 nm. In addition, the response mechanism of BDG towards Al 3+ was further investigated by 1 H NMR titration, IR, and XRD. In the 1 H NMR titration of BD with Al 3+ (Fig. 5), the N-H (H 1 ) signals shift downfield with increasing Al 3+ concentration. Upon addition of an appropriate amount of Al 3+ , the weakening of the N-H (H 1 ) signal at δ 11.9 ppm indicates that deprotonation occurs. Meanwhile, the hydrogen bond existing in the acylhydrazone group is destroyed during the coordination of Al 3+ with the carbonyl group of BD. Additionally, at low concentration of BD, the =CH (H 2 ) signal of BD appears and shifts downfield with increasing Al 3+ concentration. A possible sensing mechanism based on deprotonation is therefore proposed in Fig. S13. The XRD patterns of free BDG and BDG-Al (Fig. S14) showed d-spacings of 3.52 Å and 11.45 Å at 2θ = 25.23° and 7.71°, supporting the presence of π-π stacking and hydrophobic interactions in BD. Meanwhile, d-spacings of 3.51 Å and 3.80 Å at 2θ = 25.37° and 23.34° suggested that the π-π stacking remains in the metallogel BDG-Al. FT-IR spectra of the powder gelator BD and the xerogel BDG-Al were measured to better understand the coordination between Al 3+ and the gelator BD (Fig. S15). The bands at 1695 and 1596 cm -1 , corresponding to ν(C=O) and ν(C=N), are present in the IR spectrum of the gelator BD. However, with the addition of 2.0 equiv. of Al 3+ and the formation of the Al 3+ -coordinated metallogel BDG-Al, the ν(C=N) band shifts to lower wavenumber (1562 cm -1 ) owing to coordination with the metal centre. Light-harvesting Due to the unique AIE effect, compound BD in the aggregated form exhibits broad emission at 520 nm, which enables BD to act as a remarkable donor in the gel. The potential for fluorescence resonance energy transfer (FRET) between the AIE gel (BDG) and acceptors was therefore investigated. Sulforhodamine 101 (SR 101), acridine red, rhodamine 6G (Rh 6G) and rhodamine B (Rh B) were selected as energy acceptors, since they show strong absorption in the visible region that overlaps with the emission from BDG. As shown in Fig. 6a and Fig. S16a, S16c and S16e, the absorption bands of these dye molecules overlap well with the fluorescence emission of the BD assembly. On the other hand, a key parameter for an efficient energy transfer process is spatially well-organized chromophores, which can be realized between the BD assembly and the dyes. As shown in Figure 6b, with the gradual addition of the dye SR 101, the fluorescence intensity of BDG decreased significantly, while the fluorescence emission band of SR 101 appeared and increased when excited at 372 nm, accompanied by a change of emission color from green/yellow to red under a UV lamp. The fluorescence quantum yield of the BDG/SR 101 system was estimated to be 13.28% (Fig. S17a). According to the fluorescence titration spectra, the energy transfer efficiency was calculated to be 80% (Fig. S18a) at a donor/acceptor ratio of 200:1. Upon gradual addition of acridine red, Rh 6G and Rh B as acceptors to the BD assembly, the emission peak of BDG began to decay while the emission intensity of the acceptors kept increasing (Fig. S16b, S16d and S16f).
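The energy-transfer efficiencies quoted from the titration spectra can be estimated from the quenching of the donor emission. A minimal sketch of that calculation is given below, using the common donor-quenching estimate Φ ET = 1 - I DA /I D (donor intensity with and without acceptor); the numerical intensities are illustrative placeholders, not the measured spectra.

```python
# Minimal sketch: donor-quenching estimate of FRET efficiency,
# phi_ET = 1 - I_DA / I_D, where I_D is the donor (BDG) emission intensity
# alone and I_DA the donor intensity in the presence of the acceptor.
# The intensities below are placeholders, not the measured data.
def energy_transfer_efficiency(i_donor_alone: float, i_donor_with_acceptor: float) -> float:
    """Return the FRET efficiency estimated from donor quenching."""
    if i_donor_alone <= 0:
        raise ValueError("donor intensity must be positive")
    return 1.0 - i_donor_with_acceptor / i_donor_alone

# Example with illustrative intensities at the donor emission maximum (~520 nm):
i_d = 1000.0      # BDG alone
i_da = 200.0      # BDG after adding the acceptor dye
print(f"phi_ET = {energy_transfer_efficiency(i_d, i_da):.2f}")   # -> 0.80
```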
According to the fluorescence titration spectra, the energy transfer efficiencies (Φ ET ) with acridine red, Rh 6G and Rh B as acceptors are calculated to be 98%, 99%, and 98%, respectively (Fig. S18b, S18c and S18d). The fluorescence quantum yields of the acridine red, Rh 6G and Rh B systems were estimated to be 14.99%, 11.88% and 7.51% (Fig. S17b, S17c and S17d). Furthermore, to clearly show the energy transfer (ET) from BDG to the acceptors during the gelation process, we measured the fluorescence spectra of the energy transfer systems during gelation (Fig. S19). In view of the rapid gelation of BD, we performed this experiment at lower concentrations. When SR 101 was added to BDG (molar ratio of BDG : SR 101 of 3000 : 1) in DMF/H 2 O solution, the fluorescence spectrum of BDG/SR 101 showed a continuous, significant increase at 610 nm during the gelation process (Fig. S19a). In addition, compared with the emission spectrum of BDG at the same concentration (Fig. S20), the fluorescence intensity of BDG/SR 101 at 515 nm after 10 minutes is only half that of BDG. Finally, the morphologies of BDG/SR 101, BDG/acridine red, BDG/Rh 6G and BDG/Rh B were investigated by scanning electron microscopy (SEM) (Fig. S21). The SEM images of the light-harvesting systems show numerous micron-scale rod-like structures. The transformation from network structures into rod-like structures indicates that the dye molecules interact with BDG and alter the previous intramolecular interactions, so that rod-like supramolecular structures are formed. Subsequently, fluorescent optical microscopy (FOM) images further reflected the morphology and luminescence of the light-harvesting systems (Fig. S22). Conclusion In conclusion, we have designed a novel organogelator BD that self-assembles into the organogel BDG with strong AIE. The supramolecular gel BDG can serve both as a selective fluorescence sensing system for Al 3+ and as a light-harvesting energy donor for several kinds of dyes through a FRET mechanism. Notably, the supramolecular coordination complexes BDG-Al 3+ formed by metal-coordination-driven self-assembly shift their emission from yellow-green to sky-blue as the concentration of Al 3+ increases. Fluorescence emission tuning from yellow-green to red was also obtained via host-guest interactions in the artificial light-harvesting systems. This simple bi-acylhydrazone provides an example of a multifunctional light-emitting material combining AIE activity, light-harvesting and sensing properties. Conflicts of interest There are no conflicts to declare.
4,439.4
2021-08-14T00:00:00.000
[ "Materials Science", "Chemistry" ]
High-resolution surface faulting from the 1983 Idaho Lost River Fault Mw 6.9 earthquake and previous events We present high-resolution mapping and surface faulting measurements along the Lost River fault (Idaho-USA), a normal fault activated in the 1983 (Mw 6.9) earthquake. The earthquake ruptured ~35 km of the fault with a maximum throw of ~3 m. From new 5 to 30 cm-pixel resolution topography collected by an Unmanned Aerial Vehicle, we produce the most comprehensive dataset of systematically measured vertical separations from ~37 km of fault length activated by the 1983 and prehistoric earthquakes. We provide Digital Elevation Models, orthophotographs, and three tables of: (i) 757 surface rupture traces, (ii) 1295 serial topographic profiles spaced 25 m apart that indicate rupture zone width and (iii) 2053 vertical separation measurements, each with additional textual and numerical fields. Our novel dataset supports advancing scientific knowledge about this fault system, refining scaling laws of intra-continental faults, comparing to other earthquakes to better understand faulting processes, and contributing to global probabilistic hazard approaches. Our methodology can be applied to other fault zones with high-resolution topographic data. Background & Summary In the past 40 years, numerous moderate-to-large intra-continental extensional earthquakes (M w 6-7) have generated complex surface ruptures along primary and secondary synthetic and antithetic splay faults. In-depth studies of these systems contribute to understanding earthquake recurrence rates, surface rupture processes, fault displacement hazard, and the tectonic significance of these fault systems at late-Quaternary timescales. In 1983, the Borah Peak earthquake (M w 6.9, hereinafter referred to as 1983Eq), one of the largest and most recent normal-faulting earthquakes in the United States, ruptured ~35 km of the ~130-km-long Lost River Fault (LRF) in southeastern Idaho (Fig. 1). The LRF is in the northernmost portion of the Basin and Range Province 1 , strikes ~N25°W and dips ~75°SW. The LRF and the 1983Eq have been the focus of seminal investigations. Multiple studies constrained the fault geometry at depth, the seismic sequence, and tectonic strain from shallow seismic lines, seismological data and GPS velocities [2][3][4][5][6][7][8] , highlighting the nucleation of the rupture at a depth of ~16 km at the southern tip of the activated fault ( Fig. 1) with subsequent northwestward propagation. Geodetic data suggested a planar high-angle source fault [9][10][11] . Some studies characterized the surface and depth deformation pattern dividing the fault with boundaries and complexities in six ~SW-dipping active normal segments: Challis, Warm Springs, Thousand Springs, Mackay, Pass Creek, and Arco [12][13][14][15][16] . The Thousand Springs and the southern Warm Springs segments were activated in 1983 with a normal-oblique rupture mechanism (Fig. 1). In particular, Crone et al. 13 , mapped the surface ruptures over the ~37 km ruptured fault and measured the vertical (Supplementary Figure 1) and the strike-slip components, highlighting a ~17% left-lateral component of the total slip. Others constrained the timing of multiple prehistoric surface faulting events [17][18][19][20][21][22] from Quaternary geology, paleo-seismological trenching and radionuclide dating. DuRoss et al. 
23 reexamined the surface deformation produced by the 1983Eq, showing that structural-geological complexities present along the fault guided the coseismic deformation pattern along its northern 16 km, and provided new mapping and vertical separation measurements (Supplementary Figure 1). High-resolution surface deformation datasets from normal faults are limited to a few recent earthquakes [24][25][26][27][28] . Baize et al. 29 unify this datatype from literature studies in a consistent database. Figure 1 (legend): extent of the digital surface models produced for this study from low-altitude UAV aerial imagery; Lost River Fault and Lone Pine Fault traces (from the USGS and Idaho Geological Survey; red if activated by the 1983 earthquake); extent of the digital surface models from Bunds et al. 31,32 (areas 3 and 5); circled letters (a-m) correspond to the photographs in Fig. 2; the inset map shows the location of the LRF in the Basin and Range extensional intra-continental tectonic province of the western USA; the 1983 Borah Peak main shock focal mechanism is from Doser and Smith 3 . Our objective was to collect and systematically analyze vertical separations (VS) along the LRF using newly acquired high-resolution topography. We define VS as the vertical distance between the intersections of a vertical plane at the fault with lines projected along the hanging wall (HW) and footwall (FW) surfaces, assumed to have been continuous prior to their displacement. In spring 2019, we imaged ~21 km along-strike of the LRF (Fig. 1) using a Phantom 4-Pro Unmanned Aerial Vehicle (UAV) flying at 70-120 m above ground level. Images were geolocated with on-board GNSS (Global Navigation Satellite System) and differential dGNSS Ground Control Points. We processed the images in Agisoft Metashape photogrammetric modeling software (version 1.6.0) to produce high-resolution Digital Elevation Models and orthophotos 30 . We also used data from Bunds et al. 31,32 (~16 km along-strike) to create hillshades. From the above datasets, we mapped the observable 1983 coseismic surface ruptures and Quaternary fault scarps (hereinafter referred to as CoRs and Qfs, respectively). For quality control, we assigned each trace an Outcrop Quality Ranking (OQR) on a 1-to-4 scale, based on the faulting evidence in the high-resolution image (1 is best). We created an interactive MATLAB (www.mathworks.com) algorithm that we used to make 2053 VS measurements along 1295 fault-perpendicular topographic profiles 33 with a 25 m spacing. We assigned a Measure Quality Ranking (MQR) to each VS measurement, considering the vegetation, the angle between the HW and FW, and the fault position. Two geoscientists independently analyzed 10% of the profiles to assess subjectivity. We provide the mapped traces as shapefiles and three tables that provide geometric information on the CoRs and Qfs of the areas shown in Fig. 1, VS measurements, methodology, topographic profiles, and quality parameters, stored in Pangaea 34 . This database provides new high-resolution information on recent ground-rupturing earthquakes along the LRF, a major active extensional fault. Our data are critical for informing paleoseismic, tectonic geomorphology and structural geology investigations of the LRF, as well as for characterizing probabilistic fault displacement hazard 29 , the effect of geometric discontinuities on rupture extent, and slip-length scaling in large earthquakes [35][36][37][38] .
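The hillshades used for mapping are derived directly from the DEMs. A minimal numpy sketch of such a derivation is given below; the grid spacing and illumination angles are illustrative assumptions and not the settings used for the published maps, which were produced with Agisoft Metashape and ArcMap.

```python
# Minimal sketch: slope and hillshade from a DEM array with numpy.
# 'dem' is a 2-D elevation array and 'cell' its grid spacing in metres;
# the illumination azimuth/altitude below are conventional defaults,
# not the settings used for the published maps.
import numpy as np

def slope_and_hillshade(dem, cell=0.3, azimuth_deg=315.0, altitude_deg=45.0):
    dz_dy, dz_dx = np.gradient(dem, cell)             # elevation gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))         # slope angle (radians)
    aspect = np.arctan2(-dz_dx, dz_dy)                # approximate downslope direction
    az = np.radians(360.0 - azimuth_deg + 90.0)       # light azimuth, math convention
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.degrees(slope), np.clip(shaded, 0, 1)

# Example on a synthetic scarp-like surface:
y, x = np.mgrid[0:200, 0:200]
dem = 1.5 / (1.0 + np.exp(-(x - 100) / 5.0))          # ~1.5 m synthetic step
slope_deg, hillshade = slope_and_hillshade(dem, cell=0.3)
```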
Our methodology advances systematic approaches for measuring fault scarp profiles from the growing archive of high-resolution topography. Images have geolocation information from the onboard GNSS with ~10 m accuracy. We reduced this error along the Thousand Springs and Mackay Segments (areas 8 and 10) using ground control points (GCPs) measured with a dGNSS. We placed the ~1-m-square black and white vinyl GCP targets on both sides of the fault. Along the Thousand Springs Segment (~4.35 km 2 of imagery) and the Mackay Segment (~1 km 2 ), we used 17 (~4 GCP/km 2 ) and 12 GCPs, respectively. We measured GCP locations with a GPS1200 base station (Fig. 2m), an RX1200 rover with an INTUICOM antenna, and a Leica AX1202GG tripod. The GCP position accuracy is ~0.02 m in the horizontal and vertical directions for area 10 and ~1.2 m and ~2.8 m, respectively, for area 8. We corrected the station locations using the National Geodetic Survey's Online Positioning User Service (OPUS 46 ; http://www.ngs.noaa.gov/OPUS/) and reprojected positions into WGS84 UTM zone 12 N. Processing of aerial images and mapping. We manually selected the UAV images and eliminated those that were of low quality, blurred, or acquired by mistake (for example, takeoff and landing photos). We processed the selected photographs with Agisoft Metashape image-based photogrammetric modeling software (version 1.6.0) to produce dense point clouds, orthomosaics and digital elevation models (DEMs). Figure 3a shows an example of an orthomosaic and a hillshade produced from a DEM [47][48][49][50][51][52][53] . The initial alignment was run at the highest quality; dense point clouds, mesh, and texture were made with high-quality settings. The DEMs and orthophotos were exported at the default recommended resolution (2-30 cm/pix). The DEMs were then used to build slope maps, hillshade maps, curvature maps and aspect maps in ArcMap (ESRI ArcMap© 10.7) 30 . We also used DEMs and orthomosaics hosted by OpenTopography (https://opentopography.org) produced by Bunds et al. 31,32 (areas 3, 5 and 7 of Fig. 1). The DEMs and orthomosaics were used for mapping in ArcMap© at a fixed scale of 1:400, also taking into consideration the maps produced by previous authors 13,23 . We mapped each clearly visible trace as a single continuous line at the 1:400 scale of our DEMs and orthomosaics; the accuracy of the mapping is therefore reproducible at this scale. Figure 2 presents representative photos of the surface faulting. Figure 3a shows a detail of the map (hillshade and orthomosaic) where we mapped CoRs and Qfs. During the digital mapping, we assigned three attributes to fault traces: an identification number, the type of trace (Principal/Distributed CoR or Qfs) and the dip direction (W-dip or E-dip). We assigned the "Type" attribute to the CoRs on the basis of four parameters: the dip direction, the rupture length, the along-strike continuity and the amount of VS. In areas where only one CoR was visible, we assigned the Principal CoR attribute ex officio. Where there were several parallel CoRs, we assigned the Principal CoR attribute to the synthetic structures with greater continuity, length and/or VS. We assigned the Distributed CoR attribute to all the remaining CoRs and, again ex officio, to all the antithetic CoRs. These attributes were used in the profile analysis described below. An outcrop-quality ranking (OQR) was also assigned to each trace.
The OQR consists of a 1 to 4 ranking (ascending quality; OQR 1 = very high, OQR 4 = very low), assigned based on the evidence of the trace on the high-resolution image (i.e., outcrop quality). Sequential analysis of fault-crossing topographic profiles. The main challenge was to investigate the topography along ~37 km of fault and efficiently measure vertical separation (VS). We developed a MATLAB algorithm that we used to systematically measure VS along 1295 topographic profiles 33 . The ~150 surface offset measurements from Crone et al. 13 document a minor left-lateral slip component of the 1983Eq ruptures. With our methodology we only measured the vertical component of the fault displacement. Following DuRoss et al. 23 , we ignored the ~17% of the moment released as left-lateral slip, considering it to have minimal influence. Our VS measurements can therefore be considered appropriate for future normal-fault surface-rupture processes studies. The inputs were the DEMs and the mapped fault traces. We tiled the DEMs using the "Split Raster" tool from ArcMap© (see the guide provided 33 ). The topographic profiles that were generated from the DEMs have a 25-m spacing and with elevations every 20 cm from a 30 cm moving window. The 25-m spacing ensured that we made at least one VS measurement for almost every CoRs, even for relatively short ones. The 2 m averaging window minimized the impact of the topography. The profiles are orientated perpendicular to the average rupture strike in the individual areas ( Fig. 1). Due to the complex pattern of ruptures characterized by distributed CoRs with variable strike, the profiles are not always perpendicular to the rupture traces. The vertical component (VS) of the displacement is not affected by the variation of the angle of the topographic profile only if there is no slope variation in the along-strike direction. In other cases, it can affect the measurements (discussion in the technical validation section). A graphical interface shows vertical lines along the topographic profile (red and blue for west-and for east-dipping faults, respectively) from the traces mapped in ArcMap©. To measure VS, we marked two points along each of the HW and FW to be used for the respective surface projections. While choosing the four points for the linear surface projections, we considered the small bushes that form the vegetation. While we did not classify vegetation, we selected bare ground points while measuring VS and avoided vegetation easily identifiable on orthomosaics, DEMs and topographic profiles. The possibility to change the lighting direction (to the hillshades made from DEMs) helped in this process. A fifth point associates the measurement with the trace ID. A sixth point indicates the position where the fault intersects the topography. We consider the scarp morphology degradation and accumulation factors to estimate position 18,54-60 , which often corresponds to the steepest part of the scarp face. Figure 3b shows a topographic profile with CoRs and Qfs and the points that we used to build the linear surface projections for the VS. Figure 3c is a photograph of the Double Springs Pass road area showing a natural example of the geometry used to interpret the 1295 topographic profiles. As shown in Figs. 2 and 3, CoRs are distinguishable from Qfs in the DEMs and orthomosaics. To measure VS, we picked four points within a few meters of the CoRs and within tens of meters of the Qfs. 
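The four picked points define the two linear surface projections, and the VS is their vertical offset at the fault. A minimal sketch of this geometry is shown below; the profile coordinates are illustrative only, and the interactive MATLAB tool actually used for the measurements is available from Zenodo 33 .

```python
# Minimal sketch of the vertical-separation (VS) geometry: fit one line through
# the two footwall picks, one line through the two hanging-wall picks, and take
# their vertical offset at the fault position x_fault. Coordinates are in metres
# along the profile (x) and elevation (z); the numbers are illustrative only.
def vertical_separation(fw_pts, hw_pts, x_fault):
    (x1, z1), (x2, z2) = fw_pts
    (x3, z3), (x4, z4) = hw_pts
    m_foot = (z2 - z1) / (x2 - x1)                 # footwall surface slope
    m_hang = (z4 - z3) / (x4 - x3)                 # hanging-wall surface slope
    z_foot_at_fault = z1 + m_foot * (x_fault - x1)
    z_hang_at_fault = z3 + m_hang * (x_fault - x3)
    return z_foot_at_fault - z_hang_at_fault       # VS, positive if the FW is upthrown

fw = [(0.0, 101.2), (8.0, 101.0)]                  # picks on the footwall surface
hw = [(14.0, 99.6), (22.0, 99.4)]                  # picks on the hanging-wall surface
print(f"VS ≈ {vertical_separation(fw, hw, x_fault=11.0):.2f} m")   # -> 1.25 m
```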
As established in the literature 13,23 and from our mapping, the 1983Eq produced a complex pattern of synthetic and antithetic coseismic ruptures, forming grabens and horsts. These structures vary substantially in width along the fault trace. We aimed to distinguish individual synthetic and antithetic CoRs while measuring VS. When this was not possible (for example, when the CoRs were within ~3-4 m of each other) because there was insufficient length for robust linear surface projections, we picked the four points at the first suitable position and summed the VS values; for example, in the case of a graben, we measured the principal rupture from far-field points. The graphical interface closes after the sixth point and reopens immediately, showing the FW and HW linear surface projections and the VS in centimeters. After seeing the projected lines, a seventh point confirms the fault position. Finally, the operator decides whether to keep the measurement or, if there is a mistake, to delete it and redo the interpretation. Following a decision to keep the measurement, the object (CoR or Qfs) and a measure-quality ranking (MQR) are saved in a MATLAB structure file. The MQR has a 1-to-4 value based on three parameters: the presence of vegetation, the angle between the linear surface projections at the HW and FW, and the trace position. An MQR = 1 (high quality) indicates absent or minimal vegetation, a low angle between the linear surface projections (<30°) and a clear trace position. When the ground surface is completely covered with vegetation, the angle between the linear surface projections is high (>30°), or factors such as strong erosion make identifying the trace challenging, we assign MQR = 4 (low quality). In addition, the MATLAB structure file includes the horizontal position of each clicked point and the VS. The graphical output from MATLAB is saved as a MATLAB figure and an .EPS file. The database is compiled from this stored information, which is then exported to .txt files. We subsequently opened these .txt files in Microsoft Excel, where we homogenized and screened them and added other important textual and numerical information not originally saved in MATLAB. We provide the data organized in a simple database that is usable by other researchers. We compiled three tables: (1) traces of mapped CoRs and Qfs, (2) topographic profiles, and (3) measurements acquired on topographic profiles. To make features uniquely identifiable, we assigned a progressive ID to each individual trace, topographic profile and VS measurement. We used the topographic profiles and mapped rupture traces to measure additional fault parameters, including the Rupture Zone Width (RZW). The RZW measures the rupture-to-rupture distance between the two most distant CoRs crossed by the topographic profile. Where a main trace is identified, we also measured the HW- and FW-RZW. RZW measurements could be affected by scarp degradation that hides rupture traces; the effect is likely minimal because only the two most external ruptures are used, the measurement is the rupture-to-rupture distance, and the ruptures are clearly identifiable in the DEMs and orthomosaics. We show an example in map view (Figs. 4a,b) and in section view (Fig. 4c) illustrating the RZW measurement, and an along-strike plot (Fig. 4d) showing the distribution of the HW-, FW-, and Tot-RZWs.
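The RZW bookkeeping just described reduces to simple arithmetic on the along-profile positions where ruptures cross a profile. A minimal sketch follows; the positions and the assumption that the footwall lies on the low-x side of the principal trace are hypothetical.

```python
# Minimal sketch of Rupture Zone Width (RZW) from the along-profile positions
# (in metres) at which coseismic rupture traces intersect one topographic
# profile. Positions and the principal-trace location are hypothetical.
def rupture_zone_widths(rupture_x, principal_x=None):
    """Return (total_rzw, fw_rzw, hw_rzw); FW/HW are None without a principal trace."""
    if len(rupture_x) < 2:
        return 0.0, None, None
    total = max(rupture_x) - min(rupture_x)            # rupture-to-rupture distance
    if principal_x is None:
        return total, None, None
    fw = [x for x in rupture_x if x <= principal_x]    # assumed footwall side of principal trace
    hw = [x for x in rupture_x if x >= principal_x]    # assumed hanging-wall side
    fw_rzw = principal_x - min(fw) if fw else 0.0
    hw_rzw = max(hw) - principal_x if hw else 0.0
    return total, fw_rzw, hw_rzw

# Example: four ruptures crossed by one profile, principal trace at 120 m.
print(rupture_zone_widths([95.0, 120.0, 160.0, 210.0], principal_x=120.0))
# -> (115.0, 25.0, 90.0)
```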
We calculate the VS from the vertical distance between the intersections of a plane at the fault with lines projected along the HW and FW topographic surfaces, assuming that the surfaces were continuous prior to displacement. Consistent with the literature 61 , we calculate the VS instead of the vertical displacement (i.e., the throw according to McCalpin 62 ), described by McCalpin 62 as the "vertical distance between intersections of the fault plane, and planes [lines] formed by the displaced original geomorphic surfaces"; calculating the throw would require knowing the CoR's dip. Using this approach we can make our measurements comparable to those of DuRoss et al. 23 , who used a similar methodology to measure VS along the northern 16 km of the 1983Eq ruptures, and to field-based measurements from Crone et al. 13 63,64 . We show along-strike profiles of the VS measurements acquired along the Warm Springs and Thousand Springs Segments; for each area (Fig. 1) we plotted separately along strike the sum of the VS measured on synthetic CoRs and Qfs as positive values and the sum measured on antithetic CoRs and Qfs as negative values. We also report the locations of the measurements from Crone et al. 13 and from DuRoss et al. 23 and the along-strike profiles made with their data, as well as a correlation plot comparing a subset of the VS measurements from this paper with those of Crone et al. 13 and DuRoss et al. 23 (see Supplementary Figure 2 and its description). Unlike DuRoss et al. 23 , our topographic profile locations have a fixed spacing over the entire extent of the investigated areas. Following Salisbury et al. 63 , the choice not to identify correlative surfaces with the best scarp preservation is likely to decrease subjectivity and the bias of selecting only high-quality features. If subjectivity decreases with this approach, it is also likely that there is a corresponding increase in VS noise. This noise is due to complexities such as vegetation (e.g. bushes and shrubs), surface erosion (e.g. gully erosion), and anthropogenic structures (e.g. irrigation channels, excavations, trenches). To minimize these effects, we acquired the measurements while carefully checking the conditions surrounding the topographic profiles on 3D models, orthomosaics and DEMs. This made it possible to identify, and therefore select, locations on the topography without complexities (creating projection lines on the bare ground and avoiding, for example, bushes or gullies). The 2-m averaging window minimized the impact of the topography. In addition, as described above, an MQR was subjectively assigned to each acquired measurement, and it is low in areas where the geologists observed one or more complexities. Furthermore, we did not acquire VS measurements where the topography was clearly conditioned by anthropogenic structures. In summary, our measurement database is self-contained and well documented, so that other investigators can examine our individual measurements. Data Records We acquired numerical, textual, and graphical datatypes. We have chosen the most appropriate repository for each datatype, whose formats and features we define here. The data record consists of: 1. High-resolution photogrammetric products for the numbered locations in Fig. 1. These were processed from survey campaign photographs using Agisoft Metashape. Metadata are summarized in Table 1. Point clouds are saved in .laz format, and the orthomosaics and Digital Elevation Models are saved in GeoTIFF format.
Datasets were processed and analyzed in the WGS1984 geographic coordinate system with the UTM Zone 12 N projection (EPSG: 32612) and stored in the OpenTopography repository 30 . 2. A shapefile (feature type: polyline) in which each line represents a trace of a CoR or a Qfs mapped from the analysis of the high-resolution images (examples in Figs. 3a and 4a,b). The shapefile keeps in its attribute table: (i) an identification number (called "trace ID") which uniquely identifies the trace and corresponds to the identification number in the first column of Table 2, (ii) the field "Type", a text value that makes the categorization of individual traces according to their characteristics immediate, facilitating the use of the database on ArcMap© platforms, (iii) a field (called "dip") indicating the dip direction (~west- or ~east-dipping) to differentiate the synthetic from the antithetic structures, and (iv) the OQR (described above). The shapefile is stored in the Pangaea repository 34 . 3. A shapefile (feature type: polyline) of the topographic profiles constructed to acquire the VS measurements, stored in the Pangaea repository 34 . 4. 1295 topographic profile figures, saved from the MATLAB analysis in .pdf format and stored in the Pangaea repository 34 . The three dataset tables provided in this work (points 5, 6, and 7 above) were uploaded to the Pangaea repository 34 as .TXT files. We have chosen to repeat some initial framing fields in the three datasets to make each of them self-consistent and to facilitate their use. Each field has a name and a short name and is uniquely coded in the first row. The fields that make up the three datasets are described below. TRACES dataset. Each record listed in this dataset reports the trace location, a summary of the measurements acquired on the trace and its geometric characteristics. An example of the records is shown in Table 2; among the fields, the Area ID (short name: Area) is the number identifying the area in which the trace was mapped (area numbers are reported in Fig. 1 and Table 1). Topographic profiles dataset. This dataset reports topographic information, RZW measurements and the cumulated VS of the CoRs and Qfs traces crossed by each topographic profile. An example of the records is shown in Table 3. (1) Topographic Profile ID (short name: ID): text giving the abbreviation by which the topographic profile is uniquely identified, corresponding to a sequential number from north to south; the Topographic Profile ID is also present in the attribute table of the topographic profile shapefile. Measurements dataset. This dataset reports all measurements with their location, geometric characteristics, VS and related parameters. An example of the records is shown in Table 4; among the fields, the Topographic Profile ID (ID in Table 3) indicates the profile along which the measurement was acquired, and the Trace ID (short name: Tr.ID; Tr.ID in Table 2) indicates the trace on which the measurement was acquired. Data Statistical Properties Further demonstration of the value of the data we present here comes from the following statistical analysis. We mapped a total of 757 traces, including 662 CoRs generated by the 1983Eq and 95 Qfs.
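The three tables are plain text and can be explored with standard tools; the following is a minimal pandas sketch that loads a measurements table and builds the kind of per-profile along-strike sums and summary statistics reported below. The file name and column names (measurements.txt, type, dip, vs_cm, profile_id) are placeholders rather than the actual short names documented in the Pangaea record 34 .

```python
# Minimal sketch: loading the measurements table and building an along-strike
# summary of vertical separation per topographic profile. File and column
# names are placeholders for the fields documented in the Pangaea record.
import pandas as pd

meas = pd.read_csv("measurements.txt", sep="\t")     # tab-separated export (assumed)

cors = meas[meas["type"] == "CoR"].copy()            # 1983 coseismic ruptures only
# Synthetic (~W-dipping) VS counted positive, antithetic (~E-dipping) negative.
cors["signed_vs_cm"] = cors["vs_cm"].where(cors["dip"] == "W", -cors["vs_cm"])

along_strike = (cors.groupby("profile_id")["signed_vs_cm"]
                    .sum()
                    .sort_index())
print(along_strike.describe())
print("median VS of synthetic CoRs [cm]:",
      cors.loc[cors["dip"] == "W", "vs_cm"].median())
```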
All the mapped traces are divided between synthetic and antithetic: 48% and 39% of the total, respectively, for the CoRs (55% and 45% if considering only CoRs), and 9% and 4% of the total for the Qfs (69% and 31% if considering only Qfs) (Fig. 6a). Normalizing the traces by their length, the synthetic and antithetic traces are respectively 49% and 18% of the total for the CoRs and 25% and 8% of the total for the Qfs. Re-dividing these values within the Qfs and CoRs, we obtain a substantial similarity between synthetic (73% and 76%) and antithetic (27% and 24%) structures, suggesting that for similar events the surface coseismic deformation is recurrently partitioned into roughly ¾ on synthetic structures and ¼ on antithetic structures. The total length of the mapped CoRs is ~51 km. Azimuthal information on the traces indicates a directional peak of strike for synthetic structures at N140°-150° and a variable strike for antithetic structures between N300° and N330° (Fig. 6b). We characterized the traces of the CoRs and the VS measurements by dividing them into three categories based on their position with respect to the main trace: 43% of the mapped CoRs represent the main trace (Principal CoRs), while 47% and 9% lie along the HW and the FW, respectively. Similarly, 40% of the VS measurements represent the trace of the principal CoRs, while 50% and 10% lie along the HW and the FW, respectively (Fig. 6c). We characterized the RZW, with widths shown for the HW, FW and total in Fig. 6d. The frequency histogram plot indicates that the FW-RZW averages ~67 m, with a median value of ~70 m and a maximum of 236 m. The HW-RZW averages ~72 m, with a median value of ~47 m and a maximum of 519 m. Following, for example, Boncio et al. 65 , we calculated the Total-RZW by adding the distance between the FW-RZW and the main trace to the distance between the HW-RZW and the main trace (rupture-to-rupture distance); where a main trace has not been identified, the Total-RZW refers to the rupture-to-rupture distance between the furthest surface coseismic ruptures along the same topographic profile. For the Total-RZW we obtain an average of ~98 m and a maximum of 519 m. In general, for all ranges of values on the x-axis, the HW-RZW has a higher frequency than the FW-RZW. The values for which the HW-RZW has the same frequency as the TOT-RZW are representative of areas where we did not map CoRs on the FW. We made 2053 VS measurements. Of these, 1431 are VS measurements of CoRs and 619 are VS measurements of Qfs. The four frequency histogram plots in Figs. 6e and 6f show the frequency distributions of these measurements separated into synthetic and antithetic structures. The VS of the synthetic CoRs is characterized by a sharp peak between 10 and 30 cm and a median value of 54 cm, while the VS of the antithetic CoRs peaks between 10 and 20 cm with a median value of 24 cm (Fig. 6e). The frequency graph of the synthetic Qfs shows a wider distribution of values between 1.5 and 4.5 m, with a peak between 3 and 3.5 m and a median value of ~3 m; the frequency graph of the antithetic Qfs shows a peak at about 50 cm and a median of about 1 m, with a decreasing trend up to 5.5 m. Technical Validation Even with the high quality of the topographic data and the efficiency and consistency of the profile analysis tools, the measurements were still made by humans.
There are sources of both aleatory and epistemic uncertainty in the VS measurements 63 . While the aleatory uncertainty is considered to be irreducible, inherent and due to chance, the epistemic uncertainty is considered to be reducible, subjective and due to lack of knowledge 66,67 . The sources of these uncertainties are manifold. Scarp changes from erosion and deposition after the rupture induce uncertainty in the reconstruction. During VS measurement, decisions regarding the final geometric model of the area of interest are made based on the scientist's confidence in interpreting the topographic profile. The points chosen for the regression lines, for example, despite the possibility of being able to control the preservation status of the outcrop thanks to 3D topographic models, DEMs and orthomosaics, can modify the final result in terms of VS. A further source of epistemic uncertainty is due to the choice of the topographic profiles directions. As stated above, the vertical component of displacement is not affected by the variation of the angle of the topographic profile with respect to the strike of the faults. This statement is theoretically true, but it does not take into account a number of complexities that arise from the landform geometry. For example, the footwall and hanging wall that may have dissimilar slopes or slope-facing directions. In choosing the directions of the 1295 topographic profiles interpreted to generate our dataset, we took into account the average strike of the fault in the different areas but, with serial profiles, it is not possible to consider all the innumerable strike changes both along-strike and on CoRs parallel to each other. This source of uncertainty cannot therefore be considered negligible, although, in most cases, it is minimal. Similarly, the fault locations chosen can also vary the final result. For these reasons, the epistemic uncertainty is considered likely to exceed the aleatory uncertainty. With the assignment of the quality parameters made in this work (described above), and with the calculation of a statistical uncertainty (aleatory), we have tried to constrain the values of our data as much as possible. The VS database with uncertainty measures enables end users to decide whether to use values with high-quality ratings only, for example. As discussed in Salisbury et al. 63 , the difficulty of correctly interpreting the offset of earthquake ruptures may also depend on the natural variability of the slip along-strike. In numerous previous cases, important variations, even greater than 30%, have been documented within a few units of meters or tens of meters 68,69 . Also in this case, the subjectivity of the scientists plays an important role in acquiring the measurement and in establishing its reliability, avoiding the conditioning of the measurements acquired in the immediate vicinity. To assess subjectivity, two geoscientists experienced in fault scarp studies measured VS. After an initial comparison to standardize the basic scientific knowledge and literature on the LRF area, and to decrease the operator biases, as discussed in Gold et al. 70 and in Scharer et al. 71 , and suggested by Salisbury et al. 63 , the two operators interpreted the topographic profiles independently, dividing the profiles to be interpreted with even and odd numbers. 10% of the profiles were randomly chosen to be analyzed twice by both geoscientists. As shown in Fig. 
7f, ~90 of the ~100 repeated measurements overlap within error, for VS values ranging from -50 cm (antithetic) to about 1.5 m (synthetic). Errors on each measurement are calculated by assuming a 50 cm error in CoR position. For quality control, we assigned a quality ranking to each trace during the mapping phase. The ranking corresponds to the evidence of the trace on the hillshade and therefore to the outcrop quality (OQR, described above in the Methods section). Perfectly evident traces were ranked highly (OQR = 1), while poorly evident traces received a low rating (OQR = 4). While measuring VS, we reviewed traces that had been mapped on the hillshade in ArcMap© but were not evident in the topographic profiles; in many cases, low-ranked traces were eliminated. Following this procedure, we improved the quality of the trace and VS datasets and decreased the uncertainty 63 . We assigned two independent uncertainties to the vertical separation measurements. (1) A qualitative rating, manually assigned while interpreting the topographic profiles. We assigned each VS measurement a rating (MQR, described in the Methods section) based on the confidence, accounting for three factors: i) presence of vegetation, ii) angle between the linear surface projections (at the HW and at the FW) and iii) position of the trace. Figure 7e shows the frequency distribution of the VS measurements based on their assigned MQR. For the 2053 VS measurements, we assigned MQR = 1 (high quality) to 470 measurements, MQR = 2 to 780 measurements, MQR = 3 to 523 measurements and MQR = 4 (low quality) to 280 measurements. (2) A quantitative fault VS error (aleatory uncertainty) based on the identified HW surface projection, FW surface projection (see Fig. 3b,c), and fault location. We use a non-weighted linear least-squares inversion to solve for the best-fit line to the elevation measurements along a 2-m-wide swath on both the HW and the FW. Along the FW, the best-fit line is Foot_line = m_foot * x + b_foot (1), where m_foot is the slope, x is the position along the profile, and b_foot is the y-intercept. Along the HW, the best-fit line is Hanging_line = m_hanging * x + b_hanging (2), where m_hanging is the slope and b_hanging is the y-intercept. We perform a coordinate transformation so that the coordinate system origin is at the location of the fault, so that the vertical separation is the difference of the intercepts, VS = b_foot - b_hanging (3). We solve for the uncertainty in the VS (ΔVS) using a propagation of uncertainty, ΔVS = sqrt[(Δb_foot)^2 + (Δb_hanging)^2 + ((m_foot - m_hanging) ΔFx)^2] (4). We found it reasonable to assume an error in the position of the fault, ΔFx, of 25% of the VS rather than a fixed value, ΔFx = VS/4 (5); assuming a fixed error ΔFx would have incorrectly estimated the true VS error. We estimate Δb_foot and Δb_hanging from the covariance matrix, with weights based on the average root-mean-square error of Eqs. 1 and 2. Figures 7a and 7c illustrate the relationship between the VS measurements and the calculated uncertainties; measurements of synthetic and antithetic CoRs are represented with positive and negative values, respectively. Measurement uncertainty generally increases with VS. The CoRs (Fig. 7a) are clustered at small VS and error, while the Qfs (Fig. 7c) are less clustered; CoRs and Qfs show similar best-fit lines. The frequency histogram plot in Fig. 7b indicates that the CoR uncertainties have a sharp peak between 0 and 5 cm, with a median value of 5 cm and a rapidly decreasing distribution with increasing uncertainty. The frequency histogram plot of the Qfs in Fig. 7d shows a peak between 0 and 10 cm and a median value of 25 cm.
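A minimal numpy sketch of this uncertainty estimate is shown below: it fits the two swath lines by least squares, takes VS as the intercept difference at the fault, and propagates the intercept and fault-position uncertainties as in Eqs. (1)-(5). The profile values are illustrative, and the published analysis was performed with the authors' MATLAB tool 33 .

```python
# Minimal sketch: least-squares lines on the footwall and hanging-wall swaths,
# VS at the fault, and a propagated VS uncertainty. Profile values are
# illustrative; the published analysis was done with the authors' MATLAB tool.
import numpy as np

def fit_line(x, z):
    """Return slope, intercept and their 1-sigma uncertainties from np.polyfit."""
    coeffs, cov = np.polyfit(x, z, deg=1, cov=True)
    slope, intercept = coeffs
    d_slope, d_intercept = np.sqrt(np.diag(cov))
    return slope, intercept, d_slope, d_intercept

def vs_with_uncertainty(x_fw, z_fw, x_hw, z_hw, x_fault):
    # Shift the origin to the fault so VS is the difference of intercepts.
    m_f, b_f, _, db_f = fit_line(np.asarray(x_fw) - x_fault, z_fw)
    m_h, b_h, _, db_h = fit_line(np.asarray(x_hw) - x_fault, z_hw)
    vs = b_f - b_h
    d_fx = abs(vs) / 4.0                              # fault-position error, 25% of VS
    d_vs = np.sqrt(db_f**2 + db_h**2 + ((m_f - m_h) * d_fx)**2)
    return vs, d_vs

# Synthetic, gently sloping footwall and hanging-wall swaths with ~1.3 m offset:
x_fw = np.arange(0.0, 8.2, 0.2); z_fw = 101.2 - 0.02 * x_fw + 0.01 * np.random.randn(x_fw.size)
x_hw = np.arange(14.0, 22.2, 0.2); z_hw = 99.9 - 0.02 * x_hw + 0.01 * np.random.randn(x_hw.size)
vs, d_vs = vs_with_uncertainty(x_fw, z_fw, x_hw, z_hw, x_fault=11.0)
print(f"VS = {vs:.2f} ± {d_vs:.2f} m")
```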
Usage Notes An in-depth study of earthquake surface rupture facilitates a better understanding of the controls on rupture processes along the fault zone and over time. This new database contributes towards mitigating earthquake hazard through a better understanding of fault sources and normal-faulting surface rupture characteristics. Our fault traces, VS and all the other information described above can be used in a wide variety of ways in multiple geoscience fields. We provide some key examples below. The 1983 earthquake ruptures, along the Thousand Springs and the southern Warm Springs segments, develop almost entirely in alluvium and colluvium deposits, close to the contact with bedrock 13 . Our data can serve as critical input for scaling relationships for three-dimensional fracturing processes of a fault that cuts both bedrock and soft soils [72][73][74][75][76] . Furthermore, the integration of our database with lithological and geotechnical data, given the large extent of the mapped area and the heterogeneity of the rock types along the Lost River valley, could be used for microzonation studies for areas adjacent to surface rupturing faults [77][78][79][80][81][82] , along the LRF, and as an example in similar contexts. Measurements of Rupture Zone Width and trace classification inform hazard studies on the amplitude of HW and FW surface faulting relative to the principal coseismic surface rupture 65,81,82 . The mapped traces and the VS measurements, integrated with other geometric and kinematic information (such as fault dip and lateral-slip components), indicate the surface slip distribution. Integrating these data with seismological data, seismic lines and well data, researchers can reconstruct the relationship between the deep tectonic structures and their surface manifestation [83][84][85][86] . The generation of profiles of VS along strike using data of the Measurements dataset and Topographic profile dataset can be compared and integrated with prior measurements 13,23 to gain additional scientific knowledge of the LRF, the seismic behavior of the fault segments, earthquake recurrence times, strain rates and propagation of displacements along strike. Further, these results can be compared with global data 87,88 . The VS data, the quality parameters (OQR and MQR), and the uncertainties can inform future study of subjectivity in the acquisition of similar types of data 63,89,90 and the comparison between data collected with our methodology and field-collected data, whether collected shortly after the earthquake or several decades later. The mapped Qfs and the VS data along segments activated and not activated by the 1983Eq are useful for the paleoseismological assessment of the characteristics and number of earthquakes released by the LRF and for detailed studies, at the outcrop scale, on fault scarps in extensional contexts in the world 18,[54][55][56][57][58][59][60]72,[91][92][93] . The topographic profiles and the VS data of the mapped Qfs (not activated by the 1983Eq) may be important because, after mapping and measuring them with this reproducible methodology, their effects and shapes will be comparable before and after surface faulting when a new earthquake ultimately occurs. This same process could then be applied in any area with similar characteristics. The traces of the Qfs also provide starting points and locations for further palaeoseismological studies. 
This entire effort can contribute to constraining the surface fault trace geometry in the areas where we acquired detailed imagery, helping to improve the reliability of the fault locations in the USGS Quaternary faults database 106 . Information about surface faulting is used for seismic hazard studies in similar tectonic contexts around the world. Comparison of our database with similar databases 25,27,28 could help define probabilistic estimates to refine scaling laws 72,76,[107][108][109] , and could integrate worldwide databases 29 , improving knowledge of global earthquake properties. Code availability The MATLAB code developed for this work is available from Zenodo 33 . In addition, we created a small sample dataset that can be used with the code, as well as a complete guide that illustrates the fundamental steps from the preparation of the input data to making the VS measurements. The guide and the example dataset are also hosted on Zenodo 33 .
8,603
2021-02-26T00:00:00.000
[ "Geology", "Environmental Science" ]
The Antiviral Efficacy of Withania somnifera (Ashwagandha) against Hepatitis C Virus Activity: In Vitro and in Silico Study Objective: Evaluation of the antiviral effects of Withania somnifera (Ashwagandha) leaf extract against HCV. Methods: Cell proliferation was assessed using the MTT assay after isolation of lymphocyte cells and treatment with Ashwagandha water extract (ASH-WX) (6.25 mg/ml to 100 mg/ml). Quantitative real-time PCR, colony forming assay, TNF-α assessment and molecular docking studies were performed after infection of normal lymphocyte cells with 1 ml of serum (1.5 × 10^6 HCV) and incubation with ASH-WX at concentrations of 25 mg/ml and 50 mg/ml. Results: The MTT assay revealed a significant increase (p < 0.001) in normal lymphocyte proliferation at all concentrations, particularly at 25 mg/ml (SI = 6.06) and at 50 mg/ml (SI = 5.8), while TNF-α significantly decreased following ASH-WX treatment compared with control untreated infected cells (p < 0.05). PCR results showed a marked viral load reduction after treatment with ASH-WX at a concentration of 25 mg/ml, down to 6.241 × 10^3 IU/mL. The colony formation assay revealed a reduction in colony formation compared to the positive untreated control. Molecular docking analysis revealed good predicted binding between Ashwagandha and NS5B and PKN2 compared to Sovaldi. Conclusion: ASH-WX may be a powerful antiviral against HCV infection. Introduction Hepatitis C is an infectious disease caused by the hepatitis C virus (HCV) that primarily affects the liver [1] [2]. The global prevalence of HCV-infected adults is estimated at 2.5% (177.5 million), ranging from 2.9% in Africa to 1.3% in the Americas, with 67% of cases viraemic positive globally (118.9 million), varying from 64.4% in Asia to 74.8% in Australia [3] [4]. Most of the cases are caused by HCV genotypes 1 (70%) and 4, and less frequently by genotypes 2 and 3 [5]. HCV is epidemic in Egypt, which has the highest prevalence in the world (15%) [6]. Genotype 4 is the most prevalent in Egypt, accounting for about 73%, followed by genotype 1 (26%), whereas 15.7% of HCV infections in Egypt were mixed genotypes [7] [8]. There is no protective vaccine available for HCV, but there are several recent drugs that can be used as a treatment for HCV, including pegylated interferon (PEG IFN), boceprevir, ribavirin, Sofosbuvir (Sovaldi) and telaprevir [9]. Every drug has its own mechanism against HCV; for example, pegylated IFN is used due to its increased stability in vivo, which activates cellular antiviral responses, although relapse appears in approximately 50% of responders upon withdrawal of treatment. Ribavirin has broad-spectrum activity against several RNA and DNA viruses; however, the treatment of chronic HCV using ribavirin alone had no significant effect on HCV RNA levels, so it has been used in combination with IFN-alfa [10] [11] [12]. Sofosbuvir (Sovaldi) can mimic the physiological nucleotide and competitively blocks the NS5B polymerase, which is one of the non-structural proteins essential for viral RNA replication, and inhibits HCV-RNA synthesis by RNA chain termination. Due to the high cost and severe side effects of current HCV treatments, such as fatigue, hematologic toxicity, ophthalmologic disorders, cardiac diseases, myocardial infarction and the probability of virus recurrence [12], scientists are in great need of new agents that are less expensive, less toxic and highly effective in combating HCV. 
Natural products have been used as traditional medicines in many parts of the world like Egypt, China, Greece, and India since ancient times [13]. Ayurvedic medicine eliminates many symptoms of different human diseases, including infectious diseases, and has been used for thousands of years [14]. One of the important natural products is Withania somnifera. Withania somnifera belongs to family Solanaceae and is commonly known as Ashwagandha or Indian ginseng and considered as a valuable medicinal herb in the Ayurvedic and indigenous medical systems [15]. Ashwagandha and its pharmaceutical derivatives; Withaferin A (WA) have vital role as antiviral agents against different types of viruses like; Infectious Bursal Disease Virus (IBDV) [16], HIV-1 [17], HPV [18], HSV [16] and the only one study investigated the effect of WA against HCV where, Sen et al showed that WA inhibits phosphorylation of PKC substrate peptide HCV [19] and suppresses HCV replication. Therefore, the present study was conducted to check antioxidant and antiviral activity of Ashwagandha against HCV replication. Proliferation Analysis by MTT Assay Lymphocyte proliferation assay was used as an indicator of cellular immune Assessment of Anti-Oxidants' Activities Using Colorimetric Analysis Antioxidants are synthesized or natural compounds that may avoid or delay some kinds of cell damage [26]. Total antioxidant, Glutathione S transferase, and Glutathione reductase were measured in normal lymphocytes after isolation and treated with ASH-WX at different concentrations (25 mg/ml & 50 mg/ml) and incubation for 48 h at 37˚C in a 5% CO 2 incubator using colorimetric assay kits (Biodiagnostic, Giza, Egypt) following the manufacturer's instructions. Lymphocyte Cell Infection with HCV Serum After isolation of Lymphocyte from normal cells as described above, cells were RNA Extraction Total RNA from cultured lymphocyte cells was extracted using QIAamp® RNA Blood Mini Kit according to the manufacturer's instructions (QIAGEN, Hilden, Germany). Quantitative Real Time RT-PCR The RNA copy number of HCV in the supernatant of infected normal lympho- Colony Forming Assay for HCV Replication Colony-forming assays could be used as a pre-clinical tool to assess HCV colony formation as a reflection of the antiviral drugs effectiveness as previously described [27]. Briefly, lymphocyte normal cells (1 × 10 6 cell/ml) were isolated and infected with 1 ml (1.5 × 10 6 HCV) serum as described above, then treated in 24 well plate with ASH-WX at concentrations (25 mg/ml and 50 mg/ml), the cells were centrifuged and washed with 1.0 M PBS, pH 7.4. Coomassie blue stain (Coomassie® Brilliant blue G 250, Sigma-Aldrich, St Louis, MO, US) was added as follows: fixing solution: 50% methanol and 10% glacial acetic acid, staining solution: 0.1% Coomassie® Brilliant blue G 250, 50% methanol and 10% glacial acetic acid, storage solution (5% glacial acetic acid) on the cells. The cells were then incubated at 37˚C in a 5% CO 2 incubator with fixing solution for 1 h to overnight with gentle agitation, then staining solution was added for 20 minutes with gentle agitation and finally destaining solution was added, the solution was replenished several time until background of the gel was being fully destained and the cells (1 × 10 6 cell/ ml) were divided in 24 well plates. 
Finally, the cells in each panel were examined under an inverted microscope (Zeiss Axio Vert.A1; Zeiss, Gottingen, Germany) at 40× magnification, morphological changes were observed, and cells were photographed using the digital camera of the inverted microscope (Color Digital Imaging-SPOT Idea 3MP). Assessment of Protein Concentration of TNF-α, Using ELISA Tumor necrosis factor (also known as TNF-α or cachectin) is one of the most vital cytokines regulating cell signaling. The TNF-α system is enhanced in patients with chronic HCV infection, with high levels of circulating TNF-α and a parallel increase in the level of the soluble TNF receptors [28]. TNF-α levels were measured using an ELISA kit (K0331131P; KomaBiotech, Seoul, South Korea) in normal lymphocyte cells after isolation, infection with 1 ml of HCV serum (1.5 × 10^6 IU HCV) as described above, and treatment with ASH-WX (25 mg/ml and 50 mg/ml), following the manufacturer's instructions. Molecular Modelling (Docking Study) The purpose of this study is to analyze the inhibitory action of Ashwagandha and Sovaldi against the selected HCV-related target proteins. Comparison of variables between the 2 study groups was done using the Mann-Whitney U-test for independent samples, while analysis of variance (ANOVA) with Bonferroni correction was done for comparisons of more than 2 groups. Two-tailed P-values < 0.001 were considered statistically significant. Lymphocyte Proliferation Using MTT Assay Proliferation of lymphocyte cells increased following treatment with graded concentrations (6.25 mg/ml to 100 mg/ml) of ASH-WX, particularly at 25 mg/ml (Table 2). The mean cell proliferation was 0.75 ± 0.01, with an SI of 6.06, and at a concentration of 50 mg/ml of ASH-WX the mean cell proliferation was 0.72 ± 0.05, with an SI of 5.8, while the untreated cell proliferation was 0.13 ± 0.04, with a stimulation index of 1. An increase in the number of cells after incubation with ASH-WX at graded concentrations (6.25 mg/ml to 100 mg/ml) for 24 h and 48 h compared to untreated cells was also observed, particularly at concentrations of 25 mg/ml and 50 mg/ml of ASH-WX (10 g/100 ml distilled water). Assessment of Anti-Oxidant Activities Normal lymphocyte cells treated with ASH-WX at different concentrations (25 mg/ml and 50 mg/ml) for 48 h revealed a significant increase in anti-oxidant activities compared with the untreated control cells (P < 0.001). The highest total antioxidant activity (331.8 ± 9.6) was observed when normal lymphocyte cells were treated with 25 mg/ml of ASH-WX, while the highest glutathione reductase activity (946.3 ± 26.1) was observed when normal lymphocyte cells were treated with 50 mg/ml of ASH-WX. Glutathione-S-transferase showed the highest activity (1534.6 ± 9.7) when treated with 25 mg/ml of ASH-WX (Table 2, Table 3). Real-Time PCR (Normal Infected Lymphocyte with HCV) Real-time PCR results revealed a reduction of the viral load from a very high viral titer of 1.5 × 10^6 IU/mL to 3.71 × 10^5 IU/mL; after treatment with ASH-WX at a concentration of 25 mg/ml, the viral load was reduced to 6.241 × 10^3 IU/mL, and at a concentration of 50 mg/ml, the viral load was reduced to 2.6878 × 10^4 IU/mL (Table 4). Colony Formation Assay Colony formation assay results revealed that ASH-WX enhanced the reduction of HCV colony formation compared to untreated infected lymphocyte cells (positive control), where ASH-WX at 25 mg/ml had a more significant effect on the reduction of colony formation than 50 mg/ml (Figure 1). 
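For clarity, the stimulation index (SI) values quoted above can be reproduced as the ratio of the mean absorbance of treated wells to that of the untreated control; the exact SI definition used by the authors is not stated, so this is a minimal sketch under that assumption, with illustrative numbers rather than the study's replicate-level data.

```python
import numpy as np

def stimulation_index(od_treated, od_untreated):
    """Stimulation index assumed as mean OD of treated wells divided by mean OD
    of untreated control wells, so the untreated control has SI = 1."""
    return float(np.mean(od_treated) / np.mean(od_untreated))

# Illustrative single-well values only; the reported SIs (6.06 and 5.8) would
# come from the full set of replicates.
print(stimulation_index([0.75], [0.13]))  # ~5.8 with these placeholder numbers
```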
Human Tumor Necrosis Factor Alpha (TNF-α) Activity in Infected Lymphocytes with Hepatitis C Virus ELISA analysis showed a significant decrease in the TNF-α concentration (P < 0.05) of lymphocyte cells infected with hepatitis C virus and treated with ASH-WX at concentrations of 25 mg/ml and 50 mg/ml for 48 h at 37˚C in a 5% CO2 incubator, compared with the control untreated cells (Figure 2). Molecular Docking Study of Antiviral Activity The binding modes of Sovaldi (1) and Ashwagandha (2) in the active sites of Human Protein Kinase N2 and NS5B were examined by molecular docking using the Glide software. Human Protein Kinase N2 can bind with Sovaldi (1) through two hydrogen bonds with Arg 917 and Leu 918 (Figure 3; Table 5). Ashwagandha (2) can form one hydrogen bond with Sep 755. The docking scores of Human protein kinase N2 (PKN2) with compounds Sovaldi (1) and Ashwagandha (2) were −2.002 and −3.474 kcal/mol, respectively. The binding mode of Sovaldi (1) with NS5B showed that it can form four hydrogen bonds with Asn 142, Glu 398, Trp 397 and Ser 39. It has a calculated docking score of −4.688 kcal/mol. The docking results also showed that Ashwagandha (2) has the highest docking score, of −5.599 kcal/mol, and the maximum predicted inhibitory activity with NS5B (Figure 4, Table 5). It forms hydrogen bonds with Arg 394 and Asn 411. Discussion The present study investigated the antiviral effects of Egyptian Ashwagandha leaves, a well-known herbal medicine rich in anti-oxidants, against hepatitis C virus. The phytochemical analysis of Egyptian Ashwagandha leaves suggests that it belongs to chemotype III, which differs from the Indian Ashwagandha regarding antioxidant activity [21] [30] [31]. To the best of our knowledge, this is the first investigation of this chemotype against HCV. Our results showed increased proliferation of normal lymphocyte cells after treatment with ASH-WX. These results agreed with previous studies which showed that Ashwagandha has a powerful anti-oxidant action, as it increased the levels of three natural anti-oxidants, superoxide dismutase, catalase, and glutathione peroxidase, in rat brains [34]. Moreover, they agreed with Andallu and Radhika, who reported that Ashwagandha is an important medicinal plant with good antioxidant potential in its root [35]. The measurements of the effect of ASH-WX on hepatitis C virus revealed a reduction of the viral load in infected normal lymphocyte cells, before treatment with ASH-WX, from a very high viral titer of 1.5 × 10^6 IU/mL to 3.71 × 10^5 IU/mL; the colony forming assay was then performed, and the results showed that ASH-WX enhances the reduction of colony formation compared to the positive control. Tumor necrosis factor-α (TNF-α) is a pro-inflammatory cytokine produced in response to infectious pathogens. Previous studies demonstrated that the blood level of TNF-α is increased in HCV patients and correlates with increased HCV pathogenesis and the severity of liver diseases [36] [37]. Our results on the effect of ASH-WX on TNF-α in normal lymphocyte cells infected with hepatitis C revealed a significant decrease in TNF-α activity in infected lymphocytes treated with ASH-WX compared to the control, with a significant p value (<0.05), in agreement with previous studies which demonstrated that ASH-WX has anti-inflammatory effects, specifically reducing gene expression of CCL2 and CCL5 in response to TNF-α stimulation [38]. 
Another study on the effect of ASH-WX on TNF-α showed a significant decrease in the TNF-α concentration (P < 0.05) of HepG2 cells treated with ASH-WX at the IC50 concentration (5.0 mg/ml) for 48 h compared with the control untreated cells [21]. The in-silico docking study of ASH-WX and Sovaldi, which is an example of current drugs in HCV treatment, with Human protein kinase N2 (PKN2, PRKCL2) and NS5B revealed that Ashwagandha has a better binding affinity and inhibitory activity against PKN2 and NS5B than Sovaldi. Our results on the effect of Ashwagandha on hepatitis C replication agreed with previous studies which reported that Withaferin A has an effective role in the suppression of HCV replication, where it inhibits phosphorylation of the PKC substrate peptide [19]. Conclusion In conclusion, Ashwagandha (Withania somnifera) water extract is a powerful anti-oxidant and has antiviral properties in HCV-infected lymphocyte cells. It might have potential as a promising anti-viral agent against HCV, and these results should be confirmed in animal studies.
3,204.6
2020-09-10T00:00:00.000
[ "Medicine", "Biology" ]
Pressure-Driven Responses in Cd2SiO4 and Hg2GeO4 Minerals: A Comparative Study: The structural, elastic, and electronic properties of orthorhombic Cd2SiO4 and Hg2GeO4 were examined under varying pressure conditions using first-principles calculations based on density functional theory employing the Projector Augmented Wave method. The obtained cell parameters at 0 GPa were found to align well with existing experimental data. We delved into the pressure dependence of normalized lattice parameters and elastic constants. In Cd2SiO4, all lattice constants decreased as pressure increased, whereas, in Hg2GeO4, parameters a and b decreased while parameter c increased under pressure. Employing the Hill average method, we calculated the elastic moduli and Poisson's ratio up to 10 GPa, noting an increase with pressure. Evaluation of ductility/brittleness under pressure indicated both compounds remained ductile throughout. We also estimated elastic anisotropy and Debye temperature under varying pressures. Cd2SiO4 and Hg2GeO4 were identified as indirect band gap insulators, with estimated band gaps of 3.34 eV and 2.09 eV, respectively. Interestingly, Cd2SiO4 exhibited a significant increase in band gap with increasing pressure, whereas the band gap of Hg2GeO4 decreased under pressure, revealing distinct structural and electronic responses despite their similar structures. Introduction Thenardite is a mineral consisting of anhydrous sodium sulfate and is commonly found in dry evaporite environments. Gmelin [1] identifies eight distinct anhydrous phases of sodium sulfate. The phase commonly referred to as thenardite, named after the mineral, is designated as Na2SO4(V). It is documented to exhibit stability within the temperature range of 32 °C to approximately 180 °C. The thenardite mineral crystallizes in an orthorhombic structure with the Fddd space group. The atomic arrangement in ternary compounds of the AB2O4 type is known to be influenced by the relative sizes of the A and B cations. Muller and Roy [2] demonstrated that the stability of the structure type relies on the comparative sizes of these cations. They found that, within silicate and germanate compounds, the olivine structure type can accommodate a wider spectrum of octahedral cation sizes in contrast to other structure types. Despite earlier expectations from cation radius assessments indicating that chromous orthosilicate would adopt the olivine structure, Cr2SiO4 was observed to crystallize in the thenardite-type structure instead [3][4][5]. Typically, the latter is associated with cations that are larger than those present in silicates and germanates exhibiting the olivine structure type. Mercury orthogermanate, Hg2GeO4 [6], and cadmium orthosilicate, Cd2SiO4 [7], represent the only other known compounds featuring this particular structure type. Mehrotra et al. 
[5] conducted a comprehensive review of compounds exhibiting isomorphism with thenardite in 1978.Apart from Na2SO4 and Na2SeO4, the structural configuration resembling thenardite is also observed in Ag2SO4, various mixed phosphates, and arsenates (such as AgHgPO4, NaCdAsO4, and AgCdAsO4).In such materials, nontetrahedral elements (Na, Ag, Cd, Hg) are situated close to the centers of distorted sixcornered polyhedra, maintaining significant separation.Cr2SiO4 exhibits the Fddd space group, similar cell dimensions, and comparable oxygen atomic coordinates.The primary distinction lies in the z coordinate of Cr in contrast to that of Na, Ag, Hg, or Cd atoms.In Na2SO4, Na atoms within highly distorted octahedra are spaced 3.60 Å apart.Contrastingly, in the comparable observation of Cr2SiO4, the movement of O atoms, notably the displacement of the Cr atom along the c axis, results in (1) flattening one side of the octahedron to form an equatorial plane (distorted), and (2) relocating the Cr atom from its position near the midpoint of the octahedron to the face of the equatorial plane.This motion simultaneously extends the distance to the oxygens in the tetrahedral edge while reducing the distance to the Cr atom in the neighboring equatorial plane. Cr2SiO4 and Cd2SiO4 possessing the orthorhombic thenardite structure have undergone examination under elevated pressures [15,16].The Cd2SiO4 structure underwent thorough refinement under different pressures, reaching up to 9.5 GPa.This analysis unveiled a bulk modulus of 119.5 ± 0.5 GPa, accompanied by a pressure derivative B0′ valued at 6.17 (4).On the contrary, the structure of Cr2SiO4, refined up to 9.2 GPa, demonstrated a bulk modulus of 94.7 ± 0.5 GPa, alongside a B0′ value of 8.32 (14).Miletech et al. [16] attributed the relatively lower bulk modulus and higher B0′ of the chromous structure to the compression of the unusually long Cr-O bond and the comparatively small size of the Cr 2+ ion relative to the coordination polyhedron's size.Meanwhile, there have been no reports on the structural properties of a similar compound, Hg2GeO4 of the thenardite type, under high pressure to date. In this study, our objective is to examine the distinct physical characteristics of Cd2SiO4 and Hg2GeO4 and investigate the effects of pressure variations on their behavior.The subsequent section of our research focuses on the computational methodology employed.Transitioning to the third segment, we present a thorough overview of our major discoveries and engage in discussions regarding structural properties, elasticity, mechanical responses, dynamic features, and electronic properties.We will analyze these aspects under both ambient and pressured conditions.Lastly, we will synthesize our findings and present concluding remarks in the final section. 
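The bulk moduli and pressure derivatives quoted above, as well as those computed later in this work, are obtained by fitting pressure-volume data to a third-order Birch-Murnaghan equation of state. Below is a minimal Python sketch of that fit; the P-V points are placeholders for illustration, not data from this study or the cited experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state: returns P(V) in GPa."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta**7 - eta**5) * (1.0 + 0.75 * (B0p - 4.0) * (eta**2 - 1.0))

# Placeholder (pressure, volume) points; in practice these come from structural
# relaxations (or experiments) at a series of target pressures.
P = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])              # GPa
V = np.array([702.0, 691.5, 682.0, 673.3, 665.2, 657.7])    # Å^3

popt, pcov = curve_fit(birch_murnaghan, V, P, p0=[V.max(), 100.0, 4.0])
V0, B0, B0p = popt
print(f"V0 = {V0:.2f} Å^3, B0 = {B0:.1f} GPa, B0' = {B0p:.2f}")
```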
Materials and Methods The molecular geometry optimization was carried out using the Projector Augmented Wave (PAW) method within the density functional theory (DFT) framework, as implemented in VASP [17]. Specifically, the Projector Augmented Wave formalism-based pseudopotentials were employed. The Perdew-Burke-Ernzerhof for solids (PBEsol) [18] functional, within the Generalized Gradient Approximation (GGA), was utilized for generating exchange-correlation functionals. Additionally, PBE [19] calculations were performed, revealing that PBEsol provides a better description of the crystal structure of Be2SiO4. Consequently, this study primarily focuses on the results obtained from PBEsol calculations. For all computations, a plane wave energy cutoff of 600 eV was set, with a selected energy convergence criterion of 10^-8 eV. Geometry optimization employed a dense k-mesh based on the Monkhorst-Pack technique [20]. Phonon dispersion calculations were conducted using density functional perturbation theory with VASP and Phonopy [21]. The phonon calculations utilized a 2 × 2 × 2 supercell (112 atoms), employing a 6 × 6 × 6 k-point mesh and an energy convergence criterion of 10^-8 eV. To address the underestimation of the band gap for semiconductors/insulators in GGA, the hybrid functional HSE06 was employed to compute electronic characteristics. The Hartree-Fock screening value was set at 0.2 Å [22]. For analyzing the band gap response to pressure variations, calculations were carried out using VASP with the HSE06 functional. The bulk modulus and its pressure dependency were determined by fitting the pressure-volume (P-V) data to a third-order Birch-Murnaghan equation of state (EOS), elucidating the unit cell volume's response to compression [23]. Structural Properties Both mineral compounds feature an orthorhombic crystalline structure with the space group Fddd, which is illustrated in Figure 1. In our case, calculations were conducted using the GGA-PBE [19] and GGA-PBEsol [18] functionals. Our computational analysis indicates that the GGA-PBEsol functional accurately describes the ground state of the investigated compound, as summarized in Table 1. The predicted cell parameters closely match experimental values [6,15]. Table 1 reveals that PBE tends to slightly overestimate the cell parameters. The obtained atomic positions are listed in Table 2. The thenardite-type minerals' structure, as described in Figure 1, features tetrahedrally coordinated Si (Ge) and octahedrally coordinated Cd (Hg) atoms in Cd2SiO4 (Hg2GeO4). In Cd2SiO4, the O atom is coordinated by three Cd atoms and one Si atom. Similarly, in Hg2GeO4, the O atom forms bonds in a tetrahedral coordination with three identical Hg atoms and one Ge atom. The CdO6 (HgO6) octahedra link together via shared edges, creating zigzag chains aligned with the crystallographic [110] and [−110] directions. These chains establish a three-dimensional network, further connected through shared edges among CdO6 (HgO6) octahedra. Additionally, the SiO4 (GeO4) tetrahedra play a role in this connectivity, sharing edges with CdO6 (HgO6) octahedra. Both octahedra and tetrahedra exhibit significant distortion due to the unique connectivity of the SiO4 (GeO4) groups. In Cd2SiO4, the tetrahedron undergoes uniaxial elongation along its two-fold axis parallel to the c axis due to the sharing of two edges of the silicate tetrahedron with edges of the CdO6 polyhedra. Conversely, in Hg2GeO4, the elongation of the tetrahedron along its two-fold axis, aligned with the b axis, is induced by the sharing 
of two edges between the germanate tetrahedron and the HgO6 polyhedra. Furthermore, we have computed the structural parameters as variables dependent on pressure, illustrated in Figure 2a,b.For Cd2SiO4, we observed a close correspondence between the volume (V/V0) and lattice constants (a/a0, b/b0, c/c0) and experimental data, as indicated in Figure 2a.The response of lattice parameters to pressure exhibits significant anisotropy in both compounds.As pressure increases, all lattice parameters decrease in Cd2SiO4, whereas, in Hg2GeO4, parameters a and b decrease with increasing pressure, while parameter c shows an increase under pressure.From Figure 2a,b, it is evident that Cd2SiO4 and Hg2GeO4 exhibit less compressibility along the b and the a axis, respectively.The compression of the unit cell of Cd2SiO4 is highly anisotropic, similar to Cr2SiO4, though the anisotropy is more pronounced in Cr2SiO4.In Cd2SiO4, the a axis is more compressible than the b axis, whereas, in Cr2SiO4, the b axis is more compressible than the a axis [16]. Additionally, the volume (V0), bulk modulus (B0), and its pressure derivative (B0′) at zero pressure were determined through least-squares analysis of pressure-volume data.To calculate the bulk modulus for both compounds, we utilized the third-order Birch-Murnaghan equation of state (EOS).The resulting values for Cd2SiO4 are 702.03Å 3 , 120.53 GPa, and 4.43 for V0, B0, and B0′, respectively, and, for Hg2GeO4, they are 812.95Å 3 , 54.95 GPa, and 7.50.In Figure 2c,d, the unit cell volume data plotted against pressure are shown, along with the pressure-volume curve determined using these fitted parameters.R. Miletich obtained B0 = 119.2(5)GPa and B0′ = 6.17(4) for Cd2SiO4, indicating agreement with our theoretical results [15].The compressibility of Cd2SiO4, with B0 = 120.53GPa, is lower than that of Hg2GeO4 and Cr2SiO4 (B0 = 94.7(4)GPa) [16].Other known cadmium oxides with compression data include CdO and CdWO4, for which B0 = 108 GPa [24] and 123 GPa [25] were reported, respectively.This is a consequence of the fact that, in the three compounds, compressibility is dominated by changes induced by pressure in the coordination polyhedral of Cd.Conversely, the compressibility of Cd2SiO4 is quite similar to that of olivine-type M2SiO4 compounds containing large M cations, such as Fe2SiO4 (B0 = 123.9GPa) [26] or CaMgSiO4 (B0 = 113 GPa) [27].The compression of both the CdO6 and SiO4 coordination polyhedra in Cd2SiO4 is affected by their respective polyhedral geometries.The CdO6 polyhedra exhibit an increase in angular and bond length distortion as pressure increases, as depicted in Figure 3a.The polyhedral volume of CdO6 shows a more significant decrease with pressure compared to the CrO6 in Cr2SiO4 [16].Figure 3c illustrates the Cd-O bond lengths as pressure varies, demonstrating anisotropic polyhedral compression.The longest bonds, Cd-O(5,6), share edges with the SiO4 tetrahedra, resulting in the shortest inter-cation distance (Cd-Si distance of 3.098 Å).This Cd-Si distance is shorter than the Cr-Si distance of 3.418 Å in Cr2SiO4.On the other hand, the Cd-Cd distance is significantly larger than the Cr-Cr distance [16] 3b and 3c, respectively, while angular distortion notably decreases.In Cr2SiO4, both the quadratic elongation and the bond angle variance of SiO4 increase with pressure, whereas, in Cd2SiO4, the polyhedral distortion of SiO4 decreases under pressure [16].The decrease in distortion signifies the diminishing uniaxial elongation caused by repulsion between 
Cd and Si atoms along shared edges. Likewise, O-Si-O angles tend toward the ideal tetrahedral angle with increasing pressure. The compression mechanism of the Cd2SiO4 structure is mainly governed by cation-cation repulsions across shared O-O polyhedral edges. Symmetrically distinct metal-metal distances exhibit minimal compression up to 10 GPa, indicating stiffness relative to the overall structure. These repulsions induce distortions in CdO6 octahedra and displace Cd atoms, leading to rapid compression of the Cd-Cd(3) distance between opposing octahedra, even surpassing compression along the parallel c axis. Likewise, we investigated the HgO6 and GeO4 polyhedra within the Hg2GeO4 compound. Both the HgO6 and GeO4 polyhedra demonstrate heightened angular distortion and bond length variation with increasing pressure, as shown in Figure 4a,b. The HgO6 polyhedra experience greater compression under pressure compared to CdO6 and CrO6 [16]. Unlike the SiO4 polyhedra in Cd2SiO4, the angular and bond length distortions in GeO4 increase under pressure, similar to the behavior observed in the SiO4 polyhedra of Cr2SiO4 [16]. In Figure 4d, the shapes square, circle, up triangle, and diamond correspond to the Hg-Ge, Hg-Hg(1), Hg-Hg(2), and Hg-Hg(3), respectively. Elastic and Mechanical Properties A material's elastic properties govern its reaction to stress, encompassing both deformation and the subsequent restoration to its original form when stress is relieved. These properties play a crucial role in revealing the bonding dynamics between neighboring atomic layers, the directional characteristics of binding, and the overall structural integrity. Elastic constants of solids serve as a bridge between their mechanical and dynamical behaviors, offering crucial insights into the forces operating within them. Furthermore, these constants serve as predictive tools for determining the structural stability of materials. In the case of orthorhombic symmetry, there are nine distinct elastic constants: C11, C22, C33, C44, C55, C66, C12, C13, and C23 [28]. These elastic constants adhere to the generalized lattice stability criteria [29] across various pressure ranges, signifying the mechanical robustness of both compounds up to 10 GPa. With increasing pressure, almost all elastic constants experience growth, reflecting strong interactions between atoms. Consequently, the compounds exhibit enhanced strength, as depicted in Figure 5a,b. 
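The generalized lattice stability criteria mentioned above are commonly evaluated, at zero external stress, as positive-definiteness of the 6 × 6 stiffness matrix. The sketch below shows such a check for an orthorhombic crystal; the numerical constants in the example are made up for illustration and are not the computed values of this work.

```python
import numpy as np

def orthorhombic_stiffness(C11, C22, C33, C44, C55, C66, C12, C13, C23):
    """Assemble the 6x6 Voigt stiffness matrix of an orthorhombic crystal (GPa)."""
    C = np.zeros((6, 6))
    C[0, 0], C[1, 1], C[2, 2] = C11, C22, C33
    C[3, 3], C[4, 4], C[5, 5] = C44, C55, C66
    C[0, 1] = C[1, 0] = C12
    C[0, 2] = C[2, 0] = C13
    C[1, 2] = C[2, 1] = C23
    return C

def is_mechanically_stable(C, tol=1e-8):
    """Born criterion at zero external stress: the stiffness matrix must be
    positive definite, i.e. all of its eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(C) > tol))

# Illustrative (made-up) constants in GPa.
C = orthorhombic_stiffness(200, 180, 160, 50, 45, 60, 80, 70, 65)
print(is_mechanically_stable(C))
```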
Employing the elastic constants acquired, we calculated the bulk modulus (B) and shear modulus (G) utilizing the Voigt-Reuss-Hill (VRH) approximation [30,31]. Additionally, Young's modulus (E) can be derived from the bulk and shear moduli. For Cd2SiO4, the obtained value of E is 113.37 GPa, while, for Hg2GeO4, it is 58.78 GPa. These values of E are significantly lower than those of other compounds such as BeAl2O4 (357.72 GPa) [32], Be2SiO4 (241.05 GPa) [33], Mg2SiO4 (256 GPa), and Fe2SiO4 (254 GPa) [34]. We have calculated the elastic moduli up to 10 GPa. As pressure increases, the values of B, G, and E also increase, as depicted in Figure 5c,d. The values of ν and the B/G ratio serve to characterize the brittle or ductile nature of a structure. If ν and B/G are both less than 0.26 and 1.75, respectively, the structure is considered brittle; otherwise, it is regarded as ductile [35,36]. Our findings suggest that both proposed structures exhibit ductile behavior. Poisson's ratio serves as an indicator of volume alteration during uniaxial deformation, with a value of ν = 0.5 indicating no volume change during elastic deformation. The low values observed for both compounds imply significant volume changes during their deformation. Additionally, ν provides insights into the bonding forces' characteristics more effectively than other elastic constants [37]. It has been established that ν = 0.25 represents the lower limit for central-force solids, while 0.5 indicates infinite elastic anisotropy [38]. The low ν values observed for both structures suggest central interatomic forces within the compounds. In addition, we have determined the ν and B/G ratio under various pressures for both compounds. Both the ν and B/G ratio exhibit an increase as pressure increases, as illustrated in Figure 5e,f. 
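The Voigt-Reuss-Hill averaging used above follows standard closed-form expressions; the sketch below computes B, G, E and ν from a 6 × 6 stiffness matrix in Voigt notation (for example one assembled with the helper in the previous snippet). It is an illustrative implementation of the textbook formulas, not the authors' code.

```python
import numpy as np

def hill_moduli(C):
    """Voigt-Reuss-Hill polycrystalline averages from a 6x6 stiffness matrix (GPa).

    Returns bulk modulus B, shear modulus G, Young's modulus E and Poisson's
    ratio nu."""
    S = np.linalg.inv(C)  # compliance matrix

    # Voigt (uniform-strain) bounds
    B_V = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    G_V = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
           + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0

    # Reuss (uniform-stress) bounds
    B_R = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2] + 2 * (S[0, 1] + S[0, 2] + S[1, 2]))
    G_R = 15.0 / (4 * (S[0, 0] + S[1, 1] + S[2, 2])
                  - 4 * (S[0, 1] + S[0, 2] + S[1, 2])
                  + 3 * (S[3, 3] + S[4, 4] + S[5, 5]))

    B = 0.5 * (B_V + B_R)   # Hill averages
    G = 0.5 * (G_V + G_R)
    E = 9 * B * G / (3 * B + G)
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))
    return B, G, E, nu
```

The B/G ratio used for the ductility criterion follows directly from the returned values.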
To assess the elastic anisotropy of the compounds under investigation, we acquired shear anisotropic factors, which measure the degree of anisotropy in atomic bonding across various planes. These factors play a critical role in evaluating the durability of materials. We calculated shear anisotropic factors for the {100} (A1), {010} (A2), and {001} (A3) crystallographic planes, as well as percentages of anisotropy in compression (AB) and shear (AG) [39,40]. In a crystal displaying isotropy, the values of A1, A2, and A3 should be one; any deviation from this indicates the presence of elastic anisotropy. A percentage anisotropy of 0% signifies perfect isotropy. For both compounds, the calculated shear anisotropy values under different external pressures are detailed in Table 3. For Cd2SiO4, anisotropies increase in the {010}, {001}, and {100} planes (A2 < A3 < A1) at zero pressure, with the {100} plane exhibiting the highest anisotropy. Conversely, Hg2GeO4 displays an anisotropy sequence of A3 < A2 < A1. Percentage anisotropies in compression and shear are approximately 5% and 8%, respectively, for the Cd compound, whereas, for the Hg compound, they are 32% and 22%, respectively. In addition, the universal anisotropy index (A^U) also provides information about anisotropy in crystals [41]. The departure of A^U from zero denotes the level of single-crystal anisotropy, incorporating both shear and bulk contributions, which sets it apart from other established metrics. Thus, A^U serves as a universal metric for quantifying single-crystal elastic anisotropy. Cd2SiO4 exhibits an anisotropy index of 0.96, while Hg2GeO4 displays a value of 3.78. Elastic wave velocities describe the rate at which waves travel through a substance when it undergoes elastic deformation. These waves consist of compression (longitudinal) waves (vl) and shear (transverse) waves (vt). For both compounds, the transverse (vt) and longitudinal (vl) mode velocities can be derived from the elastic constants [42]. The findings for both compounds are delineated in Table 4, indicating that vl increases with pressure, whereas vt initially rises before declining. Moreover, the Debye temperature (θD), a fundamental parameter, is linked to various solid-state properties like specific heat, elastic constants, and melting temperature. In this investigation, θD was determined from the mean elastic wave velocity (vm) [43] and is shown in Table 4. The vm and θD values obtained for both compounds are significantly lower than those for other compounds, such as BeAl2O4 (vm = 7.06 km/s, θD = 1035.85 K) [32], Be2SiO4 (vm = 6.3 km/s, θD = 906 K) [33], Mg2SiO4 (vm = 5.87 km/s, θD = 748 K), and Fe2SiO4 (vm = 5.39 km/s, θD = 727 K) [34]. For both compounds, the calculated value of θD was found to increase with pressure. 
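The elastic wave velocities and Debye temperature discussed above follow directly from the Hill moduli and the density; below is a minimal sketch of that standard route. The inputs (density, molar mass, atoms per formula unit) are assumptions to be supplied by the user, not values taken from Table 4.

```python
import numpy as np
from scipy.constants import hbar, k, N_A, pi

def debye_temperature(B, G, rho, M, n):
    """Sound velocities and Debye temperature from the Hill moduli.

    B, G : bulk and shear moduli in GPa
    rho  : density in kg/m^3
    M    : molar mass of the formula unit in kg/mol
    n    : number of atoms per formula unit

    Returns (v_l, v_t, v_m, theta_D) with velocities in m/s and theta_D in K.
    """
    B_pa, G_pa = B * 1e9, G * 1e9
    v_l = np.sqrt((B_pa + 4.0 * G_pa / 3.0) / rho)                 # longitudinal
    v_t = np.sqrt(G_pa / rho)                                       # transverse
    v_m = ((2.0 / v_t**3 + 1.0 / v_l**3) / 3.0) ** (-1.0 / 3.0)     # mean velocity
    theta_D = (hbar / k) * (6.0 * pi**2 * n * N_A * rho / M) ** (1.0 / 3.0) * v_m
    return v_l, v_t, v_m, theta_D
```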
Lattice Dynamics We conducted phonon dispersion analysis along the high-symmetry path (Γ-Y-T-Z-Γ-X) under zero pressure to assess the dynamical stability of the material.Figure 6a,b display the calculated phonon dispersions for both compounds.Our findings indicate that the acoustic branches exhibit positive frequencies, confirming the dynamical stability of the system under analysis.Notably, there is a significant interaction between the optical and acoustic modes.Additionally, we computed the total and partial phonon density of states (PhDOS) for both compounds, as depicted in Figure 6c,d for the Cd and Hg compounds, respectively.The PhDOS profiles of both compounds are primarily influenced by oxygen atoms.Furthermore, significant contributions of Cd and Hg atoms are evident in the lower-frequency region (0-200 cm −1 ), indicating that Cd and Hg atoms have lower frequencies with increasing atomic weight: Cd < Hg.Moreover, we have computed the vibrational modes at zero pressure.Both compounds exhibit 42 vibrational modes.The mechanical representation of 39 optical modes is ΓM = 4Ag + 6B1g + 5B2g + 6B3g + 4Au + 5B1u + 4B2u + 5B3u.The calculated optical modes contain the 21 Raman active (4Ag + 6B1g + 5B2g + 6B3g) and 18 IR active (4Au + 5B1u + 4B2u + 5B3u) modes.The obtained frequencies of these modes are given in Table 5.There are no previous studies on Raman or IR spectroscopy of Cd2SiO4 or Hg2GeO4, making it impossible to compare our results with existing experiments.We hope our studies will inspire future experiments in this area.The band gap energies and vibrational mode frequencies of Cd2SiO4 and Hg2GeO4 indicate that spectroscopy studies could be conducted using current advanced methods.Our calculations will be useful for mode assignment in upcoming Raman and/or IR experiments. Electronic Properties To thoroughly characterize the physical properties of the compounds under investigation, we employed the GGA-PBE functional to calculate their electronic band structures.The resulting band gap is 1.54 eV for Cd2SiO4 and 0.82 eV for Hg2GeO4, as shown in Figure 7a and 7b, respectively.Considering the known tendency of GGA to underestimate band gaps in insulators and semiconductors [44], we applied the HSE06 functional to improve the accuracy of the band gap determination.Figure 7c,d illustrate the revised band gaps, which are 3.34 eV for the Cd compound and 2.09 eV for the Hg compound.Both compounds exhibit indirect gap characteristics, with the valence band's highest point and the conduction band's lowest point located at different positions.The calculated band gap is smaller than that of other minerals such as Al2BeO4 (8.3 eV) [32], Mg2SiO4 (4.6 eV) [45], Be2SiO4 (7.8 eV) [33], Mg2GeO4 (3.9 eV) [46], MgAl2O4 (7.8 eV), and ZnAl2O4 (3.9 eV) [47].Furthermore, the analysis of the band structures indicates minimal dispersion in the valence band, while significant dispersion is observed in the conduction band.This implies that electrons will have a significantly smaller effective mass than holes. 
To gain a deeper insight into the electronic properties, we analyzed the total and partial density of states (DOS), as depicted in Figure 7e,f, using the HSE06 functional.From these figures, it is evident that the primary contributors to the highest peak of the valence bands are the O-3d states.Additionally, minor contributions from Cd-4d and Si-3s states are observed in the valence bands of the Cd compound, and Hg-5d and Ge-4s states in the Hg compound.The conduction bands of the Cd and Hg compounds are primarily dominated by Cd-5s and Hg-6s states, respectivelyThe Hg-6s orbitals are responsible for the smaller band gap of Hg2GeO4.The same phenomenon is observed when PbWO4 [48] is compared to other tungstates [49] due to the role of Pb-6s states.Moreover, we carried out high-pressure calculations to investigate the electronic properties under different pressure conditions.The calculated electronic band gaps under pressure for both structures are depicted in Figure 8.As pressure rises, there is a notable expansion in the band gap of Cd2SiO4 (see Figure 8a), whereas, in the case of the Hg compound depicted in Figure 8b, the band gap decreases with increasing pressure.The widening of the band gap in Cd2SiO4 results from the intensified crystal field splitting between bonding and antibonding states under compression [50].Conversely, the distinctive behavior of the band gap in Hg2GeO4 is attributed to the involvement of Hg-6s states in the lower portion of the conduction band, causing it to decrease under compression.Similar phenomena are observed with Pb-6s states in PbWO4 [48] and PbMoO4 [51]. Conclusions In summary, we examined the structural, elastic, and electronic properties of Cd2SiO4 and Hg2GeO4 across different pressures using first-principles calculations based on density functional theory.Our analysis revealed that the computed lattice parameters at ambient pressure closely matched experimental data.The investigated compounds were determined to be mechanically and dynamically stable.We explored the pressure-dependent behavior of structural parameters.The compression of lattice parameters displayed an anisotropic behavior in both compounds.Notably, the compression mechanism in these structures is mainly governed by cation-cation repulsions across shared O-O polyhedral edges.The calculated elastic and mechanical properties reveal that Cd2SiO4 exhibits a comparatively larger bulk modulus and shear modulus, indicating greater stiffness and resistance to deformation under stress.We have computed the frequencies of vibrational modes, which include 21 Raman active modes, 3 acoustic modes, and 18 IR active modes.Both minerals were identified as indirect band gap insulators.Notably, Cd2SiO4 exhibited a significant increase in band gap with increasing pressure, while the band gap of Hg2GeO4 decreased under pressure.These findings revealed distinct structural and electronic responses despite the similar structures of the two compounds. Figure 2 . Figure 2. The variations in lattice parameters and volume under pressure for (a) Cd2SiO4 and (b) Hg2GeO4.In Figure 2a,b, the shapes square, circle, up triangle, and down triangle correspond to the . 
The shorter Cd-O(1,2) and Cd-O(3,4) bonds engage in weaker cation-cation interactions across shared O-O edges.The atypical angular distortion of the CdO6 octahedron, arising from polyhedral connections and shared edges, elucidates the alterations in polyhedral geometry induced by pressure.Significantly, the axes of the O-Cd-O octahedron exhibit substantial deviations from the ideal 180°, with O(3)-Cd-O(4) measuring 141.5° and O(1)-Cd-O(6) at 113.39°, as illustrated in Supplementary Figure S1.The displacement between Cd and O atoms along the c axis influences the O(3)-Cd-O(4) angle, whereas displacement along the b direction affects bond angles like O(5)-Cd-O(6), O(1)-Cd-O(4), and O(3)-Cd-O(5).Furthermore, the displacement of the Cd atom concerning the surrounding O atoms elucidates the distinction in compression between Cd-O(1) and Cd-O(2) bonds under pressure, leading to variations in compression along the crystallographic a and b axes, as depicted in Figure 2a.In the SiO4 tetrahedron, polyhedral volumes and Si-O distances remain relatively unchanged with pressure, as depicted in Figure Figure 3 . Figure 3. Pressure dependence of (a,b) polyhedral volume along with the quadratic elongation (QE) and angular variance (AV) of CdO6 and SiO4.(c) Selected bond lengths and (d) inter-cation distances as a function of pressure of Cd2SiO4.In Figure 3c, the shapes square, circle, up triangle, and down triangle correspond to the Cd-O(1), Cd-O(2), Cd-O(3), and Si-O, respectively.In Figure 3d, the shapes square, circle, up triangle, and diamond correspond to the Cd-Si, Cd-Cd(1), Cd-Cd(2), and Cd-Cd(3), respectively. depicts the fluctuation in Hg-O bond lengths under varying pressures, highlighting anisotropic compression of the polyhedra.The longest bonds, Hg-O(3), share edges with the GeO4 tetrahedra, resulting in the shortest inter-cation distance (Hg-Ge distance of 3.403 Å).This Hg-Ge distance is larger than the Cd-Si but nearly equal to the Cr-Si distance [16].The shorter Hg-O(1) and Hg-O(2) bonds engage in weaker cation-cation interactions across shared O-O edges.The atypical angular deformation of the HgO6 octahedron, stemming from polyhedral connections and shared edges, elucidates pressureinduced alterations in polyhedral geometry.Notably, the O-Hg-O octahedron axes exhibit significant deviations from the ideal 180°, with O(3)-Hg-O(4) measuring 145.78° and O(1)-Hg-O(6) at 112.26°, as shown in Supplementary Figure S2.The displacement of Hg atoms relative to O atoms along the b axis impacts the O(3)-Hg-O(4) angle, whereas displacement along the c direction alters bond angles such as O(5)-Hg-O(6), O(1)-Hg-O(4), and O(3)-Hg-O(5).Moreover, the displacement of the Hg atom relative to surrounding O atoms accounts for the discrepancy between Hg-O(1) and Hg-O(2) bond compressions with pressure, contributing to variations in compression along crystallographic a and b axes, as depicted in Figure 2b.In the GeO4 tetrahedron, polyhedral volumes and Ge-O distances remain relatively stable under pressure, as shown in Figure Cd2SiO4, the calculated B and G values are 124.41GPa and 42.05 GPa, respectively.Conversely, for Hg2GeO4, the B and G values are 73.06GPa and 21.54 GPa, respectively.Notably, Cd2SiO4 exhibits a comparatively larger bulk modulus and shear modulus, indicating greater stiffness and resistance to deformation under stress.The determined B values at ambient pressure for both compounds align closely with the B0 value derived from the Birch-Murnaghan equation of state (EOS).The calculated B and G 
values of both compounds are smaller than those of other minerals such as BeAl2O4 (B = 213.30GPa, G = 146.55GPa) [32], Be2SiO4 (B = 181.21GPa, G = 94.29 GPa) [33], Mg2SiO4 (B = 183 GPa, G = 101 GPa), and Fe2SiO4 (B = 194 GPa, G = 99 GPa) Figure 6 . Figure 6.The phonon dispersion and phonon density of states for (a,c) Cd2SiO4 and (b,d) Hg2GeO4. Figure 7 . Figure 7.The electronic band structure of (a) Cd2SiO4 and (b) Hg2GeO4 using PBEsol-GGA.The electronic band structure and projected density of states using HSE functionals of (c,e) Cd2SiO4 and (d,f) Hg2GeO4. Figure 8 . Figure 8.The electronic band gap of (a) Cd2SiO4 and (b) Hg2GeO4 using HS06 under external pressures. : The pressure dependence of selected bond angles of Cd2SiO4; Figure S2: The pressure dependence of selected bond angles of Hg2GeO4.Author Contributions: J.S.-Formal Analysis, Investigation, Visualization, Conceptualization.D.E.-Validation, Visualization, Writing-Review and Editing.G.V.-Conceptualization, Investigation, Methodology, Writing-Review and Editing.V.K.-Validation, Resources, Writing-Original Draft, Supervision, Funding Acquisition.All authors have read and agreed to the published version of the manuscript. Table 1 . The unit cell parameters for Cd2SiO4 and Hg2GeO4, comprising outcomes from both PBE and PBEsol computations, as well as experimental findings. Table 2 . Calculated atomic positions of both compounds using PBEsol functional. Table 3 . The shear anisotropy factors as a function of pressure. Table 5 . The vibrational modes of Cd2SiO4 and Hg2GeO4.
6,374.2
2024-06-07T00:00:00.000
[ "Materials Science", "Physics" ]
How logical reasoning mediates the relation between lexical quality and reading comprehension The present study aimed to examine the role of logical reasoning in the relation between lexical quality and reading comprehension in 146 fourth grade Dutch children. We assessed their standardized reading comprehension measure, along with their decoding efficiency and vocabulary as measures of lexical quality, syllogistic reasoning as measure of (verbal) logical reasoning, and nonverbal reasoning as a control measure. Syllogistic reasoning was divided into a measure tapping basic, coherence inferencing skill using logical syllogisms, and a measure tapping elaborative inferencing skill using indeterminate syllogisms. Results showed that both types of syllogisms partly mediated the relation between lexical quality and reading comprehension, but also had a unique additional effect on reading comprehension. The indirect effect of lexical quality on reading comprehension via syllogisms was driven by vocabulary knowledge. It is concluded that measures of syllogistic reasoning account for higher-order thinking processes that are needed to make inferences in reading comprehension. The role of lexical quality appears to be pivotal in explaining the variation in reading comprehension both directly and indirectly via syllogistic reasoning. Introduction Reading comprehension encompasses both lower and higher level language skills (Cain, Oakhill, & Bryant, 2004). Lower level skills entail the ability to decode, as well as the quality of the mental lexicon (Perfetti & Hart, 2002). However, to gain a full understanding of a text and accomplish a situation model, the reader has to make inferences. The ability to do so can be seen as a higher level language skill (e.g. Oakhill & Cain, 2012). Inferencing requires the reader to make logical steps in deducing a conclusion based on premises stated in the text. This helps in filling in details that are not explicitly mentioned in the text. The ability to make inferences partly predicts reading comprehension; poor comprehenders have inadequate inferencing skills (Cain & Oakhill, 1999). A distinction can be made between two types of inferences: those that are made with little or no effort during reading, and those that are made after the text has been read and require more effort (Graesser, Singer, & Trabasso, 1994). Less-skilled comprehenders are especially poor in generating the more effortful, elaborative, type of inferences (Bowyer-Crane & Snowling, 2005). The Lexical Quality Hypothesis (Perfetti, 2007;Perfetti & Hart, 2002), mainly focuses on the quality of the mental lexicon in explaining differences between readers. On the one hand, inferencing may be seen as complementary to the quality of the mental lexicon, but there may also be an indirect relation of lexical quality to reading comprehension via inferencing. In the present study, we made an attempt to find out in what way the two different types of inferencing skills mediate the relation between lexical quality and reading comprehension, and to what extent they explain additional variance in reading comprehension. Lexical quality refers to the pivotal role of word meaning with regard to reading comprehension (Perfetti & Stafura, 2014). Readers need both a large lexicon, as well as high-quality representations of the words within their lexicon. 
The Lexical Quality Hypothesis (Perfetti, 2007;Perfetti & Hart, 2002) argues that the richness of word representation helps both lower and higher level processes in reading comprehension. In longitudinal studies, it has been found that with progression of grade the role of word decoding as predictor for reading comprehension becomes smaller because of increasing reading fluency, whereas the role of vocabulary becomes larger (Protopapas, Sideridis, Mouzaki, & Simos, 2007;Verhoeven & Van Leeuwe, 2008). Word decoding can be seen as a sine qua non for reading comprehension (Hoover & Gough, 1990). Especially the speed of decoding is relevant with regards to lexical quality, as a high speed of word decoding is an indication of a high quality of phonological and orthographic word knowledge. Perfetti and Stafura (2014) also pointed to the connection of decoding with word meaning, in the sense that in reading comprehension, word meaning has to be activated via the written form of a word. The reader with a good, high-quality, orthographic word knowledge and a broad vocabulary knowledge can read the text at a high speed, and can establish fast word-to-text integration. In order for word-to-text integration to take place, and thus to comprehend a text, readers have to make inferences, using both prior knowledge and propositional meaning within the sentences of the text at hand (Perfetti & Stafura, 2014). These higher-level processes might be complementary to lexical quality (see also Perfetti, 2007), and partially mediating the relation between lexical quality and reading comprehension (e.g., Cromley & Azevedo, 2007). In their Constructionist Theory, Graesser et al. (1994) described two types of inferences: (1) those that are generated during the course of comprehension to establish both local and global coherence (coherence inferences), and (2) those that are generated after the text has been read (elaborative inferences), for example during a retrieval attempt. This latter type of inferencing is not required for maintaining coherence, but may create a more vivid mental picture of the text. A reader may only make such inferences during reading a text when (s)he has a specific goal and when the inferences are highly predictable. It takes more effort to make these inferences, and exactly this effort is needed to construct meaning and to learn from text (Kintsch & Rawson, 2005). Elaborative inferences are thus often not generated during reading, since they are timeconsuming, and their use is highly subject to individual differences (Thurlow & Van den Broek, 1997). Bowyer-Crane and Snowling (2005) showed that questions in a reading comprehension task that require these elaborative types of inferences made the difference between skilled and less-skilled comprehenders (9-year-olds). In a study by Cain, Oakhill, Barnes, and Bryant (2001), 7-to 8-year old skilled comprehenders overall made more inferences but not relatively more elaborative inferences than less skilled comprehenders. And contrary to the expectations, the children made similar amounts of elaborative and coherence inferences. The authors presumed that this might be due to the fact that the children were not yet fluent readers. In research on thinking and reasoning, the case has been made that there are two separate cognitive systems underlying reasoning (Evans & Stanovich, 2013;Wason & Evans, 1975). The first system processes rapid, automatic, and unconscious, and is shared between humans and animals. 
The second system is believed to be specifically human. It is slow, conscious, and affords testing of hypotheses. The connection to different types of inferences during reading comprehension is a small next step. Graesser, Wiemer-Hastings, and Wiemer-Hastings (2001) related different types of inferences in reading comprehension to propositional logic and syllogistic reasoning which involves integration of information, drawing of inferences, and consideration of alternative states (Johnson-Laird & Bara, 1984). They suggested that logical reasoning might be a way of capturing the quite varying coherence preserving processes that occur in text comprehension. Inferences like modus ponens are made automatically, and do not take much effort. Indeed, in a masked priming study, Reverberi, Pischedda, Burigo, and Cherubini (2012) evidenced the automaticity of modus ponens. And although modus tollens does not seem to be automatic, this is also a logical type of inferences that is made with little effort, and high accuracy, even in primary school children (see e.g., Haars & Mason, 1986). Modus ponens and modus tollens thus may capture the ability to draw the more easy coherence inferences. Pseudo-logical inferences cause more problems for participants, but performance increases when the problems are made concrete (Wason, 1968;Evans, 2003). Negation of antecedent and affirmation of consequence are examples of such more difficult, indeterminate, types of syllogisms. Haars and Mason (1986) investigated whether primary school children could solve different types of syllogisms. When a ''maybe'' answer, instead of a simple ''yes'' or ''no'' was required, error rates were higher, but children in the upper primary school grades were able to solve the indeterminate problems. These type of syllogisms may capture the ability to make the more demanding elaborative inferences in reading comprehension, as readers have to come up with various situations before the ''maybe'' answer can be given. Examples of the different types of syllogisms are presented in Table 1 (based on Schröder, Bödeker, & Edelstein, 2000). The link between syllogistic reasoning and reading comprehension has been made before. For example, two previous studies investigated whether print exposure would positively affect syllogistic reasoning (as a measure of cognition), based on the idea that readers have to make deductions in order to understand a text. The more practice they have (i.e. the more they read), the better they become. Both studies also included reading comprehension measures, but syllogistic reasoning, and not reading comprehension was the dependent variable. Siddiqui, West, and Stanovich (1998) found very modest support for their hypothesis that print exposure fostered syllogistic reasoning, but their results did show positive correlations between syllogistic reasoning and reading comprehension in college students. Osana, Lacroix, Tucker, Idan, and Jabbour (2007) found that exposure to scientific texts that included more logical forms predicted syllogistic reasoning in undergraduate students, and again that logical reasoning was related to reading comprehension. In the present study, we examined the relation between lexical quality and reading comprehension, and the mediating role of syllogistic reasoning in this relation. Inferencing is a higher-language skill that may be complementary to lexical quality. 
However, lexical quality may partly encompass the easy, logical type of inferences, as high word knowledge facilitates easy word-to-text integration; theoretically, lexical quality should thus (partly) predict inferencing, and not the other way around. In the present study, we first expected that lexical quality (operationalized as decoding efficiency and vocabulary, cf. Verhoeven, Van Leeuwe, & Vermeer, 2011) would be related to both logical and indeterminate syllogistic reasoning, and that both lexical quality and syllogistic reasoning would be related to reading comprehension. Second, we expected both easy, logical and more difficult, indeterminate syllogisms to predict reading comprehension after controlling for nonverbal reasoning. However, when performing a mediation analysis with a model containing all four variables, and controlling for nonverbal reasoning, we expected direct effects of decoding and vocabulary on reading comprehension, as well as indirect effects via both types of syllogistic reasoning. Method Participants Participants were 146 children in 4th grade (74 boys, 72 girls). The children came from seven groups of five different schools in the central-east of the Netherlands. Their average age was 121 months (SD = 5.6 months). The parental educational level was at university (either university or university of applied sciences) level for almost one-third of the group (father 32.2 %, mother 27.4 %), at vocational level for about one-third of the group (father 34.2 %, mother 35.6 %), and below that for the remainder of the group (father 26.7 %, mother 30.8 %). The educational level of father and mother was unknown for 6.8 % and 6.2 %, respectively. Most children (90.4 %) were monolingual; the others spoke Dutch and Turkish (6.8 %) or Dutch and another language (2.7 %). Materials Nonverbal reasoning was assessed with Raven's Standard Progressive Matrices (Raven, Raven & Court, 1998). In this task, children are asked to solve five sets of 12 puzzles of which a part is missing; they have to choose the missing part from six alternatives. The puzzles increase in difficulty. Each correctly solved puzzle was scored with one point, and the total raw score was used in the present study. Reliability of this test is high, with a reported median of .90. Decoding speed was assessed with a paper-and-pencil lexical decision task in which children were asked to cross out pseudowords in a list of 120 bisyllabic words (Van Bon, 2007). The words were presented on one page with four columns, which contained 90 nouns and 30 pseudowords. Children were given 1 min to complete the test. The score is the number of words judged within a minute, minus the number of errors. Reliability is considered good, with alphas around .80. Vocabulary was assessed with a standardized test in which children were asked to find the synonym of an underlined word in a sentence out of four alternatives (Verhoeven & Vermeer, 1993). A (translated) example item is: ''The children are very enthusiastic. (A) intently, (B) funny, (C) absently, (D) exuberant''. The paper-and-pencil test consisted of 50 items. The total score comprises the number of correct answers. Reliability is .87 (Verhoeven & Vermeer, 1996). Syllogistic reasoning was measured with two subtests based on the syllogistic reasoning test of Schröder, Bödeker, and Edelstein (2000). Children read short stories consisting of three sentences. They were instructed as follows: ''Below are short stories. At the end of each story, you get a question. Answer each question with either ''yes'', ''no'' or ''maybe''.
Only encircle the one answer that you are sure is correct.'' The first subtest consisted of 10 items that tapped the more basic reasoning skills, and included easy, logical syllogisms as modus ponens and tollens. The answers to these syllogisms were ''yes'' or ''no''. Another set of 10 items tapped the more complex reasoning skills, and included indeterminate syllogisms that required a ''maybe'' answer. In all 20 items, answering possibilities were ''yes'', ''no''', and ''maybe''. The 20 (translated) items can be found in the ''Appendix''. An example of a yes/no and a maybe item are: All seals that get a tasty fish are happy. (A) Seal 1 gets a tasty fish. Is seal 1 happy? (answer: yes). (B) Seal 2 did not get a tasty fish. Is seal 2 happy? (answer: maybe). Both subtests turned out reliable, with Cronbach's alpha being respectively .78 and .72. The data of both subtests were furthermore normally distributed with skewness (-.69 and .40) and kurtosis (-.48 and -.71) within normal ranges. Reading comprehension was assessed by the schools with a standardized Item Response Theory based scale that covers text-based and situation-based comprehension in a range of measures, including summarizing, implicit and explicit questions, and inferring novel word meanings from text (Staphorsius & Krom, 1998). The test consists of three parts, each containing several texts and 25 multiplechoice questions. Each child makes two parts. After the first, general, part, children receive the second part. Based on their score on the first part, this is the more easy or difficult part. The total number of correct answers is transformed to a scale score. The scale was developed by the Dutch National Institute for Measurement in Education (Cito) and is part of a tracking system used by most schools in the Netherlands to monitor children's abilities during primary school. Reliability and validity are considered ''good'' according to the Commission on Testing Matters (COTAN) of the Netherlands Institute of Psychologists (NIP). Procedure The schools comprised a convenience sample of schools that worked with the authors on a larger project on science and technology. The first author contacted five schools, that all were willing to participate in exchange for results of the tests. Parents of the children were informed via a letter, and had the opportunity to indicate that they did not want their child to participate. None of them did, and as such, all approached parents gave passive consent for the participation of their child. Data was assessed in three sessions that took place in the classrooms, with the teacher of the children being present. Five undergraduate students Educational Science were trained, and were each assigned to one of the schools. The students received course credits for their help in data collection. The students explained the procedure of each test in the classroom. The tests were divided over three blocks that were assessed in various orders within the seven classes. Assessment took about 45 min per block. The present data is part of a larger dataset regarding science and technology. Not all children were present at the three occasions of testing, and at points they did not fill in all items in the tests. The data thus contained missing values that were kept as such in the analyses, explaining the slightly varying numbers of participants in the ''Results'' section. 
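The subtest scores and reliabilities reported above (two 10-item subtests with Cronbach's alpha of .78 and .72) follow directly from item-level right/wrong scoring. The sketch below shows one way such scores and alpha could be computed; the DataFrame and column names are hypothetical stand-ins for the study's item-level data, not the actual files used by the authors.

```python
# Minimal sketch: scoring the two 10-item syllogism subtests and checking their
# internal consistency with Cronbach's alpha. Column names (log_01 ... ind_10)
# and the synthetic DataFrame `items` are hypothetical placeholders.
import numpy as np
import pandas as pd

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """Cronbach's alpha for items scored 0/1 (or any numeric scale)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level data: 1 = correct, 0 = incorrect, one row per child.
rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(0, 2, size=(146, 20)),
    columns=[f"log_{i:02d}" for i in range(1, 11)] + [f"ind_{i:02d}" for i in range(1, 11)],
)

logical = items.filter(like="log_")          # modus ponens / tollens items (yes/no)
indeterminate = items.filter(like="ind_")    # indeterminate items ("maybe")

scores = pd.DataFrame({
    "logical_syllogisms": logical.sum(axis=1),             # 0-10
    "indeterminate_syllogisms": indeterminate.sum(axis=1), # 0-10
})

print("alpha logical:", round(cronbach_alpha(logical), 2))
print("alpha indeterminate:", round(cronbach_alpha(indeterminate), 2))
print(scores.describe().loc[["mean", "std"]])
```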
Descriptive statistics As a preliminary check, we tested whether children scored equally well on the logical syllogisms, the Syllogistic Yes (M = 3.41, SD = 1.53) and No (M = 3.33, SD = 1.47) items. This was indeed the case, t(136) = .75, p = .46, d = .05. Furthermore, the Pearson correlation between the two was high (r = .65, p \ .001). Following our hypotheses and the results of these analyses, the scores were therefore collapsed for further analyses. Table 2 presents the descriptive statistics. Whereas children on average scored 67.4 % correct of the Logical Syllogisms, this score dropped to 37.4 % for the Indeterminate Syllogisms. This was a statistically significant difference, t(135) = 8.30, p \ .01, d = 1.17. Regarding the Logical Syllogisms, 35 % of the children score 9 or 10 points, whereas only 3.6 % reached such a high score for the Indeterminate Syllogisms. However, 26.4 % of the children scored 6 or more points in this latter category. In Table 3, the correlations between the variables under study are presented. As expected, Nonverbal Reasoning, Decoding, Vocabulary, and both types of Syllogistic Reasoning were related to Reading Comprehension, and to each other, with only two exceptions. The Indeterminate Syllogisms had a correlation approaching zero with Decoding, whereas this was not the case for the Logical Syllogisms. On the other hand, the Indeterminate Syllogisms correlated significantly with Nonverbal Reasoning, while the Logical Syllogisms did not. Furthermore, especially the negative correlation between the two syllogistic reasoning tests is noticeable. Some children with a very high score on the Indeterminate Syllogism items scored a bit lower on the Logical Syllogism items. On the other hand, there were some children that hardly ever came up with a Maybe answer, while having a high score at the Yes/No items. The role of syllogistic reasoning in reading comprehension We first performed a hierarchical linear regression analysis to investigate the unique effects of Syllogistic Reasoning on Reading Comprehension, without taking lexical quality into account. In Step 1, Nonverbal Reasoning was entered as a control variable. In Step 2, the two syllogistic reasoning variables were entered: Logical Syllogisms and Indeterminate Syllogisms. The results are presented in Table 4. The Direct and indirect effects of lexical quality on reading comprehension To combine all variables under investigation for our final hypothesis, we used the Process add-on in SPSS (Hayes, 2013), and performed a mediation analysis. The two lexical quality measures, Decoding Speed and Vocabulary were the independent variables, the two syllogistic measures (Logical Syllogisms and Indeterminate Syllogisms) were the mediators, and Reading Comprehension was the dependent variable, with Nonverbal Reasoning as covariate. Because of the two independent variables, the model was run twice, each with one of the independent variables as covariable, to be able to estimate the effects. Bootstrapping was set at 5000 cycles, as recommended by Hayes (2013). In mediation models, the total effect c of an independent variable on the dependent variable, is the addition of the direct effect c' and the indirect effect ab. The indirect effect ab is the product of the effect of the independent variable on the mediator (a) and the effect of the mediator on the dependent variable (b). An indirect effect ab may be significant, even when a or b in itself are not (Hayes, 2013). 
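The Process add-on runs inside SPSS; the sketch below mirrors the same mediation logic in Python for a single mediator: OLS estimates of paths a and b, the direct effect c′ implicit in the second regression, and a percentile bootstrap of the indirect effect ab. Variable names and the synthetic data are hypothetical, and for brevity only one mediator is shown, whereas the study entered both syllogism measures and ran the model twice with each lexical-quality measure as covariable.

```python
# Minimal sketch of the mediation logic behind the PROCESS analysis: paths a and b
# via OLS, plus a percentile bootstrap of the indirect effect a*b. Hypothetical
# variable names; nonverbal reasoning enters as covariate throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df):
    a = smf.ols("logical_syll ~ vocabulary + nonverbal", data=df).fit().params["vocabulary"]
    b = smf.ols("reading_comp ~ logical_syll + vocabulary + nonverbal", data=df).fit().params["logical_syll"]
    return a * b

def bootstrap_ci(df, n_boot=5000, seed=1):
    rng = np.random.default_rng(seed)
    boots = [indirect_effect(df.sample(len(df), replace=True, random_state=rng))
             for _ in range(n_boot)]
    return np.percentile(boots, [2.5, 97.5])

# Synthetic stand-in data (146 children, standardized scores).
rng = np.random.default_rng(0)
n = 146
df = pd.DataFrame({"vocabulary": rng.normal(size=n), "nonverbal": rng.normal(size=n)})
df["logical_syll"] = 0.4 * df.vocabulary + rng.normal(size=n)
df["reading_comp"] = (0.5 * df.vocabulary + 0.3 * df.logical_syll
                      + 0.2 * df.nonverbal + rng.normal(size=n))

ab = indirect_effect(df)
lo, hi = bootstrap_ci(df, n_boot=1000)   # the study used 5,000 bootstrap cycles
print(f"indirect effect ab = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# The indirect effect is taken as significant when 0 lies outside the interval.
```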
Discussion and conclusion The main aim of the present study was to investigate in what way lexical quality, syllogistic reasoning, and reading comprehension were related, and whether lexical quality incorporates aspects of inferencing to predict reading comprehension. Fig. 1 Model for predicting reading comprehension from lexical quality through syllogistic reasoning, while controlling for nonverbal intelligence. Unstandardized coefficients are reported; total effects (c) are given between brackets, direct effects (c′) outside the brackets. Note: *p < .05; **p < .01; ***p < .001. Inferencing was operationalized in the present study via two types of syllogistic reasoning: easy, logical syllogisms (requiring a yes or a no answer) and more difficult, indeterminate syllogisms (requiring a maybe answer via elaborative inferencing). Both types of syllogisms had a unique additional effect on reading comprehension, on top of lexical quality, but their effects were also indirect, via lexical quality, and more specifically via the breadth of the lexicon. Lexical quality had strong direct effects on reading comprehension, also when syllogistic reasoning was taken into account. Our first expectation entailed the relations between aspects of lexical quality (decoding efficiency and vocabulary), nonverbal reasoning, syllogistic reasoning, and reading comprehension. As expected, all variables related to each other, but nonverbal reasoning was not associated with easy, logical syllogistic reasoning. This is consistent with the idea that less effort is needed for these types of syllogisms. Furthermore, decoding efficiency was not associated with indeterminate syllogistic reasoning, probably because decoding efficiency is a reading speed measure and not a reflection of a higher-order thinking process. More puzzling is the negative correlation between the two types of reasoning. Some children who were very good at the logical syllogisms scored very low on the indeterminate syllogisms. Perhaps these children adopted a strategy of hardly ever answering ''maybe'', whereas we could speculate that those children who did very well on the indeterminate but poorly on the logical syllogisms tended to overthink the problem, and so were prone to answer ''maybe''. Haars and Mason (1986) also showed the difficulty children may have in selecting the ''maybe'' answer, even though they have full understanding of both premises. The authors suggested that these children stop analyzing the problem prematurely, whenever a possible correct answer is encountered. The overall picture, however, is in line with the dual-processing distinction advocated by Evans (2003) and Evans and Stanovich (2013), suggesting that there are two systems underlying reasoning: one autonomous, automatic system, and one evolutionarily newer system that allows hypothetical thinking and abstract reasoning. The term ''automatic'' should be taken with caution, though, as although there is evidence that modus ponens is automatic, this is not the case for modus tollens (Reverberi et al., 2012). The second hypothesis entailed the associations of both components of reasoning with reading comprehension, after controlling for nonverbal reasoning. As expected, both were related to reading comprehension, in line with results from Siddiqui et al. (1998) and Osana et al. (2007).
When lexical quality aspects (decoding and vocabulary) were taken into account, these effects remained, and indirect effects from lexical quality via syllogistic reasoning on reading comprehension were established. Probably because there was no time constraint in the assessment, especially vocabulary drove the indirect effect, and not so much the decoding efficiency measure. In this respect, it should also be noted that word decoding efficiency in a more transparent language (Dutch in the present study) is less salient as predictor of reading comprehension in the upper grades of primary school (e.g., Verhoeven, et al., 2011). It appeared that lexical quality does not fully entail the easy, straightforward, inferencing processes, as we had expected. The ability to solve both types of syllogisms made a unique contribution in explaining variation in reading comprehension on top of lexical quality, partly confirming our second hypothesis. The top-down skill is thus not contained in the bottom-up skill, and even the more easy, logic syllogisms add to the prediction of reading comprehension. Monti and Osherson (2012) posed that deductive reasoning is not necessarily linguistic, but that this possibly only holds for the more easy syllogisms. They claimed that the role of language is most salient in the initial coding of verbal information. In fMRI research, Kuperberg, Lakshmanan, Caplan, and Holcomb (2006) showed that a large bilateral network is activated when processing connected sentences. The discussion is ongoing, and it is difficult to draw a conclusion in this respect based on our results. Following Monti and Osherson, one could, however, argue, that the ability to solve syllogisms is a form of nonverbal intelligence. As a case in point, Shikishima, et al. (2011) suggested a strong association between syllogistic reasoning ability and general intelligence (g). We controlled for this by taking a nonverbal measure of intelligence (nonverbal reasoning) as covariate in our design. There are some limitations that should be acknowledged at this point. First, we did not incorporate a measure of working memory, while this has an important relation to reasoning (Evans & Stanovich, 2013). In future research, adding this measure will help to more fully understand the associations between lexical quality, reasoning, and reading comprehension. Second, different measures of reading comprehension could be assessed for a more fine-grained of reading comprehension abilities. In the present study, we used a composite measure that could not be disentangled into its components. In future research, it would be interesting to find out whether the logic syllogisms are more associated with forming a text-based model, and the indeterminate syllogisms with forming a situation model. Finally, it should be acknowledged that the present study was a first attempt to connect syllogistic reasoning ability to reading comprehension, in order to get one step closer in understanding individual differences in reading comprehension. This study was the first to combine these two measures in developing readers in primary school. When further studying these relations, it is recommended to also incorporate other, more refined, measures of logical reasoning, and by taking up a longitudinal approach. This may provide more insight in the possible reciprocal relations between the different measures. 
Based on the results of the present study, it can be concluded that measures of syllogistic reasoning account for higher-order thinking processes that are needed to make inferences in reading comprehension. However, the role of lexical quality appears to be pivotal in explaining the variation in reading comprehension both directly and indirectly via syllogistic reasoning. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
6,021.4
2016-02-05T00:00:00.000
[ "Linguistics", "Psychology" ]
Self-organized topological insulator due to cavity-mediated correlated tunneling Topological materials have potential applications for quantum technologies. Non-interacting topological materials, such as e.g., topological insulators and superconductors, are classified by means of fundamental symmetry classes. It is instead only partially understood how interactions affect topological properties. Here, we discuss a model where topology emerges from the quantum interference between single-particle dynamics and global interactions. The system is composed by soft-core bosons that interact via global correlated hopping in a one-dimensional lattice. The onset of quantum interference leads to spontaneous breaking of the lattice translational symmetry, the corresponding phase resembles nontrivial states of the celebrated Su-Schriefer-Heeger model. Like the fermionic Peierls instability, the emerging quantum phase is a topological insulator and is found at half fillings. Originating from quantum interference, this topological phase is found in"exact"density-matrix renormalization group calculations and is entirely absent in the mean-field approach. We argue that these dynamics can be realized in existing experimental platforms, such as cavity quantum electrodynamics setups, where the topological features can be revealed in the light emitted by the resonator. Introduction Manifestation of topology in physics [1,2] created a revolution which is continuing for almost four decades. With the discovery of topological materials, condensed matter physics has gained a new terrain where quantum phases of matter are no longer controlled by local order parameters as in paradigmatic Landau theory of phase transitions but rather by the conservation of certain symmetries [3]. These new phases of matter, so called symmetry-protected topological (SPT) phases, display edge and surface states that can be robust against perturbations, making them genuine Titas Chanda<EMAIL_ADDRESS>candidates for quantum technologies [4]. To date, a detailed understanding of noninteracting topological materials, such as e.g., topological insulators and superconductors [5], has been obtained through a successful classification based on fundamental symmetry classes, the so-called "ten-fold way" [6][7][8]. On the other hand, interactions are almost unavoidable in many-body systems. It is therefore a central issue to understand whether SPT phases can survive the inter-particle interactions, or perhaps whether interactions themselves might stabilize SPT phases and even give rise to novel topological properties [9][10][11][12][13][14][15][16]. These questions are at the center of an active and growing area of research [17,18]. Recent works have argued that the range of interactions plays a crucial role on the existence of SPT phases. Specifically, in frustrated antiferromagnetic spin-1 chain with power-law decaying 1/r α interactions, topological phases are expected to survive at any positive value of α [19]. This behavior shares similarities with the topological properties of noninteracting Kitaev p-wave superconductors [20] that are robust against long-range tunneling and pairing couplings [21][22][23][24]. It was found that for infinite-range interactions the one-dimensional Kitaev chain can exhibit edge modes for appropriate boundary conditions [25]. On the other hand, in spin chains and Hubbard models, the infinite-range interactions suppress the onset of hidden order at the basis of the Haldane topological insulator [26,27]. 
In this work, we report a novel mechanism where nontrivial topology emerges from the quantum interference between infinite-range interactions and single-particle dynamics. We consider bosons in a one-dimensional (1D) lattice, where the global interactions have the form of correlated tunneling resulting from the coupling of the bosons with a harmonic oscillator. We identify the conditions under which this coupling spontaneously breaks the discrete lattice translational symmetry and leads to the emergence of non-trivial edge states at half filling. The resulting dynamics resembles the one described by the famous Su-Schrieffer-Heeger (SSH) model [28,29]. The topology we report shares analogies with recent studies of symmetry-breaking topological insulators [14-16, 30, 31]. Figure 1: Interference-induced topological phases can be realized in a cavity quantum electrodynamics setup. The bosons are atoms (red spheres) confined by a one-dimensional optical lattice and dispersively interacting with a standing-wave mode of the cavity. The cavity standing-wave field is parallel to the lattice, its wavelength is twice the lattice periodicity, and the atoms are trapped at the nodes of the cavity mode. Correlated hopping originates from coherent scattering of laser light into the cavity; the laser Rabi frequency Ω controls the strength of the interactions. The field at the cavity output is emitted with rate κ and provides information about the phase of the bosons. Nevertheless, differing from previous realizations, here the interference between quantum fluctuations and global interactions is essential for the onset of the topological phase and cannot be understood in terms of a mean-field model. We argue that these dynamics can be realized, for instance, in many-body cavity quantum electrodynamics (CQED) setups [32][33][34][35][36][37], like the one illustrated in Fig. 1, highlighting the experimental feasibility of our proposal. In the following, we first give a brief description of the model in relation to the experimental setup. Then, we ascertain the phase diagram of the system at half filling, characterize the topological phase, and point out how to experimentally verify its existence. Bose-Hubbard model with global correlated tunneling The model we consider describes the motion of bosons in a 1D lattice with N_s sites and open boundaries. The specific experimental situation realizing this dynamics is sketched in Fig. 1. The dynamics is governed by the Hamiltonian Ĥ in terms of the field operators b_j and b†_j, which annihilate and create, respectively, a boson at site j in the lowest lattice band. The bosons, tightly confined in the lowest band of the lattice, strongly couple with a cavity standing-wave mode (the harmonic oscillator), which is parallel to the lattice. When the atoms are transversely driven by a laser, the Hamiltonian takes the detailed form of a Bose-Hubbard (BH) model with the additional optomechanical coupling to the oscillator [38], Eq. (1), where we have set ħ = 1. Here, the first two terms on the right-hand side (RHS) of Eq. (1) are the nearest-neighbor hopping with amplitude t and the onsite repulsion with magnitude U. The third term on the RHS is the boson-cavity coupling, where operators â and â† annihilate and create, respectively, a cavity photon, while the operator B acts on the bosonic Hilbert space.
The last term of (1) gives the cavity oscillator energy in the reference frame rotating at the frequency of the transverse laser with ∆ c the detuning between cavity and laser frequency. Finally, the bosons-cavity coupling is scaled by the coefficients S and y that can be independently tuned. The first one, in fact, is proportional to the amplitude Ω of the transverse laser field, c.f. Fig. 1. The coefficient y, instead, depends on the localization of the scattering atoms at the lattice sites (see Appendix A). The specific form of operatorB depends on the spatial dependence of the cavity-bosons coupling [39][40][41][42]. In the present case, we consider The staggered coupling, (−1) i+1 b † ib i+1 + H.c , originates from the spatial mode function of the cavity mode along the lattice when the lattice sites are localized at the nodes of the cavity mode, and when the periodicity of cavity mode standing wave is twice the periodicity of the optical lattice. Hamiltonian (1) with (2) is reminiscent of the phonon-electron coupling of the SSH model [43]. Differing from the ionic lattice of the SSH model, the bosons couple to a single oscillator -the cavity mode. Here, the instability is associated with a finite stationary value of the field quadraturex =â +â † : for x = 0 the bonds connecting even-odd and odd-even sites differ by a quantity proportional to x . Sincex is a dynamical variable, this process is accompanied by a spontaneous breaking of the lattice translational symmetry. In a cavity, for instance,x is the electric field that is scattered by the atoms and depends on the atomic mobility. The resulting bosonic dynamics is thus intrinsically nonlinear. We analyze the quantum phases of the system in the limit in which the cavity field (oscillator) can be eliminated from the equations of the bosonic variables assuming that |∆ c | is the largest frequency scale of the dynamics. In this limit, the time-averaged field iŝ ε(τ ) = 1 ∆t τ +∆t τ dtâ(t) ≈ −SB(τ )/∆ c , where τ is the coarse-grained time in the grid of step ∆t [39,44]. The effective Bose-Hubbard Hamiltonian takes the form where U 1 = S 2 N s /∆ c and the explicit dependence on N s warrants the extensivity of the Hamiltonian in the thermodynamic limit when U 1 is fixed by scaling S ∝ 1/ √ N s . The term proportional toB 2 emerges from the back-action of the system through the global coupling with the oscillator. It describes a global correlation between pair tunnelings. Before discussing the emerging phases, we note that the model in Eq. (1) has extensively been employed for describing ultracold atoms in optical lattices and optomechanically coupled to a cavity mode [32,33,39,41,45]. The specific form of the coupling with operator (2) has been discussed in Ref. [41] and can be realized by suitably choosing the sign of the detuning between laser and atomic transition as in Ref. [34]. The scaling 1/N s of the nonlinear term corresponds to scaling the quantization volume with the lattice size [46]. In the time-scale separation ansatz, the bosons dynamics is coherent provided that |∆ c | is also larger than the cavity decay rate κ: in this regime shot-noise fluctuations are averaged out. Correspondingly, there is no measurement back-action, since the emitted fieldε(τ ) is the statistical average over the time grid ∆t [47]. 
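To make the operator content of Eqs. (1)-(3) explicit, the sketch below builds the effective Hamiltonian H = -t Σ_i (b†_i b_{i+1} + H.c.) + (U/2) Σ_i n_i(n_i - 1) + (U_1/N_s) B², with the staggered operator B of Eq. (2), for a very small chain and diagonalizes it exactly. This is only an illustration under stated simplifications: the y² factor is absorbed into U_1, the cutoff and system size are tiny, and (as the paper itself notes later) small-system exact diagonalization does not capture the BI phase, so the numbers are illustrative rather than physical predictions.

```python
# Toy exact diagonalization of the effective model, cf. Eqs. (2)-(3) of the text.
import itertools
import numpy as np

Ns, N, n_max = 6, 3, 2           # 6 sites at half filling, soft-core cutoff n_max = 2
t, U, U1 = 0.05, 1.0, -10.0       # illustrative values near the reported BI regime

# Occupation-number basis with fixed total particle number N
basis = [s for s in itertools.product(range(n_max + 1), repeat=Ns) if sum(s) == N]
index = {s: k for k, s in enumerate(basis)}
D = len(basis)

def hop(state, i, j):
    """Apply b†_i b_j to a Fock state; return (new_state, amplitude) or None."""
    if state[j] == 0 or state[i] == n_max:
        return None
    new = list(state)
    amp = np.sqrt(new[j]); new[j] -= 1
    amp *= np.sqrt(new[i] + 1); new[i] += 1
    return tuple(new), amp

def bond_sum(signs):
    """Matrix of sum_i signs[i] * (b†_i b_{i+1} + H.c.)."""
    M = np.zeros((D, D))
    for col, s in enumerate(basis):
        for i in range(Ns - 1):
            for a, b in ((i, i + 1), (i + 1, i)):
                res = hop(s, a, b)
                if res is not None:
                    M[index[res[0]], col] += signs[i] * res[1]
    return M

T = bond_sum([1.0] * (Ns - 1))                        # uniform nearest-neighbour hopping
B = bond_sum([(-1) ** i for i in range(Ns - 1)])      # staggered operator B of Eq. (2)
Uterm = np.diag([0.5 * U * sum(n * (n - 1) for n in s) for s in basis])

H = -t * T + Uterm + (U1 / Ns) * (B @ B)              # effective Hamiltonian, cf. Eq. (3)
energies = np.linalg.eigh(H)[0]
print("Hilbert-space dimension:", D)
print("lowest energies:", np.round(energies[:4], 4))
```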
We note that topological phases can also be realized in plethora of platforms where the range of interactions can be tuned and the geometry controlled, prominent examples are optomechanical arrays [48], photonic systems [49], trapped ions [50], and ultracold atoms in optical lattices [32,[51][52][53][54][55][56]. Specifically, this model could be also realized in the experimental setups as discussed in [57][58][59]. In this work, we shall keep on referring to a CQED setup, where the dynamics predicted by Hamiltonian (1) has been extensively studied and can be realized [32][33][34]. Phase diagram at half filling In the rest of this work, we consider the system at half filling, i.e., density ρ = 1/2. We determine the ground state and its properties using the density matrix renormalization group (DMRG) method [60,61] based on matrix product states (MPS) ansatz [62,63] -for details see Appendix B. The phase diagram is determined as a function of t/U and U 1 /U . The phases and the corresponding observables are summarized in Table 1, the observables are detailed below. We characterize off-diagonal long-range order by the Fourier transform of the single particle correlations, i.e., the single particle structure factor which can be experimentally revealed by means of time-of-flight measurements [64,65]. The coupling with the cavity induces off-diagonal long-range order, that is signaled by the bond-wave order parameter This quantity is essentially the cavity field in the coarse graining time scale and is directly measurable by heterodyne detection of the electric field emitted by the cavity [66]. We also consider the density-wave order parameter, O D = 1 Ns j (−1) jn j , which signals the onset of density-wave order and typically characterizes phases when the lattice sites are at the antinodes of the cavity field (see Appendix A). Moreover, we analyze the behavior of the string and parity order parameters: These order parameters depend nonlocally on onsite fluctuations δn j =n j − ρ from the mean density ρ. In our calculations of O S and O p , we take i = 10 and j = N s − 11 to neglect the boundary effects. We first remark that, for U 1 = 0, hence in the absence of global interactions, the phase is superfluid (SF). We also expect that for U 1 > 0 the quantum phase of Hamiltonian in Eq. (3) is SF, since the formation of a finite cavity field costs energy. Instead, we expect that correlated hopping becomes relevant for U 1 < 0. We have first performed a standard Gutzwiller mean-field analysis of the model assuming two-site translational invariance. At sufficient large values of |U 1 | mean-field predicts the formation of a SF phase of the even or odd bonds accompanied by a finite value of bond-wave order parameter O B . This phase maximizes the cavity field amplitude and we denote it by Bond Superfluid phase (BSF). We point out that mean-field predicts that the ground state exhibits off-diagonal long-range order for any value of U 1 . We now discuss the quantum phase obtained from DMRG calculations. The phase diagram is reported as a function of the ratio t/U , which scales the strength of the single-particle hopping in units of the onsite repulsion, and of the ratio U 1 /U , which scales the strength of the correlated hopping. We sweep U 1 from positive to negative values. We note that in a cavity the sign of U 1 is tuned by means of the sign of the detuning ∆ c . The effective strength, in particular, shall be here scaled by the parameter y 2 , depending on the particle localization. 
Here, y is constant across the diagram, since we in fact keep the optical lattice depth constant and tune the ratio t/U by changing the onsite repulsion U (experimentally, this is realized by means of a Feshbach resonance). Figure 3: mean-field (Gutzwiller) bond-wave order parameter O_B^MF, its deviation from the DMRG result, and the maximum of the mean-field single-particle structure factor max M_1^MF(k); the reference O_B has been determined using DMRG. In the DMRG phase diagram, at sufficiently large negative U_1 a phase boundary separates the SF from the BSF phase, where the effective tunneling amplitudes ⟨b†_i b_{i+1} + H.c.⟩ attain a staggered pattern characterized by a finite value of the bond-wave order parameter O_B. The long-range coherence of the BSF phase is manifested by narrow peaks of M_1(k) centered at k = ±π/2 (see Fig. 2). Remarkably, we observe a reentrant insulating phase separating the SF and the BSF. The insulator is signaled by vanishing off-diagonal long-range order and therefore by a vanishing structure factor M_1(k). It is characterized by non-zero (zero) values of the string order parameter and by a vanishing (non-vanishing) parity order parameter, depending on the boundary sites of these non-local parameters (see the next section for details). We denote this phase as a Bond Insulator (BI). This phase is separated from the SF by a continuous phase transition. The BI-BSF transition is also continuous. We note that the bond insulator phase is entirely absent in the standard Gutzwiller mean-field approach [67][68][69][70][71], where the bosonic operators are decomposed as b_j ≈ Φ_j + δb_j, with Φ_j = ⟨b_j⟩ being the superfluid order parameter in the Gutzwiller analysis. By performing such a mean-field analysis with a two-site unit cell, we can obtain the bond-wave order parameter O_B^MF at the mean-field level. Figure 3 displays the mean-field bond-wave order O_B^MF (top left panel) and its deviation from the DMRG result (top right panel). While the exact borders between the various phases quantitatively differ, the mean-field Gutzwiller approach agrees well with DMRG in the SF and BSF regimes, but misses the appearance of the BI phase. To be sure about this, we check the mean-field value of the maximum of the single-particle structure factor M_1(k), which at the mean-field level is evaluated with ⟨b†_i b_j⟩ replaced by Φ*_i Φ_j. The max M_1^MF(k) is presented in the bottom panel of Fig. 3. It possesses high values in the entire (t/U, U_1/U) plane, confirming that the mean-field analysis cannot capture the insulating BI phase, where M_1(k) must be vanishingly small. Moreover, the mean-field value of the string-order parameter O_S remains exactly zero in the entire phase diagram. It is to be noted that max M_1^MF(k) does not match the DMRG results presented in Fig. 2, as in reality the off-diagonal correlations ⟨b†_i b_j⟩ decay algebraically with the distance |i − j| in the superfluid phases, while at the mean-field level Φ*_i Φ_j does not. Indeed, the novel BI phase is entirely due to the interplay between the long-range coupling induced by the cavity and the single-particle tunneling. Due to this quantum interference, the insulating BI phase reveals itself in quasi-exact calculations like DMRG, while the Gutzwiller analysis can only capture the two superfluid phases. We further note that studies of the ground state of (3), based on exact diagonalization for small system sizes, did not report the existence of the BI phase [41]. In the following section, we characterize the topology associated with the BI phase and argue that it is stable in the thermodynamic limit.
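The diagnostics used above (the structure factor M_1(k), the bond-wave and density-wave orders O_B and O_D, and the nonlocal string and parity orders O_S and O_P) reduce to short numerical routines once a ground state is available. The sketch below is a minimal illustration, assuming the state is given as amplitudes over an occupation-number basis (for example from the toy exact-diagonalization sketch earlier, or from an MPS after evaluation) together with the correlation matrix C[i, j] = ⟨b†_i b_j⟩; the endpoint conventions of the string and parity operators vary in the literature, so the exact index ranges are an assumption.

```python
# Sketch of the observables used in the phase diagram; C is the single-particle
# correlation matrix <b†_i b_j>, `basis`/`psi` an occupation-number ground state.
import numpy as np

def structure_factor(C, k):
    """M_1(k): Fourier transform of <b†_i b_j> (off-diagonal long-range order)."""
    Ns = C.shape[0]
    phases = np.exp(1j * k * np.arange(Ns))
    return np.real(np.conj(phases) @ C @ phases) / Ns

def bond_wave_order(C):
    """O_B: staggered sum of nearest-neighbour bond amplitudes."""
    Ns = C.shape[0]
    bonds = 2.0 * np.real(np.diag(C, k=1))            # <b†_i b_{i+1}> + c.c.
    return sum((-1) ** i * b for i, b in enumerate(bonds)) / Ns

def density_wave_order(dens):
    """O_D: staggered sum of on-site densities <n_j>."""
    return sum((-1) ** j * n for j, n in enumerate(dens)) / len(dens)

def string_parity_orders(basis, psi, rho, i, j):
    """O_S and O_P between sites i and j (both diagonal in the occupation basis)."""
    O_S = O_P = 0.0
    for amp, occ in zip(psi, basis):
        dn = np.asarray(occ, dtype=float) - rho       # on-site fluctuations delta n
        phase = np.real(np.exp(1j * np.pi * dn[i:j + 1].sum()))
        w = abs(amp) ** 2
        O_S += w * dn[i] * phase * dn[j]
        O_P += w * phase
    return O_S, O_P
```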
Emergent topology associated with the BI phase By means of excited-state DMRG, we reveal that the BI phase has a triply degenerate ground state (quasi-degenerate for finite N_s) separated by a finite gap from the other excited states. The site distribution is visualized in Fig. 4, which shows that the absolute ground state has a uniform mean half-filling, while the other two states possess edge excitations, namely, fractional 1/2 particle-hole excitations with respect to the mean half-filling (bottom two rows of Fig. 4). Such edge excitations are characterized by a bond-wave order parameter with the opposite sign to that of the trivial phase. They suggest that the BI phase is a symmetry-protected topological (SPT) phase. Similar topological edge states have been reported, e.g., for a noninteracting system [56] or in the superlattice BH model [72], where the superlattice induces a tunneling structure resembling that of the SSH model [28,29]. In our case, instead, the effective tunnelings are spontaneously generated by the creation of a cavity field that breaks the discrete Z_2 translational symmetry of the system. However, bond-centered inversion symmetry remains intact, and it is this symmetry that protects the SPT phase. Figure 4: Left panels: effective tunneling amplitudes ⟨b†_i b_{i+1} + H.c.⟩ as a function of the bonds (i, i+1); orange (teal) bars denote the even (odd) bonds. Right panels: density ⟨n_i⟩ as a function of the lattice site; the dashed line is a guide for the eye. Observe the characteristic alternating weak/strong pattern in the bonds, with weak bonds occurring at the edges for the topological states, which reveal topological particle-hole edge excitations. These excitations are due to exactly fractional ±1/2 particle localizations on the edges. Here, we set U_1/U = −10, t/U = 0.05, and y = −0.0658 (see Appendix A), and obtain the states using the DMRG algorithm. On further inspection, it is found that the string order O_S and the parity order O_P can be non-zero (zero) depending on the location of the two separated sites that sit at the boundaries of the non-local operators (sites i and j in Eqs. (6)). We illustrate this in Fig. 5(a). We find that O_S ≠ 0 and O_P = 0, as reported in Fig. 2, when the non-local operators start at the second site of a strong bond (i.e., the bond with the larger tunneling element) and end at the first site of a weak bond (the bond with the smaller tunneling element) further away. That is why we have chosen i = 10 (even site) and j = N_s − 11 (odd site) to calculate these non-local operators for the trivial state in Fig. 2. These are unusual properties when compared to, say, the topological Haldane phases of extended BH models at unit filling [27,74]. To check whether we are indeed dealing with topological states, we calculate the entanglement spectrum of the system. For this purpose we partition the chain into a right (R) and left (L) subsystem as |ψ_GS⟩ = Σ_n λ_n |ψ^L_n⟩ ⊗ |ψ^R_n⟩, where λ_n are the corresponding Schmidt coefficients for the specific bipartition. The entanglement spectrum is then defined as the set of all the Schmidt coefficients on a logarithmic scale, ε_n = −2 log λ_n, and is degenerate for phases with topological properties in one dimension [75]. We find that the ε_n are degenerate near the chain center when the bipartition is drawn across a strong bond, while they are non-degenerate at the weak bonds. In Fig. 5(b), we display the entanglement spectrum for the trivial and topological ground states of the BI phase for N_s = 60, with ε_n measured across the bipartition at the chain's center.
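Both topological diagnostics reduce to simple linear algebra once ground states are at hand. The sketch below computes the entanglement spectrum ε_n = −2 log λ_n from a Schmidt decomposition, and the discretized many-body Berry phase γ^(K) (defined in the following paragraphs) from overlaps of ground states at K twist angles. It is only a sketch: psi is assumed to be a full state vector in the product basis of local dimension d, which is feasible only for tiny chains, whereas DMRG returns the Schmidt values directly.

```python
import numpy as np

def entanglement_spectrum(psi, Ns, d, cut):
    """epsilon_n = -2 log(lambda_n) for the bipartition after site `cut` (1-based)."""
    mat = np.asarray(psi).reshape(d ** cut, d ** (Ns - cut))
    lam = np.linalg.svd(mat, compute_uv=False)      # Schmidt coefficients
    lam = lam[lam > 1e-12]
    return -2.0 * np.log(lam)

def many_body_berry_phase(states_on_loop):
    """gamma^(K) = -Im sum_n log <psi_{theta_n} | psi_{theta_{n+1}}>, loop closed."""
    K = len(states_on_loop)
    total = 0.0
    for n in range(K):
        ov = np.vdot(states_on_loop[n], states_on_loop[(n + 1) % K])
        total += np.angle(ov)
    return (-total) % (2 * np.pi)

# `states_on_loop` would hold the K (= 10 in the paper) ground states of the
# Hamiltonian with the local twist t -> t * exp(i * theta_n) on a chosen bond;
# values near pi are expected on strong bonds and near 0 on weak bonds.
```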
The entanglement spectrum, together with the density pattern and the behavior of string and parity order parameters, provide convincing proof of the topological character of the BI phase. Furthermore, to show that the BI phase is stable in the thermodynamic limit, we consider the entanglement gap, ∆ε = ε 1 − ε 0 , for different system sizes. Fig. 5(c) presents the variation of ∆ε across BSF-BI-SF phases for fixed t/U = 0.05 -confirming the stability of SPT BI phase in the thermodynamic limit. We perform similar analysis with different values of t/U too. Such a finite-size analysis with the entanglement gap also confirms that the phase boundaries predicted for N s = 60 in Fig. 2 remain stable with increasing system-size. In order to reveal the bulk-edge correspondence, we determine the many-body Berry phase [76] and show that it is Z 2 -quantized in the BI phase. We first note that, because of the strong interactions, the winding number or the Zak phase [56,77,78] is not a good topological indicator in our case. Therefore, we follow the original proposition of Hatsugai [79] and determine the local many-body Berry phase, which is a topological invariant playing the role of the local "order parameter" for an interacting case [79]. For this purpose, we introduce a local twist t → te iθn in the Hamiltonian (3), such that the system still remains gapped in the BI phase. Then the many-body Berry phase is defined as where ψ θn 's are the ground states with θ 0 , θ 1 , ..., θ K = θ 0 on a loop in [0, 2π]. Here, we consider the local Berry phase corresponding to a bond by giving the local twist in tunneling strength t → te iθn only on that particular bond, and take K = 10. Since the ground state manifold is (quasi-)degenerate, we need to put small local terms to distinguish between the different states in the manifold in order to compute local many-body Berry phases. Specifically, we add ∓0.02 b † 1b 2 +b † Ns−1b Ns + H.c. respectively for the trivial and topological states to the twisted Hamiltonians. However, in case of topological states, two edge states (bottom two rows of Fig. 4) are exactly degenerate and cannot be distinguished by the local term mentioned above. In that case, to calculate the many-body Berry phase we put one extra particle, i.e., N s /2 + 1 bosons in total, so that we have only one edge state where both edges are localized with an extra +1/2 particle. Similarly, we could have reduced one particle (N s /2−1 bosons) where the unique edge state would have an extra −1/2 particle localization on both the edges. The local Berry phases γ (K) 's are displayed in Fig. 5(d) for the system size N s = 60. Similar to the entanglement spectrum, we find γ (K) = π for the strong bonds, while γ (K) = 0 on the weak bonds. Discussion The BI phase of this model is a reentrant phase. It separates the SF phase, where correlated hopping is suppressed by quantum fluctuations, from the BSF phase, where correlated tunneling is dominant and single-particle tunneling establishes correlations between the bonds. We have provided numerical evidence that the emerging topology is essentially characterized by the interplay between quantum fluctuations and correlated tunneling. Interactions are here, therefore, essential for the onset of topology. Their global nature is at the basis of the sponta-neous symmetry breaking that accompanies the onset of this phase and which induces an asymmetry between bonds. 
In this respect, it is reminiscent of the Peierls instability of fermions in resonators [31], where the topology is associated with the spontaneous breaking of Z 2 symmetry. Differing from that case, where photon scattering gives rise to a self-organized superlattice trapping the atoms, in our model photon scattering interferes with quantum fluctuations. Like in [31], gap and edge states can be measured in the emitted light using pump-probe spectroscopy. The single-particle structure factor may be directly accessible by the time-of-flight momenta distributions [64,65] enabling the detection of insulator-superfluid phase transition. The two combined measurements of the cavity output and of the structure factor shall provide a clear distinction between the BI, SF and BSF phases. We observe that the global long-range interaction of this model inhibits the formation of solitons. For other choice of periodicity, and thus of the form of op-eratorB, one could expect glassiness associated with the formation of defects [39], whose nature is expected to be intrinsically different from the one characterizing short-range interacting structures. To conclude, we have presented a new paradigm of topological states formation via interference between single particle dynamics and interaction induced hopping. A Coefficients of the extended Bose-Hubbard model To fix the notation, let us consider N atoms of mass m confined within an optical cavity in a quasi-onedimensional configuration (almost) collinear with a one-dimensional optical lattice created by light with wave number k L = 2π/λ, which may be different from k -the wave number of the cavity field. The optical lattice is created by the trapping potential, V 0 making the atomic motion effectively onedimensional. In our calculations we take V = 50E R , where E R = 2 π 2 /2ma 2 is the recoil energy and a denotes the periodicity of the optical lattice. The Wannier basis in the lowest band is W i (x, y, z) = w i (x)Φ 0 (y, z) with w i (x) being the standard onedimensional Wannier function centered at site i and Φ 0 (y, z) -a two-dimensional Wannier function for the transverse deep lattice. For our purposes, Φ 0 (y, z) is essentially equivalent to a Gaussian with width σ = a 2 π 2 E R /V . After adiabatically eliminating the cavity field one gets an effective Hamiltonian describing atomic motion in terms of atomic creation/annihilation operatorsb † i /b i for atoms at site i is decomposed into the sumĤ =Ĥ BH +Ĥ Cav [39,47], where the motion in the lattice and atom-atom interactions are described by the standard Bose-Hubbard Hamiltonian (we assume contact interactions and neglect densitydependent tunneling terms known to be small for such interactions [80]) are given by the integrals The rescattering due to the cavity mode leads to a long range "all-to-all" interaction terms expressible aŝ (13) We rewrite the integrals above as Here, β = k/k L is the ratio between k, the wavenumber of the cavity mode, and k L , the wavenumber corresponding to the optical lattice, and φ is a phase in the mode function. In our work we consider β = 1, and note that arbitrary values of β would lead to a quasiperiodic Hamiltonian [81,82]. For β = 1 and for i = j the magnitude of the integral becomes independent of i, that can be written as Please note here that we have defined y ij (Eq. (14)) coming from Eq. (13) with a minus sign to fix y 11 = z to be positive, as in our convention the lattice index starts from i = 1. 
For non-diagonal y ij , we observe that due to localization of Wannier functions i = j ± 1 terms may be only significant ones. For our choice of β = 1, the magnitude of the integral again becomes independent of i, and can be written as ThenĤ Cav may be put in the form, Typically, one assumes no phase difference (φ = 0) between the optical cavity and the external optical lattice, i.e., the lattice sites are located at the antinodes of the cavity mode. Then |z| = 0 and |y| ≈ 0, and the terms proportional to y are dropped off leading to the standard case considered in the past [83]. The importance of additional terms was noticed already in [41] where an identical Hamiltonian is obtained for a slightly different arrangement of the cavity and external optical lattice. Here, we have focused on a immediate vicinity of φ = π/2. This corresponds to the atoms being trapped at the nodes of the cavity mode, where z vanishes (see left panel of Fig. 6). However, as φ starts to deviate from π/2 the y term rapidly decreases and z term becomes significant (see right panel of Fig. 6). Note that the quadratic form ofĤ Cav is responsible for the long-range character of the couplings. SquaringD leads then to all-to-all density-density interactions, responsible for a spontaneous formation of density wave phase for sufficient U 1 [84]. For the case considered by us, z ≈ 0 and B 2 term leads to the all-to-all long-range correlated tunnelings alternating in sign. Throughout the paper, we have fixed V = 4E R resulting in y = −0.0658 and z = 0 for φ = π/2. In Fig. 7, we plot the order parameters and the phase diagram in the vicinity of φ = π/2 for t/U = 0.05. As φ starts to deviate from π/2, y starts to diminish and z becomes increasingly larger (see right panel of Fig. 6). As a result, BI phase is replaced by a more standard density wave (DW) phase [84], when φ becomes sufficiently different from π/2. In the DW phase O DW as well as O S and O P are both non-zero, while the structure factor vanishes. B Numerical implementation We use standard matrix product states (MPS) [62,63] based density matrix renormalization group (DMRG) [60,61] method to find the ground state and low-lying excited states of the system with open boundary condition, where we employ the global U (1) symmetry corresponding to the conservation of the total number of particles. For that purpose, we use ITensor C++ library (https://itensor.org) where the MPO for the all connected long-range Hamiltonian can be constructed exactly [85,86] using AutoMPO class. The maximum number bosons (n 0 ) per site has been truncated to 6, which is justified as we only consider average density to be ρ = 1/2. We consider random entangled states, |ψ ini = are random product states with density ρ = 1/2, as our initial states for DMRG algorithm. The maximum bond dimension of MPS during standard two-site DMRG sweeps has been restricted to χ max = 200. We verify the convergence of the DMRG algorithm by checking the deviations in energy in successive DMRG sweeps. When the energy deviation falls below 10 −12 , we conclude that the resulting MPS is the ground state of the system. To obtain low-lying excited states, we first shift the Hamiltonian by a suitable weight factor multiplied with the projector on the previously found state. To be precise, for finding the n th excited state |ψ n , we search for the ground state of the shifted Hamiltonian, where W should be guessed to be sufficiently larger than E n − E 0 .
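The excited-state search described in Appendix B, shifting the Hamiltonian by a weighted projector onto the states already found, is easy to illustrate outside DMRG. The sketch below applies the same trick with dense diagonalization on a random Hermitian matrix standing in for the many-body Hamiltonian; in the actual calculation the projector is built from MPS overlaps and the minimization is the DMRG sweep (χ_max = 200, energy convergence 10^-12), so this is only a toy demonstration of the idea.

```python
# Toy demonstration of the projector-shift trick: the n-th excited state is the
# ground state of H + W * sum_{m<n} |psi_m><psi_m|, for W larger than E_n - E_0.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2                      # stand-in for the many-body Hamiltonian
W = 50.0                               # weight; must exceed the targeted gap E_n - E_0

def lowest(M):
    vals, vecs = np.linalg.eigh(M)
    return vals[0], vecs[:, 0]

energies, H_shifted = [], H.copy()
for n in range(3):                     # ground state plus two excited states
    E, psi = lowest(H_shifted)
    energies.append(E)
    H_shifted = H_shifted + W * np.outer(psi, psi)   # penalize the state just found

print("projector trick :", np.round(energies, 6))
print("exact lowest 3  :", np.round(np.linalg.eigvalsh(H)[:3], 6))
```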
7,281.4
2020-11-03T00:00:00.000
[ "Physics" ]