text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Shell-shaped condensates with gravitational sag: contact and dipolar interactions
We investigate Bose-Einstein condensates in bubble trap potentials in the presence of a small gravity. In particular, we focus on thin shells and study both contact and dipolar interacting condensates. We first analyze the effects of the anisotropic nature of the dipolar interactions, which already appear in the absence of gravity and are enhanced when the polarization axis of the dipoles and the gravity are slightly misaligned. Then, in the small gravity context, we investigate the dynamics of small oscillations of these thin, shell-shaped condensates triggered either by an instantaneous tilting of the gravity direction or by a sudden change of the gravity strength. This system could be a preliminary stage for realizing a gravity sensor in space laboratories.
Introduction
The recent progress in microgravity experiments with Bose-Einstein condensates (BECs) [1][2][3][4] and the development of new exotic confining potentials [5][6][7][8][9][10] have fostered a novel field of research on shell-shaped BECs. These hollow condensates were first realized in 2004 [11] and are at present under investigation in the NASA Cold Atom Laboratory (CAL) on the International Space Station [12][13][14]. Due to Earth's gravity, the atoms sag to the bottom of the trap destroying shell-shaped BECs. Thus, microgravity conditions-in which gravity is small enough to be neglected-ensure that these condensates are realizable in experiments. The observation of a shell-shaped BEC at CAL [12] has spiked the interest in such condensates under microgravity conditions [15][16][17]. In this paper, however, we are interested in the effect of a small gravity in shell-shaped BECs. Hence, to ensure that shells are realizable, we consider gravities larger than microgravity to study its effect but still some orders of magnitude smaller than the terrestrial gravity. This range of gravities is valid for the parameters we used in the numerical calculations from sections 3 and 4. However, one can extend it easily to other values-see section 5-by considering a different set of parameters, provided that the system is within the mean-field regime and in the thin-shell limit.
BECs in shell-shaped potentials open the possibility to investigate condensation and superfluidity phenomena in new topologies: collective modes [18][19][20], self-interference effects [17], thermodynamics of shells and curved manifolds [21,22], quantized vortices [16,23,24], topological transitions in curved systems [25], the dimensional reduction to a ring-shaped condensate [26], and the transition from filled to hollow condensates [27,28], among others. Theoretical work has focused mainly on shell-shaped BECs with contact-interacting atoms rather than with atoms that possess a non-negligible dipolar moment. The latter situation has been examined in the limit of a thin shell [29] and under rotation [30].
While contact interactions are short-range and isotropic, the interaction between particles with a dipolar moment presents a long-range and anisotropic character [31]. This intrinsic feature of dipolar BECs makes these systems especially sensitive to the shape of the trapping potential. Besides, the existence of a privileged direction defined by the dipole polarization might endow dipolar BECs with an interesting sensitivity to small changes in orientation, such as perturbations of gravity.
In this work, we investigate the effects of the dipolar interaction in thin shell-shaped condensates. Moreover, since experiments in microgravity conditions or at CAL facilities might suffer gravity perturbations, we study the dynamics of small oscillations in small-gravity conditions-for both contact and dipolar interacting condensates-, which could yield to identify small changes in the direction or magnitude of the gravity.
The paper is structured as follows. Section 2 introduces the shell-shaped potential and the theoretical framework. In section 3, we analyze the ground state configurations-both for contact and dipolar interacting BECs-in the presence of gravitational sag. Then, we discuss two cases: when the gravity is parallel with the z-axis-the polarization direction in dipolar BECs-and when it is slightly misaligned. Section 4 explores the dynamics of small oscillations triggered by a tiny variation in the gravity direction or its strength. In section 5, we extend our study to other sets of parameters and ranges of gravity. Lastly, we summarize our results and provide future perspectives in section 6.
Theoretical framework
We consider N dilute and weakly-interacting dipolar bosons at zero temperature confined in a shell-shaped potential V ext ( r). In the mean-field framework, the Gross-Pitaevskii equation (GPE) provides a good description of a weakly interacting dipolar BEC: where Ψ( r, t) is the condensate wave function normalized to the total number of particles N. The atom-atom mean-field interaction is characterized by the contact-interacting potential with coupling constant g = 4π 2 a s /m, where a s is the s-wave scattering length and m the atomic mass, and the dipolar The dipolar interaction potential for a polarized sample of particles with dipolar moment μ oriented along the z-axis is where | r − r | is the relative distance between particles, θ is the angle between r − r and the direction of polarization and C dd is μ 0 μ 2 (d 2 / 0 ) for a magnetic (electric) dipole moment. Analogously to contact interactions-which are characterized by the s-wave scattering length-, a dipolar effective length can be introduced for dipole-dipole interactions, a dd = C dd m/(12π 2 ). Then, the relative strength of both interactions is defined as the ratio of these two effective lengths, dd = a dd /a s , which in the case of magnetic moments is Shell-shaped BECs have been experimentally realized by employing time-dependent, radio-frequency induced adiabatic potentials within a conventional magnetic trap [5]. In the thin-shell limit, where the thickness of the shell is small compared to its radius, these bubble trap potentials can be approximated by a radially shifted harmonic trap [19,27]: This potential defines a spherically symmetric shell of radius r 0 , with ω ≡ ω x = ω y = ω z and r 2 ≡ x 2 + y 2 + z 2 .
Ground states
From now on, we consider a typical BEC in the mean-field regime: N = 10 4 atoms of 164 Dy polarized along the z-axis with magnetic dipolar moment μ = 10 μ B , scattering length a s = 120a 0 , and mass m = 164 amu. In this case, the relative strength of the interactions is dd = 1.11. We start by characterizing the ground state of the system with and without gravitational sag. In the spherical-shell geometry described before, we obtain the shell-shaped ground state wave function by solving the time-independent GPE with the imaginary-time propagation method in 3D. The dipolar term transforms the GPE into a more complicated equation. However, one can evaluate the dipolar interaction integral, V dd ( r), employing Fourier transform techniques-see [32] and references therein 3 . In particular, we use the FFTW package [33]. The size of the 3D box shown in all the figures is 16 μm × 16 μm × 16 μm.
Dipole-dipole interactions
The dipolar interaction deforms the ground state density to minimize the energy of the system. This effect is a consequence of the anisotropic character of the dipolar interactions and depends on the specific trapping potential. It was observed already in the first dipolar condensates as the appearance of new structured biconcave ground states for some particular values of the strength of the dipolar interactions and the harmonic trap anisotropy [31]. Afterward, this feature was proposed to generate a self-induced bosonic Josephson junction in a toroidally confined dipolar condensate [34,35].
For a spherical shell-shaped confining potential (4), we show in figure 1 the 2D contour plot of the density in the xy(yz) plane in the left (right) panel. Figure 1(a) corresponds to a pure contact interacting BEC and figure 1(b) to a dipolar BEC with the dipoles aligned along the z-axis. Due to the confinement, the condensate has a hollow core and is shell-shaped. The density distribution with only contact interactions-see figure 1(a)-is entirely isotropic, while the addition of dipolar interactions produces a density accumulation around the equatorial region of the bubble-see right panel of figure 1(b). In the equator, the dipoles lie mainly in a head-to-tail configuration, and the resulting interaction is attractive. In the polar regions, the dipoles sit instead side by side, which gives a net repulsive interaction.
This anisotropic effect of the dipolar interaction was already shown in toroidal condensates [36] and, more recently, in spherical shell-shaped potentials [29,30]. Note that, despite the contour plot in the xy plane is almost the same with and without dipolar interactions-see left panels on figures 1(a) and (b)-, the maximum value of the density is higher in the presence of dipolar interactions than without them. The asymmetry in the density contour plot enhances as the relative strength between dipolar and contact interaction, dd , increases [30,36].
Gravitational sag
The effect of gravity can be accounted for by including an additional potential term in the GPE (1), the gravitational sag potential V g [19]. To investigate the anisotropic effects of the dipolar interaction, we consider a general case in which the direction of gravity is not aligned with any of the axes of the trap but lies in the xz plane. The gravitational sag potential reads where θ 0 is the angle between the gravity direction and the z-axis. In the particular case where the gravity and the z-axis are aligned (θ 0 = 0), the gravitational sag V g (z) = mgz is equivalent to adding a vertical displacement to the trap's center [19,37].
Here we investigate the effects of gravity in the same spherical shell-shaped trapping potentials as in the previous subsection, both in contact-interacting and in dipolar BECs. We restrain our study to small strengths of gravity-with small we mean larger than microgravity but smaller than Earth's gravity-since the terrestrial gravity destroys shell-shaped geometries [12] and microgravity effects are negligible in our case.
Gravity aligned with the z-axis
We start by considering a gravity aligned with the z-axis and, for dipolar BECs, parallel with the polarization direction. Figure 2 depicts the numerical results-see figure 1 for comparison without gravity. As we can see in the right panel (yz plane) of the contact interacting case-figure 2(a)-, the atoms fall to the bottom of the shell-shaped potential. The density distribution in the xz plane is the same as in the yz plane due to the axial symmetry of the system: confinement, gravity, and polarization. The distortion of the trap, which results in a partially filled shell, is a clear signature of gravitational sag [19,38].
In the presence of dipolar interactions-figure 2(b)-, the interplay between their anisotropic character, the confining potential, and the gravitational sag leads to a partially filled shell, like in the contact-interacting case, but with a density depletion in the south region. As discussed in the situation with no gravity, the repulsive interaction between two parallel dipoles produces a significant reduction of the density in the bottom of the condensate-see the right panel of figure 2(b). The maximum density band lies slightly below the equatorial region, depending on the balance between the gravity and the dipolar moment of the atoms.
Misaligned gravity
We now explore a more general situation where the gravity and polarization direction are not aligned. Instead, gravity forms an angle θ 0 with the z-axis and lies in the xz plane. In figure 3, we show the 2D contour density plots in the three planes: xy (left), yz (middle), and xz (right). Figure 3(a) corresponds to a pure contact interacting BEC, and figure 3(b) to a dipolar one. For a contact interacting BEC, the 2D contour plot in the yz plane remains almost unaltered as compared to figure 2(a), but the density's maximum in the xz plane tilts in the direction of gravity-marked with a green arrow in the right panels of figures 3(a) and (b). As one can see in the xy plane, this tilting also produces an accumulation of particles in the right part of the bottom region of the shell.
The situation becomes more amusing for dipolar BECs, though, since the polarization axis fixes a privileged direction that breaks the symmetry when the gravity and the dipoles are not aligned. As a result, the density configurations in the xz and yz planes are now different from the contact-interacting case, as shown in figure 3(b). The density contour plot in the yz plane is also similar to the density configuration when the gravity is parallel to the z-axis-see the right panel of figure 2(b). However, changes in the density in the xz plane are more significant now: the maximum of the density lies in the right lobe of the shell and at a larger tilting angle compared to the direction of gravity. Within this region, the dipoles mainly lie head-to-tail, which results in an attractive interaction, whereas in the bottom of the shell (south pole), the atoms sit side by side, and hence the net interaction is repulsive.
It is interesting to stress that this symmetry-breaking phenomenon, shown in the xz plane, is produced by the anisotropic character of the dipolar interactions and depends both on the tilting angle θ 0 and on the strength of gravity. In figure 4, we show the 2D contour plots of the density in the three planes (xy, yz and xz) for a condensate with the same shell-shaped potential and for different small values of the gravity (0.001 g/g E 0.007), tilted an angle θ 0 = −0.1 rad from the z-axis. Figure 4(a) corresponds to the numerical results for a contact interacting BEC, and figure 4(b) to a dipolar condensate. For small values of the strength of gravity (below 0.003g E ), the condensate forms a full shell with a higher density on the bottom. For slightly larger values (above 0.004g E ), the system is no longer a full shell due to the sag effect of the gravity; there are practically no atoms at the top of the trap, and the shape of the condensate is a hollow half shell. When we include dipolar interactions, their anisotropic character counterbalances the effect of gravity. As a result, the hole that appears at the top of the shell is small compared with the contact interacting case.
Dynamics of small oscillations
In this section, we investigate the dynamical response of the system in the regime of small oscillations. To this aim, we trigger the dynamics by an instantaneous change in the tilting angle of gravity or its strength. We obtain the real-time evolution of the system by numerically solving the GPE (1).
In the first scenario-subsection 4.1-, we consider gravity is initially tilted forming a small angle θ 0 with the z-axis but contained in the xz plane, and then it is suddenly aligned with the z-axis at t = 0. In the second scenario-subsection 4.2-, the gravity is parallel with the z-axis (θ 0 = 0), and we analyze the dynamics when slightly changing its strength from g 0 to g. To avoid large oscillations and complicated dynamics, we constrain our study to small variations. Table 1 provides a summary of all the particular cases discussed in this section.
Precision of the frequencies calculated. We have calculated some cases with longer evolution times to check the numerical value of the frequencies. For example, for a BEC with only contact interactions, gravity g = 0.005g E and variation of the angle of 0.1 rad-see figures 5(a) and 6(a)-we obtain the frequency 15.80 Hz for an evolution time t f = 0.3 s, and 15.82 Hz for t f = 1.0 s. We have also checked for other cases that the results start to vary at the second decimal digit. Therefore, the estimated error of the frequencies given in subsections 4.1 and 4.2 is ±0.05 Hz.
Variations in the orientation of gravity
For all the results presented here, we consider θ 0 = −0.1 rad. We have checked that, for a given strength of gravity, the dynamics are the same independently of the sign and value of the initial tilting angle as long as Table 1. Summary of numerical frequencies obtained from the oscillation of the center of mass for the particular cases studied in subsections 4.1 and 4.2. We indicate the angle-with the z-axis-and strength of gravity and which of them is changed to trigger the dynamics. For each situation, we give the frequency for the BEC with only contact interactions (CI) and with both contact and dipolar interactions (DDI), and we indicate if the shape of the ground state is a half shell or a full shell. such angle is small. We open this subsection with a detailed study of two particular cases-one with g > 0.004g E and the other with g < 0.004g E -to see how the shape of the ground state affects the dynamics. The results we show for discussion are the oscillations of the center of mass-figure 5, see numerical frequencies in table 1-and some snapshots of the density during the first period of the evolution- figure 6. Lastly, we analyze how the oscillation frequency depends on the strength of gravity-see figure 7.
Particular cases
We start with a gravity of strength g = 0.005g E such that the system resembles a half shell, as we discussed in section 3. First, we consider a shell-shaped BEC with only contact interactions. In figure 5(a), we present the time evolution of the coordinates of the center of mass: x(t) , y(t) , and z(t) . Since we prepare the system with a slight misalignment of the gravity, the sudden alignment with the z-axis forces the system to bounce back and forth in the xz plane around the new equilibrium position, the z-axis. This behavior appears as a sinusoidal-like oscillation of x(t) as a function of time, while the other coordinates remain almost unaltered. The sinusoidal fit of the numerical evolution of x(t) gives a frequency of 15.80 Hz-we checked that the frequency of oscillation is close to this value when the initial angle |θ 0 | is approximately below 0.15 rad. Figure 6(a) displays a few snapshots-the times shown cover a whole period-of the 2D Figure 6. Snapshots of the 2D contour plots of the density in the xy (left panels) and xz (right panels) planes at different times of the evolution. We discarded the yz planes because the densities remain unchanged, as the variation in gravity is constrained to the xz plane. The initial tilt of the gravity is θ 0 = −0.1 rad, and we study the same two situations as in figure 5: a half shell for g = 0.005g E with either (a) only contact interactions or (b) both contact and dipolar interactions, and a full shell for g = 0.002g E with (c) only contact and (d) also dipolar interactions. In both dipolar cases, the dipoles have magnetic moment μ = 10 μ B . The green arrow shows the initial direction of gravity, which is later aligned with the z-axis to start the dynamics. See summary of cases in table 1.
Figure 7.
Oscillation frequency of x(t) as a function of the gravity with contact (red) and dipolar (green) interactions, where the gravity has an initial tilting angle θ 0 = −0.1 rad. We obtain the oscillation frequency by fitting a sinusoidal function to the numerical data. The panels on both sides show the 2D contour plots of the initial density in the xz plane-which contains gravity-for the different values of the gravity labeled from (a) to (f), both for the contact interacting BEC (left panels) and the dipolar one (right panels). The green arrow, as in the previous figures, marks the initial direction of gravity. Lines between data points are added to guide the eye. The dashed line indicates the frequency of a mathematical pendulum, g/r 0 , with fixed length r 0 .
contour plots of the density in the two planes where the oscillations are observable, xy and xz. Since the shell-shaped BEC is 3D, the oscillatory behavior of x(t) -see figure 5(a)-produces symmetric rearrangements of the density in the other directions, as shown in the xy plane of figure 6(a). Figure 5(b) shows the numerical evolution of the center of mass for a dipolar condensate with an initial tilting angle of gravity θ 0 = −0.1 rad. As we discussed before-see section 3 and figure 4(b)-, the filled region of the shell-shaped potential appears at a larger tilting angle in a dipolar condensate than in a contact interacting one. This feature of the anisotropy of the dipolar interactions produces a larger amplitude of the oscillations of x(t) in dipolar BECs. The sinusoidal fit of the numerical evolution of x(t) gives a frequency of 10.71 Hz; as in the contact interacting case, the other components of the center of mass of the system, y(t) and z(t) , show practically no variations. When the gravity is suddenly aligned, the system oscillates around the z-axis as expected. However, unlike in the contact interacting case, the atoms do not pass over the south pole of the half shell, where the net dipolar interaction is repulsive: their movement is instead constrained to the high-density band that appears below the equatorial region. One can see this behavior in figure 6(b), which shows a few snapshots covering one period in the xy and xz planes.
Lastly, we study the situation where the gravity is small enough-in particular, g = 0.002g E -that the BEC still retains its full shell shape. From figures 5(c) and (d), we can see that the oscillations of x(t) are broader and slower in the dipolar BEC than in the contact interacting one, as in the previous case. The frequencies of oscillation we obtain from the fit are 15.87 Hz (contact BEC) and 10.62 Hz (dipolar BEC), which resemble those from the previous case. If we compare the oscillations-figures 5(c) and (d)-with those obtained for a heavier gravity-figures 5(a) and (b)-, we observe that the frequencies are similar in both the contact and the dipolar BECs, but the amplitudes of the oscillations are much lower now. From the snapshots of the density shown in figures 6(c) and (d), we can see that in the case of a smaller gravity, as expected, the atoms can move around the whole shell-not just the lower part-, which could explain why the oscillations of the center of mass in the x direction are more restricted. We will explore in more detail the effect of gravity in the dynamics in the following subsection.
The role of gravity
Here, we study the dynamics of small oscillations due to variations in the tilting angle-initially θ 0 = −0.1 rad in all the cases-for different strengths of gravity. In figure 7, we show the oscillation frequency of the x coordinate of the center of mass, x(t) , as a function of the strength of gravity. We consider BECs with both contact and dipolar interactions. As figure 7 shows, the oscillation frequency depends on the strength of gravity, and two different behaviors arise: first, starting from the lowest gravity, the frequency decreases as g increases until it reaches a particular value (between 0.003g E and 0.004g E ); then it increases again. These two behaviors are related to the two distinct shapes that can be observed in the ground states of the system for different values of gravity, as is shown in figure 4 and discussed in section 3: when the strength of gravity is small, the ground state of the system is a full shell, while for heavier values of gravity it resembles a half shell.
At small strengths of gravity, the condensate is a full shell with a higher density at the bottom of the trap. Then, an increase in gravity drags more atoms to the bottom of the trap, which leads to a decrease in the oscillation frequency. When considering dipolar interactions, though, their anisotropic nature compensates for the effect of gravity; as a result, the oscillation frequency becomes almost invariant to small changes in the strength of gravity.
On the other hand, at larger values of g, the system is no longer a full shell but a half shell, and the oscillation frequency increases as the strength of gravity does. The angular frequency of a mathematical pendulum is related to gravity g and its length l by g/l. For the contact-interacting case, in particular, we can see that for g > 0.004g E , the frequency approaches this behavior as g grows. For comparison, we show in figure 7 the frequency for a pendulum-see dashed line-assuming a fixed length l ∼ r 0 = 3 √ 3 μm. In the dipolar case, the frequency also grows with gravity for g > 0.005g E . However, now the system does not behave like a pendulum. The atoms bounce from the right to the left lobe, but they never cross the south pole of the shell since they can only move around the high-density band, as we show in figure 6 and discuss in the accompanying text. Therefore, the classical pendulum analogy fails in this case.
Variations in the strength of gravity
In the previous subsection-4.1-, we discussed the dynamics due to variations in the angle of gravity. Here we fix the angle of gravity with the z-axis to zero (θ 0 = 0) and study the system's response to variations in the strength of gravity. As before, we constrain our study to small oscillations, which now translates to small variations in the strength of gravity. We start by preparing the system under a gravity g 0 aligned with the z-axis, and then, at t = 0, we change g 0 to g.
In the first part of this subsection, we study in detail two cases: first, when g, g 0 > 0.004g E , so the corresponding ground states resemble a half shell, as discussed in section 3; then, we set g, g 0 < 0.004g E , with both values of gravity laying in the regime where the system is still a full shell. See table 1 for a summary of the numerical frequencies obtained and figure 8 for some snapshots of the evolution. For these cases, we choose a large change in gravity (|g − g 0 | = 0.001g E ) to see the system's dynamics well. In the second part, we fix a smaller value of the variation (|g − g 0 | = 0.0001g E ) to study small oscillations and compare the frequencies of oscillation obtained for different values of the final gravity g-see figure 9.
Particular cases
In the first case of our study, the initial strength of gravity is g 0 = 0.005g E , and the evolution starts when we abruptly increase it to g = 0.006g E . Within these values of the gravity, the ground state of the system resembles a half shell, as we already mentioned-see the last row in figure 4. In figures 8(a) and (b), we plot the densities at different times-the snapshots cover a whole period of the oscillation-to show the dynamics of both the contact and dipolar cases. In the contact-interacting case- figure 8(a)-, the atoms are mainly located at the south pole of the shell, occupying a region that shrinks and grows periodically due to the increase in gravity. This behavior resembles a spring that oscillates vertically. Here, though, the movement of the atoms is confined to the surface of the shell. In the dipolar case- figure 8(b)-, instead, the band of maximum density appears below the equatorial region. Then, the sudden change in gravity Figure 8. Snapshots of the 2D contour plots of the density in the xy (left) and xz (right) planes at different times of the evolution. Gravity is parallel to the z-axis, and its strength is varied from g 0 to g at t = 0. Since the densities in the xz and yz planes are equivalent, we discarded the yz planes. First case, half shell: g 0 = 0.005g E and g = 0.006g E , with (a) contact interactions only and (b) contact and dipolar interactions. Second case, full shell: g 0 = 0.003g E and g = 0.002g E , with (c) contact interactions and (d) dipolar interactions. The dipole moment, μ = 10 μ B , is the same for all the cases with dipolar interactions. See summary of cases in table 1. Notice the density accumulation that appears in case (c) at the top of the shell-right panel-of the last row: it is an effect of considering such a large change in gravity, |g − g 0 | = 0.001g E . To avoid this, we study the effect of gravity on the oscillation frequency-figure 9-with smaller variations. Figure 9. Oscillation frequency of z(t) as a function of the final gravity g, with initial gravity g 0 = g + 0.0001g E . The red line corresponds to the case with only contact interactions, and the green line to the dipolar interacting BEC. As before, we obtain the oscillation frequency by fitting a sinusoidal function to the numerical data. Lines are added to guide the eye. causes this band to oscillate along the z direction. Since the gravity is parallel to the z-axis, we study the oscillation frequency of the z coordinate of the center of mass through a sinusoidal fit to the numerical results for z(t) . We obtain a frequency of 24.69 Hz for the contact-interacting BEC and 25.12 Hz for the dipolar one. Unlike in subsection 4.1, here we find that both frequencies are similar.
For the second case, where the gravity is small enough that the system has the shape of a full shell, we decrease the initial gravity g 0 = 0.003g E to g = 0.002g E . The dynamics are very similar to the previous case, as figures 8(c) and (d) shows. In this case, however, the oscillation frequencies of z(t) for the contact and dipolar BECs are more different; in particular, we find 16.10 Hz for the contact-interacting BEC and 22.20 Hz for the dipolar one.
The oscillations of the center of mass-in both frequency and amplitude-depend on the strength of gravity. These results match those found in subsection 4.1. In contact-interacting BECs, when the gravity is light-and the system is a full shell-, we find that the oscillations are slow and broad since the atoms can move around the whole shell. For a heavier gravity, the atoms drop to the bottom of the trap. Then, the amplitude of the oscillation decreases while its frequency increases. The differences found in dipolar BECs come from the anisotropic nature of the dipolar interactions, which counterbalances gravity. The band of maximum density is no longer at the south pole but below the equator. Therefore, compared to the contact-interacting case, the oscillations change much less when different strengths are studied.
The role of gravity
Finally, we study the dynamics of small oscillations induced by a variation in the strength of gravity and how these results differ depending on whether the value of gravity is relatively small-and the ground state resembles a full shell-or large-when the system becomes a half shell. We choose |g − g 0 | = 0.0001g E to ensure small oscillations. For these values of g − g 0 , the results we obtain for a given g are equivalent-in frequency and amplitude-either if g > g 0 or g < g 0 . Therefore, we define from now on g 0 such that g 0 = g + 0.0001g E .
In figure 9, we plot the frequency of oscillation of z(t) as a function of the final gravity g. The results resemble those from figure 7. The frequency increases with the final gravity for large values of the gravity-when the system is a half shell-, while it decreases with the final gravity for smaller values-when the system resembles a full shell. Since the dipolar interaction compensates for the gravity, the effect of the variation in strength is more noticeable in the contact-interacting BEC than in the dipolar BEC-as in subsection 4.1-for small final gravities.
Comparing the results obtained for the contact-interacting BEC either with changes in strength-see figure 9, red line-and orientation-see figure 7, red line-, we can see that the frequencies lie within a similar range of values in both cases. The frequencies we obtain now for the dipolar BEC, however, are faster. This increase in frequency is an effect of the anisotropy of the dipolar interactions. In the first scenario-see figure 7, green line-the center of mass moves mainly along the x-axis, and all the dipoles point toward the z-axis. Then, an atom that moves in that direction feels a net repulsive interaction from its neighbors, which reduces the frequency of oscillation. In the second scenario-see figure 9, green line-the center of mass moves instead around the z-axis. Since the resulting interaction between dipoles along the direction of motion is attractive and twice as large as in the previous case, the frequency of the oscillation is much larger.
Extension to other systems
Writing the time-dependent GPE in dimensionless units in the usual way-using the oscillator length a ho = /(mω) as the unit length, ω −1 as the unit time, and ω as the unit energy-one can define three dimensionless constants, which are the coefficients of the gravitational, contact-interacting and dipolar terms of the dimensionless equation: G = g/(a ho ω 2 ), C = 4πNa s /(a ho ), and D = Nmμ 0 μ 2 /(4π 2 a ho ). We also define the dimensionless radius ξ 0 = r 0 /a ho . Then, the system can be scaled in terms of these dimensionless constants. In our configuration, the numerical values are: G = 31.62g, C = 1016.46, D = 268.27 and ξ 0 = 6.62, where we introduce the gravity in units of the terrestrial gravityg = g/g E for convenience, and we also fix dd = 1.11. With these values, one can translate the same physics to another set of parameters that are experimentally accessible in 52 Cr [39,40], 164 Dy [41], or 168 Er [42].
For example, we consider a condensate of 168 Er with N = 10 4 atoms, which has a dipolar moment of 7 μ B . The configuration proposed with our dimensionless constants can be obtained with a scattering length a s ∼ 60a 0 and a trap frequency ω ∼ 2π × 390 Hz. Then, the radius of the resulting shell is r 0 ∼ 2.6 μm, and the range of valid gravities-large enough that its effect is noticeable but does not destroy the system-is, in this case, between 0.008g E and 0.050g E .
More generally, we can use these dimensionless constants and state that our study can be extended to values of gravity between 0.003g E and 0.023g E in units of a ho ω 2 . Consequently, it is possible to explore a different range of gravities by considering another set of parameters for the system.
For instance, going back to the system we consider in this work ( 164 Dy), one could study a range of gravities closer to microgravity by increasing the number of atoms and the radius of the shell, and reducing the frequency of the trap. In particular, with N ∼ 3 × 10 4 , ω ∼ 2π × 11 Hz and r 0 ∼ 15.6 μm, the range of valid gravities lies between 4 × 10 −5 g E and 26 × 10 −5 g E .
Summary and outlook
In the present work, we have studied the statics and dynamics of shell-shaped condensates with contact and dipolar interactions in the presence of a small gravity. We have constrained our study to gravity values above microgravity-and thus non-negligible-and below terrestrial gravity, which destroys shells.
First of all, we have analyzed the ground states of the system in three cases: without gravitational sag, with gravity parallel to the z-axis-which is the polarization direction we considered for dipolar BECs-, and with a small gravity misaligned with the z-axis and contained in the xz plane. We have discussed the effect of the dipolar interactions in either of the three cases. Next, we have done a more general analysis of the ground states to examine the effect of gravity's strength. Observing the shape that the system displays, we have defined two regimes: a full shell for small gravities and a half shell for larger gravities. These two regimes play a relevant role in the dynamical behavior of the system. Later, we have studied the dynamics of small oscillations due to changes in the orientation and strength of gravity. For each of those two scenarios, we have studied two particular cases-comparing the full shell and half shell regimes-, and we have analyzed, more generally, the effect that gravity has in the behavior of the oscillations when the variation-in angle or strength-is fixed and very small. With this, we have seen how the two static regimes translate into two distinct dynamical behaviors: the oscillation frequency increases with gravity for large values of gravity (half shell) while it decreases for smaller values (full shell). Additionally, we have compared the results obtained for contact-interacting BECs with those obtained for their dipolar-interacting counterparts. We have discussed that dynamics due to changes in angle or strength are equivalent in the contact BECs, but the dynamical behavior differs in dipolar BECs due to the anisotropic nature of their interactions, which counterbalances the effect of gravity. We have shown that even though the dipolar interaction adds a privileged direction to the one already defined by gravity, the resulting shells with gravitational sag and dipolar interactions present a configuration that, compared to contact-interacting one, is less sensitive to misalignments and perturbations in the gravitational sag.
Finally, we have extended our study to other systems, and we have seen that the range of valid gravities depends on the particular system in consideration. Therefore, one could choose a set of parameters-namely the mass, the frequency, and the number of atoms-such that the gravitational effects become non-negligible but non-destructive at much smaller gravities than the ones we have studied here. We want to stress that the physics of shell-shaped condensates under gravitational sag is not limited to dysprosium BECs, as we have consider. It can also be exported to other condensates with controllable contact interactions, either with and without dipolar interactions.
The atomic cloud in this system is not only sensitive to changes in its orientation, but it is also sensitive to small gravitational variations, either in its direction or strength. However, one cannot discern directly from our results-especially in the contact-interacting case-if the cause of the oscillations is a change in gravity's orientation or its strength. Studying instead a more simplistic configuration, such as a ring-shaped BEC, may shed some light on how to discriminate between these two situations. In any case, these findings could pave the way to the experimental realization of a gravity or accelerometer sensor intended for small gravity conditions. Monitoring gravity and its changes from space in satellite missions-see [43] and references therein-is another possible application of this system. To conclude, we want to point out the experimental feasibility of the proposed system. We have used values for the experimental parameters that are currently available in laboratories.
Due to the complexity of the 3D dynamics of this system, a more exhaustive theoretical analysis is beyond the scope of this paper. However, we consider that a new proposal of a gravity sensor with restricted low dimensional dynamics may provide a more analytical insight of the system. Discussing other configurations such as toroidal BECs under gravity conditions will be addressed elsewhere. | 9,241.4 | 2021-07-09T00:00:00.000 | [
"Physics",
"Geology"
] |
Considerations about learning Word2Vec
Despite the large diffusion and use of embedding generated through Word2Vec, there are still many open questions about the reasons for its results and about its real capabilities. In particular, to our knowledge, no author seems to have analysed in detail how learning may be affected by the various choices of hyperparameters. In this work, we try to shed some light on various issues focusing on a typical dataset. It is shown that the learning rate prevents the exact mapping of the co-occurrence matrix, that Word2Vec is unable to learn syntactic relationships, and that it does not suffer from the problem of overfitting. Furthermore, through the creation of an ad-hoc network, it is also shown how it is possible to improve Word2Vec directly on the analogies, obtaining very high accuracy without damaging the pre-existing embedding. This analogy-enhanced Word2Vec may be convenient in various NLP scenarios, but it is used here as an optimal starting point to evaluate the limits of Word2Vec.
Introduction
In Natural Language Processing (NLP) problems approached with neural networks, individual words, that typically belong to large vocabularies, must be transformed into compressed representations. Although the state-of-the-art of NLP is today 1 3 Considerations about learning Word2Vec almost totally based on the use of Transformers [10,30,34], the difficulty of training such structures (both related to computational costs and the need for huge datasets) often leads to a preference for different approaches [5,11,17,18,26] where each word needs to be individually coded.
In these cases, it is therefore natural to look for codings that account for semantic relationships between words (what in [33] is called attributional similarity). This leads to the creation of a so-called word embedding (sometimes named "semantic vector space" or simply "word space"), i.e., a continuous vector space in which the relationships among the vectors is somehow related to the semantic similarity of the words they represent. The ways of creating these spaces are almost entirely based on the distributional hypothesis [14][15][16]25], that is, on the idea that contextual information alone is able to define the semantic connections that exist between individual words. 1 Through the use of very large corpora, these models typically produce vector spaces with hundreds of dimensions to grasp different levels of similarity between words. Similarity proportions such as "Man is to Woman as King is to Queen" are thus reproducible through vector arithmetic [24], allowing to express the relationship between words as geometric proximity. For example, the sum vector obtained from the equation "King" − "Man" + "Woman" returns the vector relative to "Queen" as the closest neighbor, which is obviously extremely useful in NLP. It should be noted that, in general, the uniqueness of the vectors is not mathematically guaranteed but is always supposed to be verified, given the very low probability of the opposite happening.
Starting from the work in [9], such semantic vector spaces began to be learned through neural models. To date there are numerous word embedding models (a fairly complete list is present in [2]), but the main scheme that makes use of neural networks is known by the name of Word2Vec (W2V) [22,23]. The production of a word embedding through W2V can take place in two different ways: Continuous Bag-of-Words (CBOW) and Skip-Gram (SG). The two approaches rely on different management of the input and the output variables, but basically use the same structure of the network. In the following, we will focus only on the SG approach, which is the most used in practice and studied in the literature [3,19,21]. The success of this structure is undoubtedly linked to its performance which on the task of analogies proves better than both classic techniques, such as Singular Value Decomposition (SVD) [19,20] and Latent Semantic Analysis (LSA) [3,4], and modern countbased methods, such as GloVe [20,27,29].
Although many authors have tested Word2Vec on analogies [13, 19-21, 24, 28], rarely enough attention has been given to the modalities in which such embeddings are obtained. In this work, we try to shed light on the performance of W2V as the number of epochs changes, showing how the particular behavior of the learning rate justifies an individual analysis of the single epochs. This innovative way of proceeding highlights elements of extreme interest, including: the inability of W2V to learn syntactic, the absence of overfitting and the stabilization of learning around a maximum value. Finally, it is shown how to improve W2V through an ad-hoc training directly on the analogies, achieving high accuracy by introducing very few adjustments to a pre-trained embedding. This process highlights the limitations of Word2Vec, demonstrating that it is insensitive to better starting conditions.
In Sect. 2, the details of W2V are introduced, in Sect. 3, the elements that emerge from the tests performed are examined, in Sect. 4, the analogy-enhanced version of W2V is shown. Conclusions and comments are included in Sect. 5.
Word2Vec
Given a vocabulary V = {w 1 , w 2 , … , w V } , the W2V SG structure (Fig. 1) derives from a two-layer neural network with linear activation (identity) in the hidden layer and no bias, mathematically expressed 2 as: where H ∈ ℝ V×M , Z ∈ ℝ M×V , i is the V-dimensional one-hot row vector relative to the generic word w i at the input of the neural network, h h h i ∈ ℝ M is its related embedding, i ∈ ℝ V is the linear combination before the activation functions, and y y y i ∈ ℝ V is the network output after the activation function (⋅) . The dimensions of the input and output layer of this network are therefore the same and equal to the size of the vocabulary V = |V| , while the size M of the hidden layer represents a hyperparameter chosen arbitrarily to be much smaller than V. Figure 1 shows the architecture with two different activation functions that will be discussed later.
3
Considerations about learning Word2Vec w appears throughout C a number of times equal to (w) . The original corpus C is pre-processed to produce a smaller reference corpus C , from which all the words that occur less than T times are eliminated: This pre-processing removes writing errors, or words that are too rare to be considered in the embedding. Then the distinct words that belong to the reference corpus C constitute the vocabulary V , for which the empirical probability is:
Learning the embedding
According to a criterion that will be specified in the following, the training of the network requires a set of input/output (i, o) ordered pairs P = {(w i [ ], w o [ ])} , generated in advance from the reference corpus C . Every single pair of words (w i , w o ) is associated with its relative one-hot vectors ( i , o ) that represent, respectively, the input and desired output of the network. Training takes place through a classic stochastic gradient descent (SGD) algorithm with instantaneous categorical cross-entropy loss and gradient: where o j , y i j and i j represent the j-th element of the vectors o , y y y i and i , respectively, and when at the network output there is a softmax activation function (Fig. 1a). Note that i o denotes the component of i corresponding to the non null element of o . Since the use of pure softmax at the output layer would represent an excessive computational cost (as the network, although simple, has a decidedly large number of parameters due principally to the dimension of the vocabulary V), the typical alternatives fall either on adopting an approximation of it (called "hierarchical softmax", and which we will not discuss here), or resorting to a technique known as "negative sampling" [23]. In this case, the network is modified to the architecture of Fig. 1b, which has sigmoid activation function on each neuron of the output layer. The computational cost is reduced by backpropagating only N randomly chosen errors of the V − 1 ones, relating to the output words w n that do not correspond with the word w o present in the single pair (i.e., n ≠ o ). The negative sampling then turns the problem into a multi-label classification one, where the instantaneous binary cross-entropy loss and its gradient are: The N random words of Eq. (5), that act as "negative" set for that single training pair, are sampled from the heuristically modified "unigram distribution" of the words in the corpus C [24]:
Pairs generation
Since the presence of common words (such as "the", "of", etc.) is very high in regular texts, a classic problem in creating a set of training pairs lies in making sure that they are not considered too often [23]. To achieve this, W2V modifies the true word empirical probability by defining a "keeping probability" as: where is a heuristically-determined value, typically set between 10 −3 and 10 −5 (in the following we take it equal to 10 −5 ). The nonlinear transformation (7) is highly peaked around small probability values and reduces the effect of very frequent words. Each single word w of the corpus is then analysed using the following procedure: take a uniformly distributed random values r ∼ U (0,1) , i.e., extracted according to a uniform distribution in [0, 1], if r < P keep (w) the word becomes a "central word", otherwise is discarded. The corpus C is also divided into sentences (each containing at most a maximum number of words). Once a central word has been chosen, two windows of words are built within the sentence: one towards its right and the other one towards its left. The words that belong to these windows constitute the "context words" for that central word. The window size is not fixed but varies dynamically and randomly on each epoch and for each central word considered, according to a uniform distribution in [1, W] (with W hyperparameter defined at the beginning) [22]. In this way, the words closer to the central words are considered more times but also words further away are too. Also note that, being limited by the extremes of the sentence, the two windows do not always have the same size. Finally, each central word is associated with each of the words in its context to generate the training pairs. For each pair, the central word represents the input while the context word the output.
Word embedding evaluation
The main problem after having obtained a word embedding is precisely how to test it. Unfortunately, semantic proximity is indeed difficult to prove, and probably all tests (whether extrinsic and intrinsic [32]) prove arbitrary or subjective in evaluating this property. The use of analogies, however, has been a standard approach for some time [13, 19-21, 24, 28], although it should be noted that they are certainly not perfect. For example, if we consider the semantic proportion "Athens is to Greece as Tehran is to..." (and although the correct answer is undoubtedly "Iran") it is hard to assess whether or not the possible answer "Persia" should be declared as an error. Natural language is in fact usually highly polarized, as it also depends on sociocultural influences. However, the use of triads of words certainly makes the search field of the desired more limited than all other possible tests, making it one of the most important tests in this field.
In the present study, we use the most common test set of analogies, known as Google Set and included in the original distribution of the W2V package [22].
It consists of 19,544 analogies divided into 14 categories, typically grouped into "semantic" (5 categories with 8869 analogies) and "syntactic" (9 categories with 10,675 analogies) macro-areas; an example table is presented in [22]. Each of the analogies in this dataset can be written symbolically as: where typically the word w b ⋆ is chosen as the test target. For example, if we have: "Man: Woman = King: Queen", with w a (Man), w a ⋆ (Woman) and w b (King), we expect to fill-in the answer with w b ⋆ (Queen). In all the tests performed, however, it was decided to totally neglect all the analogies that contain one or more words not present in the vocabulary. 3 Nevertheless it is good to specify that, since the goal of this work is not to compare different models, this choice is completely irrelevant from our point of view.
Following previous works [13,21,24], to provide the answer for the single analogy we use the "classical" cosine distance, also known as 3CosAdd [19]. The cosine distance has the advantage of not excessively weighing the amount of contributions obtained from the backpropagation of the gradient during the training phase, which can lead to excess increase or decrease of the single vector norm. In this way, the balance achieved with respect to the other vectors present is mainly considered. More specifically (assuming the following relations: , the answer will be the word whose index is: where the set H e is the collection of all the embeddings except h h h a , h h h a ⋆ and h h h b . In the network of Fig. 1, this corresponds to an amended embedding matrix: where 1 1 1 is the V-dimensional all-ones vector, and s = b + a + a ⋆ . By eliminating the rows relating to the words of the analogy used in the first part, the amended matrix H e reflects the classic attitude that seeks the solution in the word space from which the words used in the sum have been excluded. Note that this also implicitly imposes that all analogies are constructed so that the searched word is never contained in the triad used in the query.
Matrix H can also be normalized by row in advance, generating a new matrix Ĥ that now contains all normalized embedding ĥ h h . This preventive normalization allows to calculate the cosine distances through simple scalar products, since (by it is possible to observe that: , in the index position of the word w b ⋆ , the response of the network is considered correct (increasing the accuracy), otherwise it is considered incorrect.
The importance of learning time
In this paper, we focus on various issues related to the results obtained from training W2V. In our experience, also in obtaining W2V for the Italian language [12] and in its usage [11], we found that some important choices have become so common that they are used almost mechanically, without questioning about their effectiveness.
More specifically, what is the correct number of epochs that need to be used before we can declare an embedding satisfactory? What is the role of the learning rate?
More importantly: what is the effect of these choices when performances are studied in comparison for both accuracy and loss?
In fact, regardless of corpus used (which certainly impacts strongly on the quality of the generated embedding), no one seems to have ever bothered to analyse the behavior of the W2V as the number of epochs varies, sometimes making comparisons with other word embedding methods without even reporting this parameter [7, 13, 19-21, 24, 28, 32]. Our goal is therefore precisely to fill this gap, observing the behavior of the embending in the different epochs as the training hyperparameters vary.
We describe here our experience on several simulations applied to the classic Text8 corpus, composed of the first 100 MB of cleaned text of the English Wikipedia dump of Mar. 3,2006. From this corpus, all the words repeated less than T = 5 times have been removed, thus obtaining a vocabulary composed of V = 71,290 words. Although much larger corpora are usually used for more recent W2V embedding [1,12], we chose this one because we consider it sufficiently typical for focusing on the issues outlined above. 4 On the other hand, the aim of this study is primarily to highlight the relationships that exist between the different results. Since the modification related to the change of the hyperparameters is substantially linked to the training methods, the relationships between them can be rightly considered independent from the corpus (to which only a modification of the absolute accuracy values, which are secondary here, will be linked).
Learning rate
The first important consideration to make, also to better understand the tests performed later, concerns the learning rate. A typical W2V training using the SGD is in fact based on a variable learning rate, where a starting value (generally in the order of 10 −2 ) and a final value (generally in the order of 10 −4 ) are defined with a step size decaying linearly as a function of the number of epochs used. This classical machine learning technique [8,31] should aim to decrease the loss, allowing a better approach to the minimum compared to a fixed learning rate. Figure 2a shows the behavior of the average loss, with a linear and a fixed rate, as the number of epochs progresses. Note that already after a few epochs, and contrary to what one would expect, the fixed rate (here 10 −2 ) finds a better minimum than a typically used degrowth rate (here from 10 −2 to 10 −4 ). This may be due to the highly non-convex nature of the cost function, which should therefore lead to preferring a different choice from the one commonly used. The surprising result on the analogy test set shown on Fig. 2b is that exactly the opposite happens with respect to the loss function: a substantial increase in the accuracy (from 27.3 to 32.2%) is obtained for the variable learning rate.
Our interpretation of the results is that W2V maximizes accuracy not only by minimizing the loss function (which means mapping the co-occurrence matrix in the best possible way), but also by trying to reduce the link between words and their distribution as the connection between them increases. Probably, the linear decrease of the learning rate allows to fix the rarer words within the embedding space, giving them a more and more reduced possibility of movement; because the second matrix Z is gradually less "conditioned" by these words. In addition to minimizing the loss, the use of many epochs is therefore also necessary to make the learning rate decrease smoother, allowing a gradual stop of the movement of vectors within the embedding space.
Simulations and comparisons between hyperparameters
In order to understand what happens when the number of epochs changes, you cannot therefore simply train W2V over a large number of epochs and see how the training proceeds at each step. In fact, in order to have a correct computation of the decreasing learning rate for the current trial, we must ensure that the learning rate reaches its minimum.
Using this different way of looking at learning outcomes over different epochs, extremely interesting behaviors (never been highlighted before, to our knowledge) are observed. Each test presented below, therefore, was performed respecting this rule, and the results were averaged over more simulations. 5 In addition, the tests shown were performed avoiding to parallelize the code within the single epoch, 6 since the SGD would strictly not allow parallelization and an attempt was made to avoid possible influences of uncontrollable elements.
Considerations about learning Word2Vec
Following this way, Fig. 3 shows the trend of a W2V training with: learning rate from 0.025 to 0.0001, negative samples N = 5 , maximum window W = 5 and embedding size M = 300 . The analogies used for the accuracy test have been divided into the two categories syntactic and semantic as described in Sect. 2.4. The graph reports the average percentages on the two categories, so that the incidence is assessed regardless of the absolute number of elements present in each category.
From the curves, it can be observed that, relating to the syntactic part, the quality of the embedding is essentially independent from the number of epochs. This element is also present in all the other simulations and highlights an extremely important fact: W2V does not seem to be able to learn syntactic. This means that the various comparisons between W2V and different word embeddings cannot take into consideration datasets mainly based on syntactic, because this would induce a significant bias in the evaluation of the results.
Furthermore, W2V does not really seem to go overfitting, as in fact the trend on the test set does not decrease but stabilizes. This means that there is a "saturation" value for learning W2V which should always be reached in order to perform a correct comparison with any other possible word embedding method.
Negative sampling
The results of other tests for different choices of the parameter for negative sampling ( N = 2, 5, 10, 15 ) are shown in Fig. 4a. It can be observed that, except for a few epochs with larger values, as the epochs progress, the various configurations tend to be almost identical (especially beyond 300 epochs). Moreover, the speed of convergence to the steady state value for values N > 10 does not seem to undergo variations, allowing this choice to be relaxed (for example for computational reasons).
Size of the embedding space
In Fig. 4b, a comparison with variable size of the embedding space is reported ( M = 100, 200, 300, 500 ). The results quite clearly reveal how the quality of embedding is very tied to its size.
The peculiarity, however, is that the achievement of better performances under the semantic profile (reached with dimensions approximately equal to the square root of the vocabulary) does not coincide with the performance of the syntactic part, which is always worse. This shows how the accuracy of the syntactic part is actually determined only by the compression level of the intermediate space, confirming even more how the W2V training is not able to condition it. It should be noted that a larger space makes things worse from every point of view, probably because this makes the network able to better map the matrix of co-occurrences (paradoxically managing to reduce the loss better).
Window size
On the other hand, the change in the window size between small values (from 2 to 5), shown in Fig. 5a, seems to be of little importance. In fact, neglecting window size 2, which in 50% of cases involves a single word to the right and left (showing a clear inability to approach the performance of the other windows), it can be observed that small differences on small windows tends, at increasing epochs, to converge towards similar accuracy. Different is the case of the results reported in Fig. 5b, and obtained for large window sizes ( W = 5, 10,15,20 ). Here, in fact, a fixed (and sufficiently large) size increment for the window leads, in steady state conditions, to an equally rigid increase in performance; which are translated upwards both as regards the semantic and the syntactic part (albeit in a reduced way).
Larger windows also tend more rapidly to high accuracy, almost contradicting the distributional hypothesis. In reality, remembering the paragraph Sect. 2.3 and given a window of size m, the probability of forming a pair for a word placed at a distance of d words from the considered one, turns out to be equal to: and therefore the increase in the size m of the window also increases the probability of the closest words to form a pair with it. It seems to underlie the need for a Gaussian window, which weighs more the neighboring words. Nevertheless, the use of a larger maximum window W also leads to the creation of a larger training dataset, which allows to find a better connection between words. This could also be the reason for the improvement of the accuracy on the syntactic part, which is probably only linked to the relationship between the pairs to be mapped and the space available.
Finally, the "positive" conditioning of a very common distant word will certainly be canceled by the many "negative" conditioning that will occur, while the less common words will create exceptions, fortifying connections even if placed at a greater distance. In fact, it should be noted that a typical W2V training (mainly to avoid high computational costs) does not consider shuffling all the training pairs, but at the most it mixes sentences. In other words, SGD training on the word pairs often takes place in the order in which the words occur within the sentence, and therefore even if a distant word falls within the window it would also be conditioned by the words between them.
Analogy-enhanced Word2Vec
In this section, we report the results on training W2V directly on analogies. The structure used (shown in Fig. 6) reflects the test phase through a neural network with linear activation (identity) in the hidden layer, but adding a softmax activation at the output. Note how the connections have been amended. The softmax function tends to focus the backpropagated gradient mainly on the vectors "closest" to the vector of interest, while modifying the other vectors (which however produce some relevance) as little as possible.
The loss is calculated through the cross-entropy function (Eq. 4) and assuming that the desired output is the one-hot vector b ⋆ relative to the fourth word w b ⋆ : Due to the relatively few analogies present, training of the W2V cannot be based solely on them. Therefore, the starting matrix is taken from an already trained network on 40 epochs, with standard configuration parameters. Further training on analogies was performed for only 15 epochs, using a subset of 20% of them with a fixed learning rate equal to 0.01 and normalizing all the vectors at the beginning of each training step (i.e., at each matrix modification). Although the analogies are randomly permuted before being chosen, even such a simple configuration leads to results around 97% of accuracy on the whole set. This result is however conditioned by the structure of the dataset, which always uses the same words and permutes them within the various analogies. Despite this, it is important to note that at the end of this training process only about 450 vectors relating to the searched words undergo an angle shift, while all the other vectors, remain practically immobile. In other words, the network better positions only the vectors that do not provide the correct solution, and the fact of having amended the output matrix allows this shift to be made by fixing the other three points in space. This indicates that the analogies present in the test set are well characterized by embedding, and although the solution does not appear in the first position, it is still represented (in most cases) in the top ones. The embedding generated by this structure can therefore certainly be used as a better basis (since the analogies themselves characterize its goodness) for subsequent NLP problems, especially if the number and variety of analogies are increased.
In this case, however, the interest in using this network to generate better embedding is related solely to highlighting the limits of W2V.
One might actually expect that, given the relatively low number of modified vectors and the better position of the vectors obtained (relatively to the analogies), another embedding training (through the classic W2V scheme) will not excessively alter the advantages introduced by the second training. Instead, even if you set the learning rate to a very low value ( = 10 −4 ) and lock H by training Z alone for a certain number of epochs (in order to adapt the second part to the changes introduced in the first), further training gradually destroys all the advantages obtained (Fig. 7).
This attitude confirms that the W2V methodology always leads to a point of stability that depends on the dataset used, and that therefore the choice of a better starting point is not able to improve the final solution. On the other hand, the function that W2V tries to minimize is not very connected to the analogy test, which basically leads it not to recognize a better situation from that point of view. In this work, we have analysed Word2Vec in the Skip-gram mode looking at different issues related to learning. Through a careful analysis, it has been noted that the model demonstrates better performance on the analogies mainly through the relationship it creates by contrasting the minimization of the loss function. The way in which the learning rate descends at each epoch, which goes in an opposite direction with respect to the classic objective of minimizing the loss, seems in fact to be fundamental in ensuring the creation of relationships between word vectors. This led to training the model by evaluating each epoch independently, in order to observe the results without being conditioned by the learning rate. The observation of learning as the number of epochs increases has also clearly shown that Word2Vec is unable to learn syntactic relationships, which instead seem to be mainly due to the link between the size of the training set and the available space. Furthermore, the quality of the embedding on the test set stabilizes on a maximum value, which therefore (regardless of computational and memory costs) should always be achieved if W2V is to be assessed against other methodologies. We have also shown in our analysis how the various hyperparameters influence learning differently. The trend with varying negative sampling, for example, represents a further reason for the benefit of training over many epochs. Similarly, the analysis of the window size has shown that performance improves for higher values, and this happens even if the training is performed for a few epochs (compensating in some way the cost). On the contrary, the choice of the embedding size requires extreme care since a significant reduction in performance is due to both too small and too large embedding.
Finally, we have proposed to further train a given embedding directly on analogies. The use of an adequate structure, in fact, allows to obtain performances in the order of 97% by modifying only a few vectors. Changing only some vectors could result in better "semantic" embedding, which could be used as a basis for the resolution of further NLP problems. In any case, through this better solution, it is shown how W2V cannot maintain the advantage obtained. That is, the structure of W2V proves to be extremely dependent on the corpus, making semantic proximity only a side effect of its true objective function.
Funding Open access funding provided by Università degli Studi della Campania Luigi Vanvitelli within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 7,343 | 2021-04-06T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Human-Centered Design Science Research Evaluation for Gamified Augmented Reality
As augmented reality (AR) and gamification design artifacts for education proliferate in the mobile and wearable device market, multiple frameworks have been developed to implement AR and gamification. However, there is currently no explicit guidance on designing and conducting a human-centered evaluation activity beyond suggesting possible methods that could be used for evaluation. This study focuses on human-centered design evaluation pattern for gamified AR using Design Science Research Methodology (DSRM) to support educators and developers in constructing immersive AR games. Specifically, we present an evaluation pattern for a location-based educational indigenous experience that can be used as a case study to support the design of augmented (or mixed) reality interfaces, gamification implementations, and location-based services. This is achieved through the evaluation of three design iterations obtained in the development cycle of the solution. The holistic analysis of all iterations showed that the evaluation process could be reused, evolved, and its complexity reduced. Furthermore, the pattern is compatible with formative and summative evaluation and the technical or human-oriented types of evaluation. This approach provides a method to inform the evaluation of gamified AR apps. At the same time, it will enable a more approachable evaluation process to support educators, designers, and developers.
INTRODUCTION
Currently, there is fragmentation in how educators and designers analyze and evaluate immersive gaming experiences. Most educational game studies focus solely on the applied use of the game (e.g., usability or motivation) in the classroom and not on the design methodology and application evaluation (Sommerauer, 2021). Therefore, most educators and developers are left to start from scratch in the design journey citing a lack of reflective research and published methodology. However, as Nelson and Ko (2018) recommend, "the community should wholeheartedly commit to focusing on design and not on refining general theories of learning." The purpose of design is a translation of existing situations into preferred ones (Simon, 2019). Moreover, "design science . . . creates and evaluates IT artifacts intended to solve identified organizational problems" (Hevner et al., 2004). Emerging from design thinking, Design Science Research Methodology (DSRM) is an iterative methodology aimed at rigorous development of solutions to problems, mainly in the Information Systems (IS) discipline. DSRM solution results in an artifact or multiple artifacts (Peffers et al., 2007). An artifact is commonly known as something created by human beings for a particular purpose (Geerts, 2011). There are four different types of artifacts: concepts, models, methods, and instantiations. DSRM holds that the artifact must be able to solve an important problem.
DSRM has been used as a viable design method for implementing AR solutions (Vasilevski and Birt, 2019a;Vasilevski and Birt, 2019b), where it was used in designing and developing an educational AR gamified experience to bring people closer to the indigenous community. DSRM has also been proposed as the best-suited framework for implementing gamification as an enhancement service (Morschheuser et al., 2018). DSRM has also been used in the literature on education in designing gameful educational platforms useful to educators and students (El-Masri et al., 2015). Differentiating DSRM from the regular design, Hevner et al. (2004) hint that design science should address an unsolved relevant problem in a unique and novel way or provide a more effective or efficient solution to an already solved problem. "DSRM is intended as a methodology for research; however, one might wonder whether it might also be used as a methodology for design practice" (Peffers et al., 2007).
To design and create compelling and engaging learning, human-centred design is crucial. The benefits of integrating students, teachers, experts in education and technology, and the designers and developers in a collaborative creation process cannot be ignored (Bacca et al., 2015). As education steadily moves from lecture-based to more experiential learning approaches, games can be beneficial, providing hands-on experiences and real-world environments. However, achieving this and measuring its success presents a practical problem to the academics (El-Masri et al., 2015).
DSRM is a systematic process of developing a solution to a known problem, a model that consists of a nominal sequence of six iterative activities (phases) (Peffers et al., 2007). In simple terms, these are 1) Problem identification and motivation phase, which defines the problem and justifies the solution. The definition of the problem should be as concise and straightforward as possible. Next, the problem is split into small solvable chunks that can carry the complexity of the solution in the form of an artifact (s). 2) Define the objectives for a solution phase, infers the objectives of the solution from the problem, and inquires what is possible and feasible. 3) Design and development phase uses the design paradigm to establish the functional and structural requirements for the artifact (s), followed by the actual creation of the artifact (s) that was specified in the past phases. 4) Demonstration phase uses techniques as simulation, experiment, case study, proof, or any appropriate technique to demonstrate the capability of artifact (s) to solve the problem(s). 5) Evaluation phase, through observation and measurement, evaluates "how well the artifact supports a solution to the problem" (Peffers et al., 2007). During this activity, the objectives of the solution are compared against the observed results from the artifact's use during the demonstration. Therefore, measuring success, or the ability of the artifact to solve the problem, is paramount. 6) Communication phase involves disseminating the inquired knowledge about the artifact and its design, effectiveness, novelty, and utility to researchers and relevant audiences.
Design science addresses the research by creating and evaluating artifacts designed to solve the identified problems in an organization. Evaluation is crucial in providing feedback information and a more in-depth understanding of the problem. This is especially important in education, where feedback and artifact design are core activities. Subsequently, the evaluation would improve the quality of the design process as well as the product. "Evaluation provides evidence that a new technology developed in DSR 'works' or achieves the purpose for which it was designed" (Venable et al., 2012). We think that all solutions in education should be design outcomes that follow the best practices and apply rigor in the process of design. Therefore, using design thinking and testing the solution capability to solve the problem should be paramount.
The reasoning and strategies behind the evaluation can be distinguished in terms of why, when, and how to evaluate. The equally important question is what to evaluate in regard to which properties of the evaluand should be investigated in the evaluation process (Stufflebeam, 2000). When considering the approaches, the evaluation can be approached quantitatively or qualitatively (or both). In terms of techniques can be objective or subjective (Remenyi and Sherwood-Smith, 2012). Regarding the timing, the evaluation can be ex-ante or ex-post (Irani and Love, 2002;Klecun and Cornford, 2005), or before a candidate system is conceptualized, designed or built, or after, respectively. Considering the functional purpose of the evaluation, there are two ways to evaluate: formative and summative (Remenyi and Sherwood-Smith, 2012;Venable et al., 2012). The primary use of formative evaluation is to provide empirically-based interpretations as a basis for improving the evaluand. At the same time, summative focuses on creating shared meanings of the evaluand considering different contexts. In other words, "when formative functions are paramount, meanings are validated by their consequences, and when summative functions are paramount, consequences are validated by meanings" (Wiliam and Black, 1996). The evaluation can also be sorted by its settings, where there are artificial and naturalistic evaluations as two poles of a continuum. The purpose and settings classification of the evaluation is combined in the Framework for Evaluation in Design Science Research (FEDS) (Venable et al., 2016). FEDS extended revision of the extant work by Pries-Heje et al. (2008), Venable et al. (2012).
The aim of this study is to test the application of DSRM to support the production of human-centered design approaches for AR games, thus addressing the research gap, which is the lack of design methodology and application evaluation for the purpose of immersive games. Our study provides a robust, published evaluation approach available to educators and design researchers, particularly novice ones, which can simplify the research design and reporting. This supports designers and researchers to decide how they can (and perhaps should) conduct the evaluation activities of gamified augmented reality applications.
METHODOLOGY
Below, we highlight the iterative DSR methodology (Peffers et al., 2007) process that was used to produce the evaluation pattern by using FEDS (Venable et al., 2016). We conceptualized, developed, and evaluated a solution in the form of an indigenous artworks tour guide mobile app ( Figure 1). The app incorporated three major components, AR component, gamification component, and micro-location component from which the evaluation pattern was derived. The app used the components together to replicate an existing indigenous traditional tour on a BYOD mobile device (see).
Through DSRM, we demonstrated and evaluated three iterations (I1, I2, and I3) of the solution capability to solve a problem. The first DSRM iteration (I1) is presented in Vasilevski and Birt (2019b), where we performed a comprehensive analysis of previously published relevant applications to better understand the problem. We focused on the initial development and testing of the indigenous educational experience and usability focusing on AR and user interface. The solution only partially met the predefined objectives resulting in a second iteration. The second iteration (I2) is presented in Vasilevski and Birt (2019a), where the focus was on optimizing the implementation of the AR component and learning experience. The solution partially met the objectives resulting in a third iteration, yet to be published. The third iteration focused on the implementation and evaluation of th gamification component and its interplay with AR. The data are presented in the supplementary materials and online repository (DOI 10.17605/OSF.IO/CJX3D). Each iteration uses a specific methodological approach which is highlighted in the demonstrations section below. All phases of the study were conducted under ethical guidelines in accordance with institutional ethics.
Demonstrations
The demonstration of I1 took place in artificial settings, using qualitative data collection with a small population sample of five experts (n 5). The experts' group consisted of an indigenous culture expert, user-experience expert, service-marketing expert, sense of place expert, and exhibition organization expert. The AR component was tested by following objective measurements, qualitative analysis, and usability evaluation techniques (Billinghurst et al., 2015). The usability testing activity was conducted following the guidelines for usability testing by Pernice (2018) from the Nielsen-Norman group (www. nngroup.com). The data was collected via observation and semi-structured interviews. The questionnaire from Hoehle and Venkatesh (2015) was adapted to generate ten focal points for the data collection regarding the artifact's interface. It incorporated ten user interface concepts as objective measurements: Aesthetic graphics, Color, Control obviousness, Entry point, Fingertip-size controls, Font, Gestalt, Hierarchy, Subtle animation, and Transition. Venkatesh and Davis (2000) TAM2 Technology Acceptance Model was used to derive data focal points concerning the level of performance of the solution and how helpful, useful, and effective it was. We analyzed the data by using thematic analysis (Braun and Clarke, 2006).
We performed the demonstration of the I2 part in-situ and part in artificial settings. For the in-situ demonstration, we approached the same group of five experts (n 5) from I1. We selected six participants (n 6) from the student population at an Australian university campus for the simulated environment demonstration. We used the same methodology from I1 (Vasilevski and Birt, 2019b) and built upon it in terms of the settings and the recorded details during observation and the interviews. As a result, there was more data gathered, and the quality of data improved.
The I3 demonstration took place in situ, and the number of participants was significantly higher than the previous iterations. Forty-two participants (n 42) used the provided smartphone devices to experience the educational artwork tour on their own. The data collection methods were also updated. We collected the quantitative data via pre and post-activity questionnaires (Supplementary Data Sheet). The questionnaire consisted of adapted questions from Hoehle and Venkatesh (2015) original questionnaire with seven-item Likert scales. These were the same concepts from I1 and I2. Relating to AR specifically, the extended reality (XR) user experience questions were also added to the instrument as an adapted version of Birt and Cowling (2018) instrument, validated in Birt and Vasilevski (2021), measuring the constructs of utility, engagement, and experience in XR. Concerning the gamification service, we measured several other constructs such as social dimension, attitude towards the app, ease of use, usefulness, playfulness, and enjoyment, adapted from Koivisto and Hamari (2014). Finally, the quantitative data were subjected to parametric descriptive and inferential analysis in the SPSS software package.
The collected qualitative data was in the form of reflective essays that the participants submitted within 2 weeks after the activity and reflective comments embedded in the postquestionnaire. We used Nvivo software to analyze the qualitative data following the thematic analysis methods (Braun and Clarke, 2006). Observation was also part of the data collection. The above was in line with Billinghurst et al. (2015) AR evaluation guidelines, looking at Qualitative analysis, Usability evaluation techniques, and Informal evaluations.
To choose the appropriate evaluation approach specific to the project, Venable et al. (2016) developed a four-step process: 1) explication of the evaluation goals, 2) choosing the strategy or strategies of evaluation, 3) determination of the properties to be evaluated and 4) designing the subsequent individual evaluation episode or episodes.
We used the Hierarchy of evaluation criteria Prat et al. (2014) to select the properties to be evaluated (see Figure 2). All demonstration activities were evaluated concerning these criteria.
We used the FEDS framework (Venable et al., 2016) to evaluate the artifacts through the strategy of why, when, how, and what to evaluate. FEDS is two-dimensional in nature. The first dimension can be formative to summative and concerns the functional purpose of the evaluation. The second dimension can be artificial to naturalistic and concerns the paradigm of the evaluation. FEDS design process of evaluation follows four steps: 1) explicating the goals of the evaluation; 2) choosing the evaluation strategy(s); 3) determining the properties for Frontiers in Virtual Reality | www.frontiersin.org September 2021 | Volume 2 | Article 713718 evaluation; 4) designing the individual evaluation episode(s). While incorporating the above features, FEDS provides comprehensive guidance on conducting the evaluation. The evaluation trajectory depends on the circumstances of the DSRM project. As mapped on the two dimensions of FEDS (see Figure 3), Venable et al. (2016) propose four trajectories for evaluating with a strategically different approach. These are Quick and Simple, Human Risk and Effectiveness evaluation, Technical Risk and Efficacy evaluation, and Purely Technical Artefact. All four strategies rely on the balance concerning speed, quality, cost, and environment.
Evaluations
The results presented below are concerning the strategies and the techniques that we developed and implemented to evaluate the solution in three cycles. The empirical evaluation results from the demonstration activity are outside the scope of this study.
All iterations included development, refinement, feature addition, and upgrades of the artifacts. The nature of the solution required the inclusion of the human aspect since the inception and followed throughout the process. Regarding the map of the evaluation, the process took a path closest to the Human Risk and Effectiveness evaluation strategy. This trajectory is illustrated in Figure 3, highlighted with green color. At the early stage, the strategy relied more on the formative evaluation and was more artificial in nature. As the artifacts evolved, the strategy path progressed toward a balanced mixture of the two dimensions. After the process was past two-thirds, the evaluation transitioned into almost entirely summative and naturalistic, allowing for more rigor in the evaluation.
As we evaluated the instantiation artifact following the Human Risk and Effectiveness (see Figure 3), the other two model artifacts were evaluated via the instantiation evaluation. In essence, the evaluation compares the objectives and the inquired results from the Demonstration activity. The properties for evaluation were selected from the Hierarchy of evaluation criteria developed by Prat et al. (2014), which were goal, environment, structure, and activity dimensions as relevant for this project. We evaluated the Environment dimension via all the sub-dimensions, consistency with the environment, consistency with the organization, and consistency with the technology. For the Structure dimension, we evaluated the completeness, simplicity, clarity, style, level of detail, and consistency criteria. For the Activity dimension, we evaluated the completeness, consistency, accuracy, performance, and efficiency criteria. Evaluation of all dimensions and respective criteria included the methods explicated in the methodology section. The corresponding methods and the respective criteria that we used for all three iterations, as well as the proposed ones for a fourth future iteration (i4), are presented in Figure 4. In Figure 4 we present the AR component evaluation, however, this can be generalized to other application components, such as VR, gamification and location-based services.
The evaluation process for all iterations is summarized in the five points below: 1. We conducted interviews to investigate if the design meets the requirements and the expectations of the users and the stakeholders. These interviews were timed before, during, and after the artifact development and were vital in collecting feedback from experts and the target users. 2. To evaluate the components and technologies of the artifact, we conducted experiments and simulations throughout all iterations. 3. At the beginning of the development process, we tested the artifact's performance and the ability to meet the requirements in a closed simulated environment. As the process evolved, we refined the artifacts and introduced new features with every iteration, which depended on the previous evaluation episode and the objectives of the solution. In addition, we gathered observation and interview data before, during, and after every activity and used it to debug, refine and upgrade the artifacts. 4. When the artifact had matured, providing sufficient performance, and implemented the key functionalities, we migrated the whole tested testing process in a real environment and with real users. We collected quantitative, observation, and qualitative data, shifting to more summative settings. 5. The evaluation showed that to provide a complete solution and meet the objectives in this project, a fourth iteration was required, where the artifact was to be deployed for the parent study experimental intervention.
FIGURE 2 | Hierarchy of criteria for IS artifact evaluation (Prat et al., 2014). These dimensions and criteria are used to evaluate the solution capability to meet objectives.
DISCUSSION
Regarding the human-centered first evaluation dimension in most occurrences, formative evaluation should be conducted during the beginning of the design and development process. The summative evaluation should be introduced after an artifact is mature enough and passed the basic evaluation. However, this does not exclude summative during the early evaluation and vice versa. As per Venable et al. (2016), the strength of formative evaluation is the reduced risks when the design uncertainties are significant, which in most cases is at the beginning of the process. On the other hand, summative evaluation provides the highest rigor and, consequently, the reliability of the inquired knowledge. Artificial evaluation can include processes such as laboratory experiments, mathematical proofs, and simulations and can be empirical (Hevner et al., 2004), or as in this case, preferably non-empirical.
The benefits of artificial evaluation could be stronger reliability, better replicability, and falsifiability. Moreover, it is inherently simpler and less expensive. However, it has limitations such as reductionists abstraction and unrealism that can produce misleading results regarding the users, systems, or problems. On the other hand, naturalistic evaluation probes the solution capabilities in natural, authentic settings, including real people and real systems. As the naturalistic evaluation is empirical in nature, it relies on case studies, field studies, field experiments, action research, and surveys. The benefits range from stronger internal validity to rigorous assessment of the artifact. The major limitations present the difficulty and cost of conducting the demonstration and evaluation. This could lead to the exclusion of some of the variables, which might negatively impact the realistic artifact efficacy.
The maturity of the artifact allows for more rigorous evaluation by moving to summative and naturalistic evaluations that would include larger sample sizes and realistic and in-situ evaluation environments, in line with Venable et al. (2016). This is an opportunity that should be used to introduce as much diversity as possible in terms of technology as well as the human aspect. As Hevner et al. (2004) suggest, the artifact should be implemented in its "natural" settings, in the organizational environment, surrounded with the impediments of the individual and social battle for its acceptance and use. All this would provide more objective and detailed insight on the performance and the capability of the artifact to solve the problem.
Continuous evaluation of the experiments and simulations is crucial to determine the optimums and the limits of the implementations and the symbiotic fit of the components. The mixed-type data used in the evaluation showed a holistic view of the state of the system and provided a pragmatic base for refinements and upgrades to the artifact as per Hevner et al. (2004). Furthermore, the evaluation of each iteration enabled and informed the next cycle. Thus, the process was cyclic in as many iterations as required to meet the required performance and objectives and ready to provide a solution to the problem in line with Peffers et al. (2007).
To evaluate the evaluation process used here, we look at the three objectives of DSRM. The evaluation in DSRM should: 1) be consistent with prior DS theory and practice, 2) provide a nominal process for conducting DS research in IS, and 3) provide a mental model for the research outputs characteristics. All three objectives are addressed below.
First, the evaluation process is consistent with the extant literature on the subject and best evaluation practices in IS. The evaluation approach was derived from multiple sources that converge on the subject (Pries-Heje et al., 2008;Prat et al., 2014;Venable et al., 2016). It is also consistent with the best practices, using the latest research and practice on usability, technology, and user experience testing (Billinghurst et al., 2015;Hoehle et al., 2016;Pernice, 2018).
Second, the evaluation followed the nominal process of DSRM (Peffers et al., 2007) and the evaluation guidelines. We showed how we evaluated the artifacts throughout the iterations following the process consistent with DSRM. In each iteration, the process worked as intended and was effective for its purposes.
Third, it provides a mental model for the presentation of the Design Science Research output. The evaluation process is explicated to a level that is comprehendible, relevant, and replicable. The steps, activities, and methods were described. The way we designed the evaluation can be used as a design pattern for the evaluation of similar or a variety of design projects. (Venable et al., 2016). The panel shows the possible paths that the evaluation process could take regarding sample size and the settings.
CONCLUSION
In conclusion, our study highlights a robust method of evaluation of gamified AR applications that can be used in education, design, and development. This is supported through our project solution case study for the development of a gamified AR micro-location indigenous artworks-tour mobile app. To the best of our knowledge, we are one of the first to provide a way of evaluating human-centered design artifacts through DSRM. We presented the six activities of DSRM throughout three development cycles, focusing on the evaluation. Our recommendations are based on the latest literature and best design practices, as well as the experiences gained throughout the process. DSRM showed to be an irreplaceable toolkit for designing and developing solutions to complex problems that emerge in intricate environments, such as education. Evaluation, as a critical part of the DSRM process, provides the rigor and robustness that can cope with solving highcomplexity problems. Our research had some limitations in the form of technological limitations and the ability to test the other trajectories for evaluation and provide a broader picture, which had to be noted and could provide a foundation for future research. The evaluation path we showed is compatible with both formative and summative evaluation, as well as the technical or human risk and effectiveness types of evaluation. In the spirit of DSRM, the biggest strength of this study is the knowledge and experience shared, which is novel and provides support for educators and developers looking to design cutting edge solutions. We hope that this is a step towards a structured use of design patterns and the evaluation of gamified AR apps that should inform the artifact evaluation and the design process in a holistic manner in the fields of gamification and immersive technology.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by NV00009 at Health and Medical Research Council (NHMRC). The patients/participants provided their written informed consent to participate in this study. | 5,790.4 | 2021-09-30T00:00:00.000 | [
"Education",
"Computer Science",
"Engineering"
] |
Considerations on the Electrode-Spacing-to-Electrode-Diameter Ratio in Electrical Resistivity Tomography (ERT): An Operational Approach
To develop small-scale, shallow, and high-resolution electrical resistivity surveys (e.g., for archeological or agricultural purposes), the available literature highlights few requirements in terms of distance between two adjacent electrodes or electrode’s length embedded in the soil. Nevertheless, there are no studies about the influence of the electrode’s diameter and/or the electrode’s diameter-to-electrode spacing ratio. Thus, this work proposes to investigate this ratio in relatively small-scale surveys (electrode spacing from 10 to 100 cm) to define an operational approach. The analysis has been conducted comparing the apparent resistivity data acquired by means of electrodes different in terms of diameter and materials. The apparent resistivity was chosen to avoid the introduction of further errors/approximations caused by the inversion procedure. Overall, six different types of electrodes have been employed and tested. The results of the data analysis emphasize the necessity of taking into account the electrode’s diameter-to-electrode spacing ratio in the case of small-case electrical resistivity tomography (ERT) surveys.
resistivity variation [2], [3].Resistivity (or its inverse called conductivity) is the capacity of the rock/soil materials to resist (facilitate, in the case of the conductivity) the passage of a current and it is influenced, among the other, by the degree of saturation and the fluid content, by temperature, lithology, and porosity [1].
This technique is commonly applied in different fields, e.g., hydrogeology [4], environmental investigation [5], natural hazard assessment [6], waste/residual investigation [7], agronomy [8], oil and gas [9], civil engineering [10], dredging engineering [11], and archeology [12].Furthermore, ERT is commonly used also for the reconstruction of the resistivity of other objects instead of the soil.Few examples of these applications are two-phase flow measurements [13], [14] (where the measurand is a fluid in a pipe), vertical flow in pipeline measurements [15], biomedical engineering applications such as anatomical atlas [16], and conductive measurement of thin-film electronic devices [17].
To collect current and voltage data, four electrodes are commonly used arranged on the surface according to different layouts called "arrays" [18].In the last 20 years, many authors have proven that the electrical potential in the subsoil, and thus, the acquired apparent resistivity data and, as a cascade effect, the inverted model are sensitive to the positions of the following: 1) the receivers, i.e., the voltage electrodes, which are usually called M and N (see for more details [12]); 2) the source, i.e., the current electrodes, which are usually called A and B (see for more details [12]).
In particular, a wrong deployment of the electrodes along the ERT line or the finite distance of the remote pole in the case of a pole-dipole (PD) array can generate artifact that can be interpreted as subsoil anomalies (see for instance [19], [20], [21], [22], [23]).
Another typology of error that can affect ERTs is the systematic one that cannot be removed by the averaging/staking of the data.These errors, e.g., the cable leakage or the active electrode length, in fact, are generated by the nonideal procedures or by the measurement systems [24], [25], [26].Thus, this category of errors includes those caused by the electrodes themselves and in particular by the following: 1) the length of electrode embedded in the soil and thus considered "active" with respect to the current generation [26], [27]; 2) the electrode material [2], [24]; 3) the soil-electrode contact [25], [28].The image reconstruction processes (also known as inversion algorithms) usually assume that the size of the electrodes is negligible compared to the distance "a" between two adjacent electrodes or the geometrical parameters of the employed model (i.e., electrodes are usually assumed to be ideal points).However, this assumption is not true, and artifacts can be generated if the part of the electrode embedded in the ground, i.e., the active electrode length, is too long compared to the "a" distance.In the case of long active electrodes, in fact, the model resolution decreases, and the signal-to-noise ratio increases with depth.When ERTs are applied to small-sized targets such as civil engineering and/or cultural heritage artifacts (e.g., diagnostic, management, and restoration or conservation of ancient handworks such as columns, walls, statues, pottery, and so on [2], [29]), or agricultural applications (e.g., understand the root-soil interactions or the temporal soil moisture variation in the first 30-40 cm of soil [8]), they are called small-scale ERT because of the miniaturized dimension of the targets [29].In these applications, a high resolution at shallow depth is needed and it can be achieved thanks to a miniaturization of the instrumentation, which means electrodes with a diameter of few millimeters (e.g., steel nails) and the interelectrode distance "a" of few centimeters.In these applications, where the investigated volume is very small, the nonpunctiform shape of the electrode (i.e., its dimensions compared to the investigated volume itself), if not correctly considered, can generate artifacts [26], [30].In literature, studies can be found, which suggests that if the ratio of active electrode length to "a" spacing is higher than 0.2 [27], a 3-D modeling of the electrodes is needed (e.g., the shunt-electrode model (SEM) or the complete electrode model (CEM) or the conductive cell model (CCM) as in [26] and [31] and references therein).Nevertheless, in [26], this has been demonstrated that, in micro-ERT profiles characterized by a high ratio of the active electrode length to electrode spacing "a," the 3-D modeling of the electrode can be avoided in favor of the approximation of the active electrode shape with an equivalent electrode point (EEP) located at 73% of the depth of the total electrode length.They also demonstrated that "a" should be higher than twice the active length and lower than the characteristic dimension of the shallow heterogeneity divided by 0.75 [26].Ronczka et al. [31] investigate the use of boreholes as long electrodes and, thus, the influence of both different borehole's diameters.They proved that for electrodes with a high length-to-diameter ratio, the diameter to "a" spacing ratio should be lower than 1% to have numerical error less than 1%.Moreover, they demonstrated that combining electrodes of different lengths (e.g., boreholes and surface electrodes), it is possible to increase the reliability of results.
As it is well known, electrodes can be metal stakes or plates, and these last employed when/where it is difficult or not recommended (e.g., on archeological sites) to insert stakes in the soil/structure.Usually, they are made of stainless steel and rarely other metals or graphite [32], [33].In the past, nonpolarizable electrodes were widely used to carry out induced polarization (IP) surveys to reduce the electric noise generated by the subsoils' self-potentials, but nowadays, they are commonly replaced by metal stakes that are more user-friendly with multielectrode tomography acquisitions [34].Nevertheless, according to recent literature [24], [25], the research questions still open in ERT measurements that regard the relationship between systematic errors and the electrode type material, the history of use, and the voltage/current applied.
This work takes its cue from some needs that can be encountered in a geophysical measurement campaign that is not fully addressed in the available literature: first, the necessity to develop a parallel setup, but because of technical issues, there is not the possibility to use the same electrodes in terms of dimensions and materials, and on the other hand, to develop a shallow and high-resolution survey (e.g., for archeological or agricultural purposes) using available material.The latter means that electrodes specifically developed for the purposes of the small-scale survey (i.e., with a very small diameter [27] or gels or sponges) cannot be employed, while those with a diameter of some millimeters have to be used and placed at a relatively small distance (less than 50 cm) to each other.
Thus, the purpose of this work is to investigate the ratio between the interelectrode distance "a" and the electrode's diameter ϕ (the a/ϕ ratio) in a relatively small-scale survey to define an operational approach.The major contributions brought by this research are the following.
1) The introduction of an operational approach to estimate the impact of the electrode's diameter in small-scale ERT surveys.
2) The definition of a range of applicability of electrodes as a function of the electrode spacing to electrode's diameter a/ϕ ratio to avoid possible artifacts in the presence of resistive targets.
3) The proof that the materials used (different types of stainless-steel and carbon electrodes) have no particular effect in the measured resistivity.4) The analysis also pointed out the major impact of inadequate a/ϕ that can be seen in the case of subsoil anomalies characterized by high resistivity.The electrodes employed and tested are those available at the Laboratory of Engineering Geology, University of Florence.To try to answer the research questions above mentioned, the study has been carried out in terms of analysis of the apparent resistivity (ρ a ) data to avoid introducing further errors/approximations caused by the inversion procedure applied to reconstruct the real subsoil resistivity.
Section II illustrates the tested electrodes and the measurement campaign, while Section III describes the obtained results.The discussion of the results and the conclusions of the work are provided in Sections IV and V, respectively.
II. MATERIALS AND METHODS
The goal of ERT method is to assess the subsurface resistivity of the soil through measurements taken on the ground surface.The acquired resistivity values provided by the instrument do not yet represent the true resistivity of the subsurface.Instead, resistivity data are "apparent" values representing the "global" complex mean resistivity of the ground.The acquired measurements depend on the electrode configuration during the measurement campaign.The "apparent" resistivity should then be postprocessed by adequate "inversion" algorithms to reconstruct a 2-D or 3-D model of the subsurface resistivity.The "inversion" procedure involves complex algorithms (including, for instance, convolutional neural network [35], U-Net deep neural network [36], Gauss-Newton method [37], and algebraic reconstruction technique [38]) and several approximations.For this reason, the following analysis deals only with the acquired values of the apparent resistivity in order to avoid the introduction of further uncertainties, dealing only with those caused by the different a/ϕ ratio tested.
Six distinct types of electrodes were tested having diameters (ϕ) from 4 to 16 mm (see specific data in Table I).Five of them were made of stainless steel and one of graphite.The electrical resistance (R el ) of each one was measured by using a calibrated 6(1/2) -Digit Bench Multimeter by Keithley (model DMM6500).Because of the extremely low resistance value of the electrodes, a four-wire resistance measurement method was implemented, with an instrument resolution of 1 µ and a measurement range of 1 .
For every type of electrode under test, five samples have been randomly selected to measure their electrical resistance, with 100 consecutive readings acquired for every sample under repeatability conditions.Considering the random electrode j of type #k, the instrument provides R k_ j and σ k_ j , which represent the mean value and the standard deviation of the electrical resistance, respectively.The average of the mean and standard deviation values for each electrode type is summarized in Table I, while all the measured resistances with associated expanded uncertainty u k_ j are reported in Fig. 1 in the case of 95% confidence level.More specifically, the expanded uncertainty has been evaluated in compliance with the ISO Guide to the Expression of Uncertainty in Measurement (GUM) [39] procedure according to the following steps: where u comb k_ j is the combined uncertainty in the resistance measurement in the case of random electrode j of type #k, which depends on s A k_ j , Type A uncertainty arises from multiple measurements, and s B k_ j , Type B uncertainty due to systematic errors such as calibration errors and instrument inaccuracy.The latter is calculated based on the multimeter accuracy, which, according to the manufacturer and under the specified operating conditions, and it is given by acc k_ j = ±(0.0085%Reading+ 0.02%range).
Then, the expanded uncertainty u k_ j has been calculated considering a coverage factor k = 1.96 due to the assumptions of standardized normal distribution and 95% confidence level as the best tradeoff between precision and width of the confidence interval All the evaluated uncertainties are reported in Fig. 1 as the length of the vertical error bar for each of the five random electrodes and for each type.Furthermore, the average uncertainty on all the electrodes is reported in Table I for each type.As can be noted in Fig. 1, all the stainless-steel electrodes show compatible resistance values, while the variability of the graphite electrodes is much higher.I), to avoid a too long active electrode [26] possibly generating artifacts as discussed in Section I, each electrode was inserted in the soil for few centimeters (2-3 cm) so that the active length was the same in all the ERT acquisitions.
Two different array topologies [the PD and the dipole-dipole (DD)] and four different interelectrode distances "a" (10, 30, 50, and 100 cm) were tested for each electrode type for a total of 48 ERT surveys.For more information about DD and PD arrays, see [12].Each PD-ERT and DD-ERT allowed to measure 986 and 806 values of apparent resistivity (ρ a ), respectively.The spatial distribution in the subsoil of the PD-ERT acquired measurements for an "a" distance of 10, 30, and 50 cm is shown in Fig. 2(b).The area investigated using an "a" distance of 1 m was not shown to avoid losing resolution in the figure.The subsoil investigated by the DD-ERT has the same shape as the PD-ERT one, but lower depth.
In Fig. 2(c), the 806 apparent resistivity acquisitions for the DD-ERT with "a" = 10 cm are shown, and each line of dots represents the so-called pseudo-depth as in [11].It is important to remember here, in fact, that an increase in the electrode spacing "a" allows to reach a higher depth of investigation.but losing resolution (i.e., the distance between two depths of acquisition is higher).
Table II shows the electrode's spacing-to-diameter ratio (a/ϕ) and the electrode's diameter-to-spacing ratio (ϕ/a) for each electrode type and interelectrode distances "a" value.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.The minimum and maximum ϕ/a tested are 0.4% (for electrode type #1 and "a" = 100 cm) and 16% (considering electrode type #6 and "a" = 10 cm), respectively.
During field measurements, the electrode resistance (R el ) becomes a part of the soil-electrode contact resistance (R s−el ), i.e., the resistance that affects the input voltage and thus the input current.R s−el is an indicator of the goodness of the soilelectrode coupling, i.e., lower the values better the coupling.The instrument employed in this study acquires, at each acquisition, the contact resistance between the two current electrodes (A and B, so in the following, it is called R AB ) that is the sum of the two soil-electrode coupling resistances (R s−elA and R s−elB ) and the resistance of the soil in between the two electrodes.Thus, according to [25], it is possible to write the following overdetermined linear system: where X is an (m, n) matrix of the form with m (rows) the number of acquisitions and n (columns) the number of electrodes (equal to 24 for the DD-ERT in this application).In each row of ( 7), there are only two x i j = 1, and all the others are equal to 0, i.e., 1 is assigned to those positions associated with the two electrodes that in that acquisition are working as current electrodes (e.g., in the first row of the matrix X , the elements x 11 = x 12 = 1 mean that electrodes 1 and 2 are working as A and B, while in the second row, x 22 = x 23 = 1, so electrodes 2 and 3 are the current electrodes, and so on).R s−el is the vector (1, n) of the soil-electrode contact resistances, expressed in [k ], to be determined, of the form and R AB is the vector (1, m) of the contact resistances, expressed in [k ] and acquired by the SyscalPro instrument at each acquisition (m = 806 in this application), of the form Being X and R AB known, it is possible to solve the system with the last squares methods and obtain R s−el as Therefore, according to [25] and (10), R s−el of the 24 electrodes involved in each DD-ERT was calculated.It was not possible to calculate R s−el for the PD-ERTs because all the measures have in common the electrode 25.Thus, the X ′ * X matrix is a 25 × 25 matrix with a determinant equal to 0 having values along the diagonal, the last column, and the last row.
As said in Section I, it is known that the resistivity is dependent on the temperature of both the soil and the pore fluid [1], [40].Nevertheless, there are studies (see, for example, [41]) that show as a soil temperature variation of few degrees results in negligible resistivity variations at shallow depth (up to 40 cm) and do not have effects at higher depth.Therefore, in the following analyses, ERT data were not corrected for the soil temperature variation.
III. RESULTS
Apart for electrode type #3 (the one in graphite), R s−el of the stainless-steel electrodes involved in each ERT was in the range 0.1-0.6 k , as shown in Fig. 3. Considering that R el of each electrode type as illustrated in Fig. 1 is approximately five orders of magnitude lower than R s−el , it is possible to state that R s−el is linked only to the local soil conditions and the differences in the electrode materials do not influence the coupling, and therefore, all the other considerations will be drawn in the following.In Figs. 4 and 5, for each of the four tested interelectrode distances "a" (the four panels) and for each electrode type expressed as its a/ϕ value (different colors), the acquired ρ a values are shown as a function of the pseudo-depth [11] for the DD and PD arrays, respectively.The global legend above [Fig.4(a)-(d)] is common to each of the four panels and it illustrates electrode type #.A similar meaning has the global legend located above (Fig. 5).Figs. 4 and 5, in agreement with [30], show that the a/ϕ value does not influence the acquired apparent resistivity ρ a when the interelectrode distance "a" is set equal to 1 m.This means that all the tested electrodes, apart electrode type 6, has a/ϕ value higher than 100 (i.e., the condition of ϕ/a ≤ 1% suggested in [30] for long electrodes is satisfied).This is clearer in the case of PD array (see Fig. 5), but it can be easily appreciated also in the case of DD array (see Fig. 4).In the DD array (see Fig. 4), the influences of the a/ϕ ratio is visible in the first four to five pseudo-depths when "a" is set equal to 30 cm (i.e., all the a/ϕ are lower than 100) and 50 cm (i.e., only one a/ϕ is higher than 100), and up to the last ten pseudo depths when the interelectrode distance is set equal to 10 cm (i.e., all the a/ϕ values are lower than 100).
For the PD array (see Fig. 5), the behavior is comparable, with a higher variation at shallow depth and when the a/ϕ ratio is lower than 80 (that corresponds to ϕ/a ≤ 1.25%).
IV. DISCUSSION
To understand the influence of the electrode spacing to electrode diameter ratio a/ϕ on the acquired apparent resistivity ρ a , the standard deviation of all the measurements acquired with the five different stainless-steal electrodes and for each distance "a," and thus for each point of the subsoil, has been calculated.As recalled in the introduction and shown in Fig. 2, different values of the interelectrode distance "a" allow to obtain ERT profiles of different lengths and thus to reach different depths, i.e., the subsoil distribution of the acquisition is that shown in Fig. 2(c), but the distances between acquisition increase with the increase of "a."Thus, data acquired with different "a" cannot be directly compared because they are not referred to the same subsoil portion.Nevertheless, the number of pseudo-depth levels is influenced by the integer parameter "n" that indicates how many interelectrode spacings "a" there are between the current electrodes and voltage electrodes (e.g., for a DD with A = El 1 , B = El 2 , M = El 3 , and N = El 4 , "n" is equal to 1, i.e., the distance between B and M is equal to "a," while for a DD with A = El 1 , B = El 2 , M = El 4 , and N = El 5 , "n" is equal to 2, i.e., the distance between B and M is two times "a."For more specific information, see also [20], [21], and [22]. Therefore, the pseudo-depth level [i.e., each dot line in Fig. 2(c)] can be seen as a relative depth that allows to compare data measured with a different "a" value.Fig. 6 shows that the standard deviations as a function of the relative depth (from 0, i.e., the surface, to 1, i.e., the maximum depth reached by the DD and PD arrays that are about 5 and 9 m, respectively), which means as a function of the pseudo-depth level.In particular, chosen a value of "a" (different colors in Fig. 6), each point of the graphs in Fig. 6 represents a It is possible to observe the high variability of the standard deviation when "a" = 10 cm (blue dots) up to a relative depth of 0.6 for both DD and PD arrays.This means that the a/ϕ ratio lower than 25 (that corresponds to ϕ/a ≥ 4%) has a major influence on the measures up to a real depth of three times the interelectrode distance.The variability in the standard deviation is still visible when "a" = 30 cm and "a" = 50 cm and reach its minimum (it seems to disappear) for "a" = 1 m, i.e., when a/ϕ value is higher than 100, and thus, the condition of ϕ/a ≤ 1% suggested in [30] for long electrodes is satisfied.In general, for both DD and PD arrays, the standard deviation does not seem to be influenced by the interelectrode distance (and thus by the a/ϕ ratio) at a relative depth higher than 0.6.
Moreover, considering "a" = 10 cm [the blue trends in Fig. 6(a) and (b)] and fixing a relative depth, it is possible to observe that the standard deviation has a great variability (e.g., for the first relative depth, the standard variation ranges from 4 to 34 m).This means that some acquisitions at the same depth are more subject to the a/ϕ ratio variation, and thus, the acquired apparent resistivity values are more spread.Checking for the acquisitions with a greater standard deviation, it is possible to note that they are located at the beginning of the ERT in correspondence with a shallow resistive anomaly [higher acquired apparent resistivity values shown in green in Fig. 2(c)].This result, in addition to what highlighted previously, indicates also that the a/ϕ ratio lower than 25 (that corresponds to a ϕ/a ≥ 4%) has a major influence in those applications where the targets are resistive anomalies (e.g., in achaeo-geophysics).
To better emphasize this concept, Table III summarizes some of the measurements characterized by the highest and lowest standard deviation values for a certain interelectrode distance "a" and a specific depth of investigation.The table includes the acquisition number, the measuring electrodes, and the measured resistivity considering the five stainlesssteel types.The mean value and the standard deviation of the measured apparent resistivity are also included.Looking at the table, it is clear how the highest standard deviation values are always linked to high measured resistivity, while the lowest variability occurs when the measured apparent resistivity is low.This is true regardless of the interelectrode distance "a," the depth of investigation, and the array type.As a matter of fact, similar values are also obtained for all the other acquisitions, the other depths, and the PD array.Thus, they are not included for the sake of brevity.Table III also verifies the concept that emerged from Fig. 6 that the greater variability between the electrode types (and thus the greater impact of the electrode's diameter and a/ϕ ratio) is shown at shallow depth, while the effect tends to decrease when the depth of investigation increases.
Nevertheless, from Fig. 6 and Table III, the information about the a/ϕ ratio is lost.Therefore, to better understand the a/ϕ ratio influence, the probability distributions of the measured apparent resistivity ρ a with respect to different interelectrode distances (each subplot), considering various electrode's spacing-to-diameter ratio (a/ϕ) with different colors, is shown for the DD and PD arrays in Figs.7 and 8, Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.respectively.In each subplot, the same color indicates the same electrode type # according to the legend common to all the subplots above (Figs.7 and 8).Results for DD and PD arrays are in accordance and do not seem to show significant differences.
If the a/ϕ ratio does not influence the acquired ρ a , what is expected is that the probability distributions of data acquired by means of different a/ϕ values are comparable, which means that the dataset has the same median and standard deviation.This is what is shown in Figs.7(d) and 8(d), i.e., for "a" = 1 m.The probability distributions of all the tested a/ϕ are perfectly comparable.This result suggests that the relation ϕ/a ≤ 1% suggested in [30] for long electrodes can be increased up to ϕ/a ≤ 1.6%, which corresponds to a/ϕ ≥ 62.5.Moreover, considering the results in Figs.7(c) and 8(c), it seems that the a/ϕ ratio can be reduced up to 31.5 (i.e., ϕ/a ≤ 3.2%).The probability distribution of the measured data when "a" = 50 cm, in fact, shows negligible differences and can be considered in agreement for the different a/ϕ tested.
An a/ϕ = 31.5means that in the case of electrodes with a diameter of 4 mm, the "a" distance should be more than 12.6 cm, and in the case of electrodes with a diameter of 16 mm, it should be more than 50.4 cm.It is not unusual to employ these "a" distances in agro-geophysics [8] or achaeogeophysics [21], [29], where the target is shallow, and a high resolution is needed [29].Moreover, considering that in micro-geophysics the electrode's diameter range between 1.5 and 2.0 mm [29], an a/ϕ = 31.5means that the interelectrode distance "a" should range between 4.7 and 6.3 cm, respectively.
These results are also confirmed by the probability distribution of the data acquired with "a" = 30 cm (i.e., Fig. 7(b) in the case of DD array and Fig. 8(b) in the case of PD array): the first differences in the probability distribution, i.e., the influence of the a/ϕ ratio, are evident for a/ϕ values lower than 30 (i.e., the red and light blue curves).Finally, the probability distribution of the data acquired with "a" = 10 cm [that corresponds to ϕ/a ≥ 4% and it is shown in Figs.7(a) and 8(a)] shows the highest variability (different mean values and standard deviations) and it is not possible to assess which of them is not affected by the a/ϕ value.On the contrary, it is possible to assess with a quite high degree of confidence that this variability is not influenced by the active electrode length higher than the maximum suggested by [27].These authors, in fact, assessed that in the inversion of micro-ERT profiles, electrodes cannot be approximated as point electrodes but must be considered with their real geometry if the ratio between the active electrode and "a" is not kept well below 0.2.This limit means 2 and 6 cm for "a" = 10 cm and "a" = 30 cm, respectively.
Nevertheless, for operational reasons (i.e., to avoid introducing differences in the different ERTs) in this study, the active electrode length was kept equal for all the ERTs (i.e., about the minimum "a" tested) and the analysis was conducted in terms of apparent resistivity and not real resistivity, assuming that, if an effect of the active electrode was really present, it has to be the same in all the acquisitions.
V. CONCLUSION
This study was carried out to fill a gap in the literature about small-scale ERT and, in particular, about the influence on the acquired data of the ratio between the interelectrode distance "a" and the electrode's diameter (the a/ϕ ratio).Overall, six distinct types of electrodes were employed.The tested electrodes were stainless-steal or graphite stakes with a diameter ranging from 4 to 16 mm.To avoid considering the active electrode length and the generation of possible artifacts induced by not accounting for the real electrode shape, the analysis was conducted in terms of apparent resistivity, and the electrodes were inserted in the soil for few centimeters.
First, this study shows that the soil-electrode resistance (R s−el ) is influenced only by the local soil conditions and the differences in the electrode materials do not influence the coupling.Thus, the differences in the acquired data are not linked to the electrode material but to other factors.Moreover, the analyses of the acquired data with respect to the depth, to their standard deviations, and to their probability distributions highlighted how the a/ϕ ratio has to be ≥31.5 (i.e., ϕ/a ≤ 3.2%) to avoid artifacts in the acquired data and, thus, in the inverted models.
A potential bias of the work could be seen in not having repeated the test in different environments.Acquired data are of course site-dependent, but the purpose of the work was not to investigate the specificities of the site, but to see possible effects of the a/ϕ ratio used.Thus, the analysis was conducted in terms of apparent resistivity and not in terms of inverted resistivity model, and an almost homogeneous site was chosen.Nevertheless, the presence of an unknown shallow resistive anomaly has demonstrated the need to use the correct a/ϕ ratio to avoid possible artifacts, especially in the presence of resistive targets.This result is of particular interest for those applications, such as the archaeo-geophysics, which are conducted primarily to identify resistive anomalies.To evaluate the real influence of resistivity anomalies, possible future analyses could be conducted in a controlled (i.e., artificial) environment as well as numerical forward modeling.Another potential bias could be seen in the selection of the instrumentation, but according to the results [25], acquisition by means of different instruments is comparable.
A limitation of the proposed method lies in having tested only two arrays, the DD and PD ones.Further studies will therefore have to be conducted, considering other commonly used arrays such as the Wenner, the Wenner-Schlumberger, and the gradient [2], [18].Another drawback of the proposed methodology could be linked to the tested materials: even if stainless steel is the most employed one [2] and the results of this study show that the differences in the electrode materials do not influence the acquired apparent resistivity, other materials could be investigated to better generalized the obtained results.
Fig. 2 .
Fig. 2. (a) Photograph of the experimental setup with the acquisition system, the ERT line, and the different electrodes tested.(b) Schematic representation of the subsoil investigated by the PD-ERT with an "a" = 10 cm (blue), 30 cm (orange), and 50 cm (dark yellow).The area investigated using "a" = 1 m was not shown to avoid losing figure resolution in the first meters of the ERT.(c) Real distribution of the acquired apparent resistivity data for the DD-ERT with "a" = 10 cm: each line of dots is placed at the so-called pseudo-depth (see the text for more details).
Fig. 6 .
Fig. 6.Standard deviation of the ρ a values, acquired by the five different stainless-steal electrodes, shown as a function of the relative depth of investigation for different interelectrode distances in the case of (a) DD and (b) PD array.
Fig. 2 (
Fig. 2(c)] and acquired by means of the five stainless-steal electrodes.It is possible to observe the high variability of the standard deviation when "a" = 10 cm (blue dots) up to a relative depth of 0.6 for both DD and PD arrays.This means that the a/ϕ ratio lower than 25 (that corresponds to ϕ/a ≥ 4%) has a major influence on the measures up to a real depth of three times the interelectrode distance.The variability in the standard deviation is still visible when "a" = 30 cm and "a" = 50 cm and reach its minimum (it seems to disappear) for "a" = 1 m, i.e., when a/ϕ value is higher than 100, and thus, the condition of ϕ/a ≤ 1% suggested in[30] for long electrodes is satisfied.In general, for both DD and PD arrays, the standard deviation does not seem to be influenced by the interelectrode distance (and thus by the a/ϕ ratio) at a relative depth higher than 0.6.Moreover, considering "a" = 10 cm [the blue trends in Fig.6(a) and (b)] and fixing a relative depth, it is possible to observe that the standard deviation has a great variability (e.g., for the first relative depth, the standard variation ranges from 4 to 34 m).This means that some acquisitions at the same depth are more subject to the a/ϕ ratio variation, and thus, the acquired apparent resistivity values are more spread.Checking for the acquisitions with a greater standard deviation, it is possible to note that they are located at the beginning of the ERT in correspondence with a shallow resistive anomaly [higher acquired apparent resistivity values shown in green in Fig.2(c)].This result, in addition to what highlighted previously, indicates also that the a/ϕ ratio lower than 25 (that corresponds to a ϕ/a ≥ 4%) has a major influence in
Fig. 8 .
Fig. 8. Probability distribution of the acquired data in the case of PD array, considering that various electrode's spacing-to-diameter ratio (a/ϕ) and different interelectrode distances are (a) 10 cm, (b) 30 cm, (c) 50 cm, and (d) 1 m.
TABLE I TESTED
ELECTRODES: THEIR DIAMETER (ϕ), MATERIAL, MEAN RESISTANCE VALUE, AND RESISTANCE STANDARD DEVIATION
TABLE II ELECTRODE
'S SPACING-TO-DIAMETER RATIO AND ELECTRODE'S DIAMETER-TO-SPACING RATIO FOR EACH TESTED ELECTRODE."a" IS THE ELEC-TRODES' SPACING AND ϕ IS THE DIAMETERS OF EACH ELECTRODE TYPES AS LISTED IN
TABLE III SUMMARY
OF SOME OF THE ACQUIRED VALUES IN THE CASE OF DD-ERT CHARACTERIZED BY THE HIGHEST AND LOWEST STANDARD DEVIATION VALUES FOR A CERTAIN DEPTH AND A CERTAIN "a" | 7,989.8 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Doubled Hilbert space in double-scaled SYK
We consider matter correlators in the double-scaled SYK (DSSYK) model. It turns out that matter correlators have a simple expression in terms of the doubled Hilbert space $\mathcal{H}\otimes\mathcal{H}$, where $\mathcal{H}$ is the Fock space of $q$-deformed oscillator (also known as the chord Hilbert space). In this formalism, we find that the operator which counts the intersection of chords should be conjugated by certain ``entangler'' and ``disentangler''. We explicitly demonstrate this structure for the two- and four-point functions of matter operators in DSSYK.
Introduction
To describe a black hole in AdS, it is useful to consider the doubled (two-sided) Hilbert space of boundary CFT.In particular, the eternal black hole in AdS corresponds to the thermo-field double state [1] which is closely related to the idea of ER=EPR [2,3].Recently, the doubled Hilbert space in JT gravity and the double-scaled SYK (DSSYK) model has been extensively studied in the literature (see e.g.[4][5][6][7][8] and references therein).
In this paper, we consider matter correlators of DSSYK in the doubled Hilbert space formalism.As shown in [9], the correlators of DSSYK reduce to the counting problem of chord diagrams, which is exactly solved in terms of the q-deformed oscillator A ± .The Fock space H of the q-deformed oscillator, also known as the chord Hilbert space, can be thought of as the Hilbert space of bulk gravity theory [10].It turns out that matter correlators of DSSYK have a simple expression in the doubled Hilbert space H ⊗ H.We find that the operator which counts the intersection of chords is conjugated by the "entangler" E and the "disentangler" E −1 (see (4.4) and (4.11)).This structure is reminiscent of the tensor network of MERA [11,12].This paper is organized as follows.In section 2, we briefly review the known result of matter correlators in DSSYK.In section 3, we define a mapping of the operator X on H to the state |X⟩ in the doubled Hilbert space H ⊗ H and rewrite the matter correlators as the overlap ⟨0, 0|X⟩.In section 4, we perform this rewriting explicitly for the two-and four-point functions of matter operators.We find that the intersection-counting operator is conjugated by the entangler and the disentangler as in (4.4) and (4.11).Finally we conclude in section 5 with some discussion on the future problems.In appendix A we summarize some useful formulae used in the main text.In appendix B we explain the derivation of (4.8).In appendix C we prove the crossing symmetry of the R-matrix of U q (su(1, 1)).
Review of DSSYK
In this section we briefly review the result of DSSYK in [9].SYK model is defined by the Hamiltonian for N Majorana fermions ψ i (i = 1, • • • , N ) obeying {ψ i , ψ j } = 2δ i,j with all-to-all p-body interaction where J i 1 •••ip is a random coupling drawn from the Gaussian distribution.DSSYK is defined by the scaling limit As shown in [9], the ensemble average of the moment Tr H k reduces to a counting problem of the intersection number of chord diagrams with q = e −λ .This counting problem is solved by introducing the transfer matrix T where A ± denote the q-deformed oscillator acting on the chord number state |n⟩
.5)
Note that A ± satisfy the q-deformed commutation relations where N denotes the number operator Then the moment in (2.3) is written as The transfer matrix T becomes diagonal in the θ-basis and the overlap of ⟨n| and |θ⟩ is given by the q-Hermite polynomial H n (cos θ|q) where (q; q) n denotes the q-Pochhammer symbol (see appendix A for the definition).|θ⟩ and |n⟩ are normalized as and the measure factor µ(θ) is given by µ(θ) = (q, e ±2iθ ; q) ∞ . (2.12) As discussed in [9], we can also consider the matter operator with a Gaussian random coefficient K i 1 •••is which is drawn independently from the random coupling J i 1 •••ip in the SYK Hamiltonian.In the double scaling limit (2.2), the effect of this operator can be made finite by taking the limit s → ∞ with ∆ = s/p held fixed.Then the correlator of O ∆ 's is also written as a counting problem of the chord diagrams Note that there appear two types of chords in this computation: H-chords and O-chords coming from the Wick contraction of random couplings The O-chord is also called matter chord.
(2.15) Using the relation (2.15) is rewritten as As shown in [9], this bi-local operator commutes with T T, O ∆ e −βH O ∆ = 0. (2.18) The two-point function of matter operator O ∆ is given by Note that only the ℓ = 0 term in (2.17) contributes to the two-point function since Similarly, the uncrossed four-point function is given by (2.20) In the first equality we have used the relation (2.18) and the last equality follows from the fact that only the ℓ = 0 term in (2.17) contributes in this computation when sandwiched between ⟨0| and |0⟩.
The crossed four-point function is given by [9] ⟨0|e (2.21) Here we have suppressed the overall factor q ∆ 1 ∆ 2 coming from the intersection of the O ∆ 1chord and the O ∆ 2 -chord.
Let us take a closer look at the two-point function (2.19).Inserting the complete set 19), the two-point function becomes As discussed in [9,10], |n⟩ represents the state at a constant time-slice of the bulk geometry with n H-chords threading that slice.The factor q ∆n comes from the intersection of matter chord and n H-chords.Thus q ∆ N in (2.19) can be thought of as the operator counting the intersection of O ∆ -chord and H-chords.This operator q ∆ N plays an important role in what follows.
Doubled Hilbert space
As we reviewed in the previous section, the matter correlator of DSSYK takes the form ⟨0|X|0⟩, where X is a linear operator on the chord Hilbert space H spanned by the chord number states |n⟩ In order to study the matter correlators in DSSYK, it is useful to consider the doubled Hilbert space H ⊗ H and regard the operator X as a state |X⟩ in H ⊗ H In terms of the basis {|n⟩} n=0,1,••• , this mapping (3.2) is given by |n, m⟩⟨n|X|m⟩, where |n, m⟩ is the natural basis of In particular, the identity operator 1 corresponds to the state where E is given by (see (2.16) and (A.2)) Note that the state |1⟩ is the maximally entangled state and the operator E generates the entanglement when acting on the pure state |0, 0⟩.Similarly, the operator q ∆ N corresponds to the state with Note that we can append and/or prepend strings of operators as1 where X, Y, Z ∈ End(H) and t Z denotes the transpose of Z We should stress that we do not take the complex conjugation of Z on the right hand side of (3.9); we simply reverse the order of multiplication and take the transpose of Z in (3.9).
As an example of (3.9), let us consider the relation
.11)
Using t A ± = A ∓ and (3.9), we find 2 We can also show that (3.15) From (2.11), the state |1⟩ is written in terms of the |θ⟩-basis as and the state corresponding to the operator e −βT is given by This state |e −βT ⟩ is known as the thermo-field double state.
Matter correlators in the doubled Hilbert space formalism
In this section, we consider matter correlators of DSSYK in the doubled Hilbert space formalism.In general, the matter correlator of DSSYK takes the form ⟨0|X|0⟩ with some operator X ∈ End(H).In the doubled Hilbert space formalism, ⟨0|X|0⟩ is expressed as
Two-point function
Let us first consider the bi-local operator in (2.17), which is the basic building block of the two-point function and the uncrossed four-point function.The state |O ∆ e −βH O ∆ ⟩ corresponding to the operator in (2.17) is given by where we used the summation formula in (A.3).Using the relation 2 The state q ∆ N in (3.7) is reminiscent of the boundary state |Ba⟩ of the end of the world brane [15] |Ba⟩ = 1 (aA+; q)∞ |0⟩. (3.12) As shown in [15], the boundary state |Ba⟩ is a coherent state of the q-deformed oscillator where the parameter a is related to the tension of the brane.
(4.2) is rewritten as where E is defined in (3.6).The appearance of the operator q ∆ N ⊗ q ∆ N in (4.4) is natural since it counts the number of intersections between the H-chord and the matter chord.The important point is that this operator q ∆ N ⊗ q ∆ N should be conjugated by E This conjugation guarantees that the β → 0 limit of the state (4.4) reduces to |1⟩ in (3.5) In other words, the conjugation (4.5) is necessary for the following operator identity to hold3 Following the language of tensor networks, we call E and E −1 as "entangler" and "disentangler", respectively.Our result (4.4) shows that we have to insert the disentangler E −1 before acting the intersection-counting operator q ∆ N ⊗ q ∆ N .In the context of MERA [11,12], disentanglers are usually assumed to be unitary operators.However, our E and E −1 are not unitary.Thus, (4.5) is a similarity transformation, not a unitary transformation.See appendix B for the derivation of this expression.
Crossed four-point function
Next, let us consider the crossed four-point function (2.21) In the doubled Hilbert space formalism, this is written as where E ∆ 1 is defined in (3.8).Again, the operator q ∆ 2 N ⊗ q ∆ 2 N is conjugated by E ∆ 1 in (4.11); E ∆ 1 and (E ∆ 1 ) −1 can be thought of as the entangler and the disentangler associated with the state |q ) is schematically depicted as where the red line and the blue line correspond to the O ∆ 1 -chord and the O ∆ 2 -chord, respectively.In this picture, the bra and the ket are treated asymmetrically and some of the symmetries of G 4 are not manifest in our representation (4.11).In particular, the crossing symmetry ( 12) ↔ (34) of G 4 is not manifest in (4.11).
Conclusion and outlook
In this paper we have studied the matter correlators of DSSYK in the doubled Hilbert space formalism.In our formalism, a matter correlator of the form ⟨0|X|0⟩ is expressed as the overlap between ⟨0, 0| and the state |X⟩ ∈ H ⊗ H corresponding to the operator X, where the relation between X and |X⟩ is given by (3.3).We find that the intersection-counting operator q ∆ N ⊗ q ∆ N should be conjugated by the entangler E and the disentangler E −1 as in (4.4) (or the entangler E ∆ 1 and disentangler (E ∆ 1 ) −1 in the case of crossed four-point function (4.11)).In our representation of a matter correlator ⟨0, 0|X⟩ (4.1), the bra and the ket are treated asymmetrically and hence some of the symmetries of the correlators are not manifest.Nevertheless, the bra-ket exchange symmetry (or crossing symmetry) of the four-point function (4.16) can be shown rather non-trivially by using the Bailey transformation (C.5) of 8 W 7 .
We should stress that our formalism is different from that in [8].The authors of [8] introduced the two-sided chord Hilbert space in the presence of the matter operator, spanned by the states {|n L , n R ⟩} where n L and n R denote the number of H-chords to the left and right of the matter chord.Our |n, m⟩ in (3.4) is not equal to |n L , n R ⟩ in [8].According to the discussion in [14], our |n, m⟩ can be expanded as a linear combination of |n L , n R ⟩ in [8].It would be interesting to find a precise relation between our |n, m⟩ and |n L , n R ⟩ in [8].
The construction of the two-sided chord Hilbert space in [8] is based on a picture of cutting open the "bulk path integral".On the other hand, our formalism is based on a honest, direct rewriting of the known result of matter correlators in [9].At present we do not understand clearly how these two approaches are related.In particular, in our formalism we do not need to introduce the co-product of q-deformed oscillator A ± , which played an important role in the discussion of symmetry algebra in [8].Perhaps, (3.14) and (3.15) might be a good starting point to consider the relationship between the two approaches.We leave this as an interesting future problem. | 2,940 | 2024-01-15T00:00:00.000 | [
"Physics"
] |
Asymptotic analysis of a size-structured cannibalism population model with delayed birth process
In this paper, we study a size-structured cannibalism model with environment feedback and delayed birth process. Our focus is on the asymptotic behavior of the system, particularly on the effect of cannibalism and time lag on the long-term dynamics. To this end, we formally linearize the system around a steady state and study the linearized system by $C_0$-semigroup framework and spectral analysis methods. These analytical results allow us to achieve linearized stability, instability and asynchronous exponential growth results under some conditions. Finally, some examples are presented
and simulated to illustrate the obtained stability conclusions.
1.
Introduction. Population dynamics has been a central fixture in mathematical biology for more than two centuries, starting with Malthus' exponential model of population growth. The main focus of population dynamics has been a characterization of alterations in the numbers, sizes and age distribution of individuals and of potential internal or external causes provoking these changes. Traditionally, structured population models are typically formulated as partial differential equations for population densities. In the last three decades, linear or nonlinear age/size-structured population models have attracted a lot of interest among both theoretical biologists and applied mathematicians. Diekmann et al. have developed a general mathematical framework to study analytical questions for structured populations (see [6,7]), including those pertaining to linear/nonlinear stability of population equilibria. In this context it was recently proven for large classes of structured population models, formulated as integral (or delay) equations, that the nonlinear stability/instability of a population equilibrium is completely determined by its linear stability/instability. In the recent years, Farkas and Hagen successfully applied linear semigroup methods to formulate biologically interpretable conditions for the linear stability/instability of equilibria of size-structured population models (see [11,12,13]). In these problems they assumed that any effect of intraspecific competition between individuals of different sizes on individual behavior is primarily due to a change in population size and that every individual in the population can influence the vital rates of other individuals.
It is well known that, cannibalism, or intraspecific predation, is a phenomenon which occurs in a wide variety of organisms, including many species of protozoa, rotifers, gastropods, copepods, insects, fish, and some species of amphibians, birds, and mammals (see [10]). Sophisticated population models are capable to elucidate a potentially stabilizing effect of cannibalism, underscoring that certain populations may benefit from cannibalism when resources are limited. Consequently, the effects of cannibalism on the long-term dynamics of population have attracted considerable interest and have been analyzed for various structured population models, see [4,14,19] for instance.
The above mentioned research was all based on the assumption that newborns in the population are fertile from birth on. This assumption is justified for a large population of primitive species, but is unrealistic in many other populations of multicellular organisms. Hence the population models with delay in birth process are considered naturally. Some equations, in which the birth process depends on the history of the population appear, for instance, in the models of host-parasite interactions (see [6,7]). Here the delay is given by the time lag between laying and hatching of the parasite eggs. In general size-structured population equations with delay in the birth process occur in models where there is a time lag between conception and birth. Such problems have also been studied in the non-distributed delay case by Swick [30,31], in the nonlinear case by Di Blasio [5], and in the linear case by Guo and Chan [22]. Recently, linear age-dependent model with delay in birth process was treated in [15,28,29], they applied Perron-Frobenius techniques (see [20]) and theory for positive semigroups to establish some stability criteria. In [16,17] the authors have adoptted the similar methods to study the linearized stability problems for nonlinear population models. We would like to mention the well-known books [25,32] for references here.
The model equation involves the vital rates µ = µ(s, E(s, t))-mortality, β = β(s, σ, E(s, t + σ))-fertility and γ = γ(s, E(s, t))-growth rate of individuals. All the vital rates are size dependent. Moreover, it is assumed that the mortality, growth and birth rates of individuals depend on the extra size-specific energy intake due to cannibalism at time t which is given by E(s, t) = m 0 c(y)α(y, s)n(y, t)dy. (
1.2)
Consequently, the model (1.1) is nonlinear. Here, as in [14], we assume that individuals eat their conspecifics and that this cannibalistic behaviour is modelled through the size-specific attack rate α(y, s), which is the rate at which individuals of size s kill and eat individuals of size y. Usually the victim of cannibalism is smaller than the attacker, so α(y, s) should be zero for y > s, but we will make no explicit use of this assumption in what follows. We assume that all individuals are born with the same size 0. While c(y) is the energetic value of an attacked individual of size y. We assume that E is channelled into growth and affects ordinary mortality, that is, mortality not due to cannibalism but for example, due to starvation. Cannibalism leads to an extra mortality in the population. The extra size-specific mortality rate due to cannibalism at time t is given by (Note the switch of variables in α in contrast to (1.2).) In addition, for the remainder of this work we impose the following conditions on the model ingredients: For clarity in later developments we will write D 2 α for the derivative of α with respect to its second argument. The regularity assumptions above are tailored toward the linear analysis of this work. They might, however, not suffice to guarantee the existence and uniqueness of solutions of Eqs. (1.1), even in the steady-state case. Well-posedness of structured partial differential equation models with infinite dimensional environmental feedback variables is in general an open question. It has recently been shown in [1] that population models with infinite dimensional interaction variables may exhibit a more complicated dynamical behavior than the simple size-structured model of scramble competition.
Of interest in this work is to investigate the linearized stability and instability of stationary solutions of the system (1.1) by using semigroup techniques and spectral methods based on the characteristic equation, and meanwhile, the asynchronous exponential growth property (AEG, for short) of solutions is studied based on the spectral analysis as well. We mainly employ the Perron-Frobenius approaches to carry on our discussions and some results on linearized stability and AEG are obtained under proper conditions. It can be easily seen that these obtained results extend and develop the corresponding ones mentioned above. Moreover, the conditions for stability and AEG addressed in this paper are concrete and more applicable than those in paper [14] where the conditions are somewhat abstract. This paper is organized as follows. In Section 2, we propose the nonlinear model (1.1) with delay in birth process and linearize the system. In Section 3, we set the linear system in the framework of semigroup theory, and prove the existence and uniqueness of solutions for the simplified system by showing that the related abstract Cauchy problem gives rise to a strongly continuous semigroup. In Section 4, some regularity properties are derived for the linear system when the attack rate is separable, following that we deduce the characteristic equation in Section 5. Then we discuss in Section 6 the linearized stability and instability of the stationary solution under some conditions. And in Section 7 we devote to discussing the AEG property for the linearized system and present some conditions for AEG. Finally, in Section 8, we provide some simple examples to illustrate the obtained stability results through numerical simulations.
2. The linearized system. The system (1.1) admits obviously the trivial solution n * ≡ 0. Realistically we also expect some additional positive (continuously differentiable) stationary solutions n * for (1.1). In the following portion we formulate a necessary condition for the existence of a positive equilibrium solution of problem (1.1). (2.9) Observing that and substituting (2.9) into (2.8), we obtain Eq. (2.3), that is R(n * ) = 1. Finally, integrating (2.9) from 0 to m, we have (2.10) Taking (2.10) into (2.9), we get the form of (2.6). If, on the other hand, n * is defined by (2.6), then n * is readily seen to be a positive stationary solution.
Remark 1.
Here and later on, starred quantities are stationary counterparts of the time dependent functions in Eqs. (1.1). For obvious reasons, we shall exclusively consider positive stationary solutions of the form (2.6) or the trivial solution n * ≡ 0 in the following. Moreover, to be consistent with later developments, we shall assume throughout that stationary solutions n * have the regularity W 1,1 (0, m).
Given any stationary solution n * of the system (1.1), we linearize the governing equations by introducing the infinitesimal perturbation u = u(s, t) and making the ansatz n = u + n * . Hence u has to satisfy the equations (2.11) Now we linearize the vital rates. To this end, we note that the functional dependence of the vital rates on E rather than on n requires the linearization about E * . Thus by the approximations µ(s, E(s, t)) =µ(s, E * (s)) + µ E (s, E * (s))(E(s, t) − E * (s)) + h.o.t., in system (2.11) and dropping all the nonlinear terms, we arrive at the linearized problem where we have set 3. C 0 -semigroup for linear system. To analysis the asymptotic behavior for linearized system (2.12), we establish in this section the C 0 -semigroup framework for this system and through which rewrite it into an abstract evolution equation. Suppose n * is any positive stationary solution of problem (1.1). We denote the Banach space X = L 1 (0, m) with the usual norm · and on this space we introduce the following operator the boundary operator, which is used to express the boundary condition (see [21,28]). Define the bounded operator B m as On this space we introduce the operator Φ ∈ L (E, C), by setting, for g ∈ E, Then with these operators the linearized system (2.12) can be cast in the form of an abstract boundary delay problem: where u 0 (t) := u 0 (·, t), u : [0, +∞) → X is defined as u(t) := u(·, t) and u t : [−τ, 0] → X is the history segment defined in the usual way as In order to apply the C 0 -semigroup theory, we rewrite (3.1) as an abstract Cauchy problem. For this, on the space E, we consider the differential operator . In addition, we introduce another boundary operator Q : D(G m ) → X defined as Finally, we consider the product space X := E ×X, on which we define the operator matrix where With these notations, we obtain the following abstract Cauchy problem associated to the operator (A, D(A)) on the space X . Here the function U : [0, +∞) → X is given by To obtain the well-posedness of solutions for the abstract Cauchy problem (3.2), in the sequel we will verify that (A 1 , D(A 1 )) generates a C 0 -semigroup on X firstly, then we show that the operator A = A 1 + A 2 generates a C 0 -semigroup by the perturbation theorem.
In the first step, we consider the Banach space X := E × X × X × C and the matrix operator Lemma 3.1 (see [9,27]). Let (A, D(A)) be a Hille-Yosida operator on a Banach space X and B be a bounded linear operator on X, then the sum C = A + B is also a Hille-Yosida operator.
By this lemma we can prove that Proof. The operator A can be written as the sum of two operators on X as A = The restriction (G 0 , D(G 0 )) of G m to the kernel of Q generates the nilpotent left shift semigroup (S 0 (t)) t 0 on E given by the formula Similarly, the restriction (A 0 , D(A 0 )) of A m to the kernel of P generates a strongly continuous positive semigroup (T 0 (t)) t 0 on X given by where We claim that A 1 is a Hille-Yosida operator. In fact, for any λ ∈ C and And similarly, σ(G 0 ) = ∅. So for every λ ∈ C, its resolvent is given by Let then (g f 1 f 2 x) T ∈ X and λ > 0. It is true that Therefore, we have λR(λ, A 1 ) 1, and A 1 is a Hille-Yosida operator.
Since the perturbation operator A 2 is clearly bounded, A is a Hille-Yosida operator as well by Lemma 3.1.
Any Hille-Yosida operator gives rise to a strongly continuous semigroup on the closure of its domain, that is, [27]). Let (A, D(A)) be a Hille-Yosida operator on the Banach space X and set . Then the operator (A 0 , D(A 0 )), called the part of A in X 0 , is the generator of a strongly continuous semigroup on X 0 denoted by (T 0 (t)) t 0 .
According to this lemma, we have actually obtained by Proposition 2 that the operator (A 0 , D(A 0 )), the part of the operator (A , D(A )) in the closure of its domain, generates a C 0 -semigroup on the space E × {0} × X × {0}. Now we show that the operator (A 1 , D(A 1 )) generates a strongly continuous semigroup on X by the following theorem.
In particular, (A 1 , D(A 1 )) generates a C 0 -semigroup on the space X .
Proof. From the arguments following Lemma 3.2, we know that the part (A 0 , D(A 0 )) of (A , D(A )) in the closure of its domain generates a strongly continuous semigroup.
Observe that , Therefore, the operator (A 1 , D(A 1 )) is isomorphic to (A 0 , D(A 0 )) and thus generates a strongly continuous semigroup on the space X .
We can now formulate the main result of this section as follows. Proof. Since the operator B m is bounded on X, we know by the definition of A 2 that A 2 is also bounded on X . Thus it is clearly seen that A = A 1 + A 2 generates a strongly continuous semigroup (T (t)) t 0 on X due to Theorem 3.3 and the perturbation theory.
The following well-posedness result for (3.1) is then a direct consequence of Theorem 3.4. Corollary 1. For initial data u 0 ∈ E the linear boundary delayed problem (3.1) has a unique solution u in C([−τ, +∞), X), given by u(s, t) = u 0 (s, t) for t ∈ [−τ, 0) and where Π 2 is the projection operator of T (t) on the space X.
4. Spectral analysis and regularities. In this section, we will prove two regularity results about the C 0 -semigroup generated by (A, D(A)) when the attack rate α is assumed to be a special form. The first result implies that the spectrally determined growth property holds true and that the linearized stability of the steady-state solution is governed by the location of the leading eigenvalue, while the second result establishes that under certain assumptions on the vital rates the leading eigenvalue will be real rather than complex, which enable us to achieve the concrete conditions for AEG. As mentioned above, to carry on our further discussion in what follows, we make the assumption that the attack rate is separable, i.e.
where α 1 , α 2 ∈ C [0, m], R + . We can interpret that α 1 (s 1 ) represents the likelihood of being attacked at size s 1 , while α 2 (s 2 ) as a measure for the likelihood that individuals of size s 2 attack the others. The particular choice of the attack rate (4.1) allows us to cast the operator B m in the form Proof. Since, by the definition (4.2) of operator B m , we know the operator A 2 is compact on X , it suffices to prove the claim for the operator A 1 .
Next we show that the semigroup generated by A 1 is eventually compact. To this end, we observe the abstract Cauchy problem (3.2) with A 1 becomes With the definition of A 1 , we have With the equation (4.5), v satisfies Therefore, if t > Γ(m) + τ , u is continuous in s and t. Consequently, Eq. (4.5) implies that u is continuous differentiable if t > 2(Γ(m) + τ ). Hence the semigroup generated by A is differentiable for t > 2(Γ(m) + τ ). Since W 1,1 (0, m) is compactly imbedded in X, the claim follows. Theorem 4.1 has the following immediate and noteworthy consequence (see [9,27]). Corollary 2. The spectrum of the semigroup generator (A, D(A)) consists of isolated eigenvalues of finite multiplicity only and the Spectral Mapping Theorem holds true, i.e., σ(T (t)) = {0} ∪ e σ(A) , t > 0. Moreover, the semigroup is spectrally determined, i.e. the growth rate ω(T (t)) of the C 0 -semigroup (T (t)) t≥0 and the spectral bound s(A) of its generator coincide.
Because of Corollary 2 the linear stability of the steady-state solution is spectrally determined (see [9,27]). Hence in the sequel it suffices to investigate the location of the leading eigenvalue of the generator of the C 0 -semigroup (T (t)) t≥0 .
In order to state and prove the second main theorem of this section, we need state several existing lemmas and theorems. For this we introduce two operators as follows. For λ ∈ ρ(G 0 ) ∩ ρ(A 0 ), we define K λ : X → E by K λ := 1 • λ , and L λ : E → X by L λ := (1 • ϕ λ )Φ, or more precisely, K λ (f ) = f · λ for f ∈ X, and L λ (g) = Φ(g)ϕ λ for g ∈ E. The next result was formulated in [26].
The decomposition of the operator λ − A 1 below has also been proved in ( [26], Lemma 2.6).
Then one has the implications (a) ⇐ (b) ⇔ (c).
If, in particular, K λ and L λ are compact operators, then the statements (a), (b) and (c) are equivalent.
Since the operator L λ here has one-dimensional range, it is compact, and hence K λ L λ and L λ K λ are compact too. Then, from Theorem 4.4 we obtain immediately that (2) Furthermore, if λ ∈ ρ(A 1 ) (equivalently 1 ∈ ρ(L λ K λ )), then Proof. We only need to verity (4.7). By the equation of (4.6) in Lemma 4.3, the inverse of (λ − A 1 ) is By the definition of B λ , we have Then the expression (4.7) follows.
Lemma 4.6 (see [9], Theorem VI.1.8). A strongly continuous semigroup (T (t)) t 0 on a Banach lattice X is positive if and only if the resolvent R(λ, A) of its generator A is positive for all sufficiently large λ.
Now we conclude this section by formulating conditions for the positivity of the semigroup (T (t)) t 0 .
To this end, we consider the operator K λ L λ first. By the definitions of K λ and L λ in Lemma 4.2, it is easy to see that lim Reλ→+∞ K λ L λ = 0.
Therefore K λ L λ < 1 for Reλ sufficiently large. Thus the operator (1 − K λ L λ ) is invertible and its inverse (1 − K λ L λ ) −1 is given by the Neumann series. Obviously K λ L λ is a positive operator by the condition (4.9), and hence (1 − K λ L λ ) −1 is positive as well for Reλ big enough. With the resolvent representation of A 1 in (4.7), R(λ, A 1 ) is nonnegative for such λ. Thus, using Lemma 4.6 above, we infer that the operator (A 1 , D(A 1 )) generates a positive semigroup on the Banach lattice E × X. Then we get the assertion.
The positivity and eventual compactness of the C 0 -semigroup (T (t)) t≥0 enable us to draw the following important conclusion.
The characteristic equation.
In the light of Corollary 2, the linearized stability of stationary solutions of the system (1.1) is entirely determined by the eigenvalues of the semigroup generator (A, D(A)) when the attack rate α is assumed to be a special form in the section 4. Hence, in this section, we will derive a characterization of the eigenvalues in the form of zeros of a characteristic equation when the attack rate α takes the form (4.1).
To determine the spectrum of the generator of the semigroup, we substitute u(s, t) = e λt U (s) into the linearized system (2.12). This ansatz gives the equations We multiply equation (5.4) by c(s)α 1 (s) and α 2 (s), respectively, and integrate from 0 to m to arrive at U (0)a 4 (λ) + U 1 a 5 (λ) + U 2 (1 + a 6 (λ)) = 0, (5.7) Meanwhile, inserting the solution (5.4) into the boundary condition (5.2) we also have here the limit is taken in R, then we can formulate the above simple instability criterion, which follows immediately from the Intermediate Value Theorem.
Stability results of nonzero stationary solutions for this kind of model discussed here are much harder to obtain than instability results since a rigorous linear stability proof requires to show that all zeros of the characteristic equation are in the left half-plane of C. Hence it lies in the nature of the stability problem that any answer is generally hard to get by and is usually available for rather special or restricted cases only. Especially, when the birth process here contains time lags, it may bring about much more difficulty for us to deal with this situation. Therefore, only one special case is discussed in the following, in which stability conditions can be achieved relatively easily and simply.
Let us assume that the rate of an individual of size s to attack another individual is proportional to the product of the probability for an individual of size s to be attacked and its energetic value. Mathematically, this condition is modeled by the relation The constant p denotes here the proportionality factor. This condition is biologically relevant since environmental pressure conceivably makes individuals of higher energetic value, which are usually larger in size, not only easier to be attacked, but also more aggressive. In this case, the formulas (5.9)-(5.11) in the characteristic equation ( Proof. First, from the condition (4.8) and (4.9), we see that Theorem 4.7 and Corollary 3 apply now. Thus we can restrict ourselves to λ ∈ R as above. From the representations (5.9)-(5.17) in this case, K(λ) in the characteristic equation (5.18) turns out to be Then, it is easy to compute that lim λ→+∞ K(λ) = 1. Thus if K(0) < 0, n * is not linearly asymptotically stable as stated in Theorem 6.2. If now K(0) > 0, then n * will be linear asymptotically stable as long as we can show that K(λ) > 0 for every λ > 0. In fact, we observe that, for all λ > 0, (1 + a 7 (λ)) > 0, (6.9) and 1 + a from which and making use of conditions (6.7) and (6.8), we deduce the following relations 1 + a 7 (λ) > 0, (6.11) 1 + pa 5 (λ) + a 6 (λ) > 0, pa 8 (λ) + a 9 (λ) − pa(λ) > 0. (6.12) Therefore, from (6.11)-(6.12) and the fact a 4 (λ) < 0, it yields readily that, for every λ > 0, Because of K(0) > 0, we obtain that K(λ) > 0 for each λ ≥ 0, then the result follows.
If the fertility rate β depends only on the environment feedback size at time t rather that at t+σ, that is, β = β(s, σ, E(s, t)), then we can derive similarly that the characteristic equation is (5.18) with a(λ) replaced by (while a 1 (λ) − a 9 (λ) remain the same) More precisely, we have that Theorem 6.4. If β in the second equation of (1.1) is given by β(s, σ, E(s, t)). Let n * be any nontrivial stationary solution of Eqs. (1.1). Suppose that the conditions (4.8), (4.9), (6.7) and (6.8) are fulfilled for all 0 ≤ s ≤ m and −τ ≤ σ ≤ 0. Then n * is linearly asymptotically stable if and only if K(0) > 0. 7. Asynchronous exponential growth. The purpose of this section is to gain a deeper insight into asymptotic properties of solutions of the linearized system (2.12). That is, we will use semigroup techniques and spectral analysis methods to obtain the property of asynchronous exponential growth (AEG for short) for (2.12) which is defined in the framework of semigroup theory as below. The semigroup (T (t)) t≥0 is said to exhibit asynchronous exponential growth if it exhibits BEG with a rank one projection Π.
The phenomenon of AEG appears frequently in age/size-structured populations models (see [2,18,32]). It describes the situation when the population grows exponentially in time but the proportion of individuals within any range of age compared to the total population tends, as time tends to infinity, to a limit which just depends on the chosen range. This is an important characteristic of solutions of population equations both from the theoretical and application point of view.
For positive semigroups there exists the well-known characterization of AEG (see [3]). Our analytical approach will be guided toward the result as follows.
Let us recall here that, a positive C 0 -semigroup (T (t)) t≥0 on X is called irreducible if {0} and X are the only {T (t)}−invariant closed ideals. A useful characterization of irreducibility on L 1 (Ω, µ) lies in Lemma 7.3 (see [9], Theorem VI.1.12). Let T = (T (t)) t≥0 be a positive strongly continuous semigroup with generator A on a Banach lattice space X = L 1 (Ω, µ). If for any f ∈ X with f 0, there has (λI − A) −1 f (s) > 0, for almost all s ∈ Ω and some λ > s(A) sufficiently large, then the semigroup T = (T (t)) t≥0 is irreducible.
Using this lemma we obtain immediately that Theorem 7.4. Suppose that the positivity conditions (4.8) and (4.9) in Theorem 4.7 are fulfilled. Then the semigroup (T (t)) t≥0 generated by A is irreducible.
Proof. From the proof of Theorem 4.7, A 2 and R(λ, A 1 ) (for λ large enough) are all positive due to the conditions (4.8) and (4.9). Since it suffices to prove the irreducibility of the semigroup generated by (A 1 , D(A 1 )). This fact, however, follows immediately from the expression (4.7) as it indicates that the operator R(λ, A 1 ) verifies the condition of Lemma 7.3 on X exactly.
Before we formulate the main result of this section, let us review the notions of essential norm, growth bound, and essential growth bound, and some of their properties (see [9] for more details). Suppose that A is the infinitesimal generator of the strongly continuous semigroup (T (t)) t≥0 on a Banach space X. Then the growth bound of the semigroup is defined by For a linear operator L on X, the essential growth bound ω ess (L) is defined by where α is the measure of noncompactness defined as where K (X) denotes the set of compact linear operators on X. It is readily seen that, for S ∈ K (X), ω ess (A) = ω ess (A + S). (7. 2) The significance of the essential growth bound lies in the central fact that Based on the C 0 -semigroup and the spectral analysis, we are on the position now to show the system (2.12) has AEG under some conditions, namely, γ * (y) dy dσ ds < 1, (7.4) then the semigroup (T (t)) t≥0 generated by A exhibits AEG.
Proof. First we note that A 1 has nonempty spectrum and generates a positive semigroup by the proof of Theorem 4.1 and Theorem 4.7. By similar computation it is easy to obtain the characteristic equation of operator A 1 as follows: Clearly, if restricted to R, ∆ 1 (λ) is a continuous, strictly decreasing, real function having that lim λ→−∞ ∆ 1 (λ) = +∞ and lim λ→+∞ ∆ 1 (λ) = −1.
Therefore, ∆ 1 has a unique real zero λ 0 which is the spectral bound s(A), from which, together with (7.4) and Derndinger's Theorem (see [9]), it follows that On the other hand, we infer from (7.3), Theorem 4.7 and Derndinger's Theorem that (also see the proof of Theorem 6.2 ) ω 0 (A) = s(A) > 0. (7.6) Consequently, in the light of (7.2), (7.5) and (7.6), we have Hence by Theorem 4.7 and Theorem 7.4, the semigroup (T (t)) t≥0 is positive and irreducible with essential growth bound strictly smaller than its growth bound. The assertion follows now immediately from Lemma 7.2.
Remark 2. We point out that, all the results on stability and AEG of solutions for the system (1.1) established in Section 6 and Section 7, as well as the characteristic equation, involve the time delay τ , which shows clearly the dependence of these long time behavior of solutions on the time lag.
8. Examples and simulations. In this section, we present some examples and the corresponding simulations to demonstrate the stability/instability results of the trivial and nontrivial solutions given in Theorems 6.1, 6.2 and 6.3 respectively.
Then K(0) > 0. According to Theorem 6.3, the nontrivial stationary solution n * is linearly asymptotically stable as simulated in Figure 4. 9. Conclusion. In this work we have given a careful analysis of an important linearized size-structured cannibalism population model with delayed birth process. The vital rates in this model depend on a structuring variable (size), which takes values in a bounded set, and on the interaction variable (environment), describing the environmental feedback on individuals. Population models of this type are notoriously difficult to analyze. We would like to point out that the emphasis in the present work was to demonstrate how analytical techniques can be developed and used to treat qualitative questions of physiologically structured population models. In the analysis, we used C 0 -semigroup theory and spectral methods that allowed us to give a rigorous characterization of the linearized dynamical behavior of initially small perturbations of steady state via roots of the associated characteristic equation when the attack rate is separable. The positivity result for the semigroup was based on the decomposition of the operator matrix and the discussion of the resolvent operator under certain conditions for the vital rates. Here, we have formulated linear stability and instability criteria for equilibrium solutions of the model. Besides Theorem 6.1, we have obtained for non-zero stationary solutions an instability result (Theorem 6.2) and two stability criteria (Theorem 6.3 and 6.4) in the case of a separable attack rate. The spectral analysis of the linearized operator allowed us to gain successfully deeper insights into the asymptotic behavior of solutions of the linearized system. In particular, we investigated for the system whether the solutions of the linearized problem exhibit AEG property and gave concrete sufficient conditions as an affirmative answer.
The size-structured population model with delayed birth process studied in this paper improves and extends the earlier problems given in [7], [14] and [28] and elsewhere for simpler population models. And obviously, the effects of delay on asymptotic behaviors of the systems can be explored readily in these obtained results. | 7,490 | 2016-06-01T00:00:00.000 | [
"Mathematics"
] |
Regime-Switching Model on Hourly Electricity Spot Price Dynamics
A robust time-varying regime-switching model for price dynamics of hourly spot price of electricity on the electricity market is developed. We propose a two-state Markov Regime Switching (MRS) model that gives weight to the existence of different variance for each regime. Our model is tractable as it integrates the main features exhibited in the hourly spot price dynamics on the electricity market. The parameters of our hourly spot price of electricity market model are estimated using the Expectation Maximization algorithm. Based on this model, an efficient and tractable pricing technique can be developed to price the dynamics of the hourly spot price of electricity.
Introduction
Electricity, among other commodities, is one of the most important blessings science has given to the world.It is an essential commodity for social and economic development of developing countries.Most small household and manufacturing industries depend on electricity for their activities.According to the World Banks Global Tracking Framework (GTF), released in April 2017, 1.06 billion people live without electricity-a negligible improvement since 2012 (http://www.worldbank.org/en/topic/energy/overview,accessed on 02/09/2017) and this impedes the growth of countries economy due to the over-reliance of most activities on electricity.Crousillat, Hamilton, and Antmann (2010) stated that "eventhough electricity alone is not sufficient to spur economic growth, it is certainly necessary for human development" [1].
The electricity market facilitates the purchase of electricity through bids to buy; sales, through offers to sell; and short-term trades.In the early 1990s, the deregulation of the energy market (electricity market in our case) started in some countries (among others were the United Kingdom, Australia, and Norway) and gradually spread out to the European Union and the United States.This has created competitive markets that boost wholesale trading in most countries.This deregulation instigated substantial elements of risk such as uncertain demand, price risk, and volumetric risk; the principal of them being electricity price volatility.In result of this, there is the need to understand and model the spot price dynamics of the electricity market accurately to aid in an efficient pricing of electricity spots.
Electricity Spot Prices: Markets and Models
The spot price dynamics of electricity show signs of strong seasonality, high volatility, and generally unexpected extreme changes known as "spike" or "jumps" [2] [3].Electricity spot prices (the underlying) show forms of nonlinear dynamics.Among the nonlinearities of the series of the price dynamics are the clustering of large shocks, and non-constant variance.These distinctive characteristics make it difficult for practitioners and researchers to model accurately the spot price of electricity on the electricity market.Most researchers have modelled electricity spot price dynamics using single regime stochastic models where it is assumed that there is no changes in state of the underlying spot price dynamics.But a single stochastic model may not be able to incorporate the dynamics of these electricity spot price dynamics accurately [4] [5] [6].In reality, the underlying can go through different unobservable (latent) states in a particular period of interval.The underlying exhibits switching mechanism that needs different stochastic model for each switching state.Hence, the need to formulate appropriate models that can capture efficiently the electricity price dynamics to help in the proper pricing of spot and futures contract.From Figure 1, it is clear that mean-reversion is the optimal choice for electricity spot price dynamics.To build up the efficiency of Markov Regime Switching (MRS) models to electricity spot price dynamics, we proposed a model that gives weight to the existence of different variance for two Markov Regime Switching (MRS) models.The dynamics of electricity are more complex than normal spot price models allow and it can be noted that in the deregulated market, the dynamics of electricity prices are characterized with a combination of low price behavior and sharp price spikes as illustrated in Figure 1.
Ethier and Mount (1998) presented MRS models to electricty prices [7].Huisman and Mahieu (2003) presented a three regime-switching model that separates price spikes from normal price [8].They indicated that power spikes are short-lived and that stochastic jumps process cannot adequately model the electricity price behaviour.To substantiate their proposed model, they stated 2008) also presented a model that decreases the computational time induced by independent regimes [11].
In this paper, we develop a robust two state regime-switching model with time-varying volatility for the price dynamics of the electricity spot price on the electricity market.The model is mathematically tractable to represent well the characteristics of the spot price dynamics of the electricity market.
Regime-Switching Brownian-"Jump" Model
Suppose in a two independent state regime switching, each state undergoes discrete shifts between states t S of the process.Then t S follows a first order Markov chain with the transition matrix:
P
The transition matrix P contains the probabilities Keeping the stylized features of the spot price dynamics of electricity in mind, we propose a two-state Markov regime-Switching model with base regime driven by a mean-reverting process and a shifted regime driven by a Brownian-"Jump" process.In both regimes, we assume that the volatility of the current spot price is dependent on the current spot price level t X .The "jump" behaviour is as a result of an "extreme" Brownian motion with a greater extreme drift and volatility than the standard mean-reverting regime.The "jump" regime is modelled with a simple Itô process.Given a time interval [ ] 0,T at a finite time horizon [ ] T < ∞ , assume there is trading activities in the electricty market.
Suppose, given a probability space ( ) where λ is the mean-reversion rate of the base regime, 1 α λ is the long-term mean for the spot price reverting to, [12].There is a strong positive correlation between the price level and the price change for a psoitive γ .Assume 1 γ = , then model (1) can be reformulated as By the application of Itô's lemma to model (2), the integral form to the base regime and shifted regime is explicitly given in integral form as
Parameter Estimation
The EM algorithm was first introduced by Dempster, Laird, and Rubin (1977) [13].Estimating the parameters in a MRS models is not trivial since the regime is not directly observable(latent).Hamilton (1990) was the first to apply Expectation-Maximization (EM)-algorithm [14].The EM-algorithm is a comprehensive procedure for finding the maximum-likelihood estimate of the parameters of a distribution from a specified data given that the data is either incomplete or have missing (or hidden) values.The optimal set of unknown parameters to be estimated in the base and shifted regime are , , ,P , ,P θ α σ = respectively.The optimal set of the unknown parameters in the model to estimate is , θ θ Θ = .
Discretization
The discretized version of model 1 in the base and shifted regime is respectively given as ( ) where ( ) be the vector of past 1 k + last values of ( 5) and ( 6), i.e.
(
) . Also, let 1 H + be the size of the past data and Ψ be the equivalent increasing pattern of time at which the data is recorded, i.e.
E-Step
As stated earlier, the regime switching model is latent, hence the inference of the regimes are given by the equations below: , and n = number of iterations.
( ) ( ) is the density process at time k t , conditional on the process in regime i.From ( 2) and ( 5), the base regime has a conditional Gaussian distribution with mean ( )
M-Step
We compute the maximum likelihood estimates ( )
The transition probabilities 1 j P and 2 j P ( ) are estimated based on the formula below From ( 11) and ( 12), the log-likelihood functions of the base and shifted regimes are given respectively as , From (15), each of the parameter in the base regime can be estimated by differentiating the log-likelihood with respect to that parameter.
From ( 16), each of the parameter in the shifted regime can be estimated by differentiating the log-likelihood with respect to that parameter.
Data Description, Results, and Discussion
Historical electricity hourly spot price on the NordPool market is used, specifically we took an hourly data of Oslo.The data set consist of 764 hours spanning from 01/03/2017-31/03/2017.From Table 1, the kurtosis of the data was found to be 24.6572 which is by far greater than the kurtosis of a gaussian distributed data, hence the existence of extreme data points in hourly spot price of the NordPool Electricity.These extreme data points can be described as "jumps".The presence of extreme data points in the hourly electricity data are clearly illustrated in Figure 2, as the normal curve was not able to fit well on the histograph.This shows that the consumption of electricity depends largely on the peak hours and normal hours in a day, hence the need to model electricity spot price dynamics hourly.Also from Table 1, with a skewness of 3.9056, the data is skewed to the right.This also shows that the hourly spot price of the NordPool Electricity is not normally distributed.The parameter estimates for both regimes depends on the hourly spot price of the NordPool Electricity market from 01/03/2017-31/03/2017.The estimated results of the model is found in Table 2.The probability of hourly price remaining in the base regime is very high, 0.8471.With a lower but significant probability of 0.1529, the hourly price will remain in the shifted regime.
Conclusion
In this paper, a two-state Markov Regime switching model for the dynamics of the hourly spot price of electricity is developed.It is clear from the illustrated Figure 3 and Figure 1 that electricity hourly spot price exhibits mean reversion, heteroscedastic volatility in both regimes, price spikes and jumps.Our model is tractable as it integrates the main features exhibited in the hourly spot price on the electricity market.The parameters of our hourly spot price electricity market model are estimated using the EM algorithm.Based on this model, an efficient and tractable pricing technique can be developed to price the dynamics of the hourly spot price of electricity.To the best of our knowledge, our proposed model is the first to consider hourly spot price of electricity.From Figure 2, it is evident that the distribution of hourly spot price of the NordPool Electricity is not normal; hence it will be appropriate to use other distributions like the Normal Inverse Gaussian (NIG) or the Gamma distribution to capture this effect.
Figure 3 .
Figure 3. Calibration of two state MRS model with independent regimes fitted to deseasonalized hourly electricity spot price from the NordPool Electricity market.
Table 2 .
Parameter estimates for the two state MRS model.The parameters are estimated using the EM algorithm based on the hourly spot price of the NordPool Electricity market for 01/03/2017-31/03/2017. | 2,482.2 | 2018-01-18T00:00:00.000 | [
"Engineering",
"Economics",
"Mathematics"
] |
Multi-Criteria Decision Making in the PMEDM Process by Using MARCOS, TOPSIS, and MAIRCA Methods
: Multi-criteria decision making (MCDM) is used to determine the best alternative among various options. It is of great importance as it hugely affects the efficiency of activities in life, management, business, and engineering. This paper presents the results of a multi-criteria decision-making study when using powder-mixed electrical discharge machining (PMEDM) of cylindrically shaped parts in 90CrSi tool steel. In this study, powder concentration, pulse duration, pulse off time, pulse current, and host voltage were selected as the input process parameters. Moreover, the Taguchi method was used for the experimental design. To simultaneously ensure minimum surface roughness (RS) and maximum material-removal speed (MRS) and to implement multi-criteria decision making, MARCOS (Measurement of Alternatives and Ranking according to Compromise Solution), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), and MAIRCA (Multi-Attributive Ideal–Real Comparative Analysis) methods were applied. Additionally, the weight calculation for the criteria was calculated using the MEREC (Method based on the Removal Effects of Criteria) method. From the results, the best alternative for the multi-criteria problem with PMEDM cylindrically shaped parts was proposed.
Introduction
Multi-criteria decision making (MCDM) is a common problem in practice when it is necessary to analyze different options to come up with the best alternative. This problem is posed not only for engineering but also for medicine, business, social sciences, and everyday life. In particular, it has been widely applied in mechanical processing because the machining process is often required to meet many criteria, such as the minimum machined surface roughness (SR), maximum material-removal rate (MMR), minimum cutting force, maximum tool life, or minimum machining cost. In fact, the criteria of a machining process often contradict each other. The requirement to increase the MMR will involve an increase in the depth of the cut and the feed rate, and it will lead to a growth in the surface roughness and a decrease in tool life. In addition, the requirement for a minor surface roughness will lead to a reduction in the depth of the cut and feed rate, and in turn, it will reduce the MMR. Therefore, solving MCDM problems through different methods has attracted many researchers.
The MAIRCA (Multi-Attributive Ideal-Real Comparative Analysis) method was proposed in 2014 [17] for the selection of railway crossings for investment in safety equipment. This method has the advantage that the objective function can be both qualitative and quantitative [6]. In [18], the MAIRCA method was used to rank and select the appropriate location for the construction of ammunition depots. Recently, it has been applied to MCDM in the turning process [6].
The MARCOS approach was recently proposed by Stević, Ž. et al. [19] when choosing sustainable suppliers in the healthcare industry in Bosnia and Herzegovina. This method is used for supplier selection in steel production [20]. It has been used for MCDM for three methods of processing, including milling, grinding, and turning [21]. In [22], this method was used to select suitable gear material and cutting fluid.
A PMEDM process is understood as an EDM process with a dielectric solution mixed with metal powder in order to limit some disadvantages of the EDM process, such as low machined surface quality and small MMR. Like EDM, this type of machining is very effective when processing difficult-to-machine conductive materials and concave parts such as stamping dies and plastic molds. Therefore, there have been many studies on optimization or MCDM of PMEDM processes. The results of MCDM when PMEDM using titanium powder when machining SKD11 tool steel using the Preferred Selection Index (PSI) method have been shown in [23], in which the minimum SR and maximum MRR are selected as criteria. J. Jayaraj et al. [15] presented the selection of the best option when machining Inconel 718 using PMEDM with titanium powder. In this study, two criteria were SR and MRR, and the MCDM method was the TOPSIS method. The TOPSIS method was also applied in [24] when solving the MCDM problem in PMEDM with mixed Si powder when processing EN-31 tool steel.
From the above analysis, it is obvious that there have been quite a few studies on MCDM for mechanical machining processes, including PMEDM, so far. Nevertheless, all of the studies on PMEDM have been carried out when machining concave parts or holes. Up to now, there has been no research on MCDM when machining cylindrically shaped parts, which are commonly used in shaped punches for stamping steel plates or tablet-shaped punches.
This paper introduces the results of an MCDM study when using PMEDM cylindrically shaped parts. In the study, minimum RS and maximum MRS were selected as the criteria for the investigation since RS and MRS are the two most important output parameters and the most popular subjects for the optimization study of mechanical machining processes [25]. Additionally, three methods, including MARCOS, TOPSIS, and MAIRCA, were used for MCDM, and the MEREC method was used to determine the weights for the criteria. The evaluation of the results when solving the MCDM problem with different methods was performed. In addition, the best alternative to obtain minimum RS and maximum MRS simultaneously was suggested.
MARCOS Method
Multi-criteria decision making using the MARCOS method is carried out according to the following steps [19]: Step 1: Forming an initial decision-making matrix: In which m is the alternative number, n is the criteria number, and x mn is the value of the criterion n in the alternative m.
Step 2: Making an extended initial matrix by adding an ideal (AI) and anti-ideal solution (AAI) into the initial decision-making matrix.
In the above equation, AAI = min x ij and AI = max x ij if the criterion j is bigger is better; AAI = max x ij and AI = min x ij if the criterion j is smaller is better; i = 1, 2, . . . , m; j = 1, 2, . . . , n.
Step 3: Normalizing the extended initial matrix (X). The normalized matrix N = n ij m×n can be determined by: (3) Step 4: Determining the weighted normalized matrix C = c ij m×n by using the following equation: where w j is the weight coefficient of criterion j.
Step 5: Determining the utility degree of alternatives K i − and K i + by: In (6) and (7), S i is determined by: Step 6: Calculating the utility function of alternatives f(Ki) by: where f(K i − ) is the utility function related to the anti-ideal solution; f(K i + ) is the utility function related to the ideal solution. These functions can be found by: Step 7: Ranking the alternatives based on the final values of the utility functions to find an alternative with the highest possible value of the utility function.
TOPSIS Method
To apply this method, the following steps need to be taken [3]: Step 1: Using step 1 of the MARCOS method.
Step 2: Determining the normalized values k ij by using the following equation: Step 3: Identifying the weighted normalized decision matrix by: Step 4: Finding the best alternative A + and the worst alternative A − by the following equations: where l + j and l − j are the best and worst values of the j criterion (j = 1, 2, ..., n).
Step 5: Determining the values of better options D + i and worse options D − i by: Step 6: Calculating values R i of each alternative by: Step 7: Ranking the order of alternatives by maximizing the value of R.
MAIRCA Method
To apply the MAIRCA method, it is necessary to perform the following steps [17]: Step 1: Forming the initial matrix in the same way as in the MARCOS method.
Step 2: Calculating the preferences according to alternative selection P A j . It is assumed that there is no preference for alternatives. Then, the priority for the criteria remains the same and it can be found as follows: Step 3: Determining the elements t pij of the theoretical rating matrix by: in which w j is the weight of the jth criterion.
Step 4: Determining the elements of real rating matrix t rij by: Step 5: Calculating the total gap matrix g ij by using the following equation: Step 6: Determining the final values of the criterion functions (Qi) using the alternatives. These values are found by the sum of the gaps (g ij ) according to alternatives or by the following formula:
Calculating the Weights of Criteria
In this work, the MEREC method was employed to find the weights of the criteria through the following steps [26]: Step 1: Forming the initial matrix as in the MARCOS method.
Step 2: Determining the normalized matrix elements by: Step 3: Finding the performance of the alternatives S i according to the following equation: Step 4: Calculating the performance of ith alternative S ij concerning the removal of the jth criterion by using the following equation: Step 5: Calculating the removal effect of the jth criterion E j by: Step 6: Determining the weight for the criteria by using the following equation:
Experimental Setup
To solve the proposed MCDM problem, an experiment was performed. The input process parameters of this experiment are given in Table 1. In addition, the Taguchi method with L18 (2 1 + 3 4 ) design was chosen for the experiment. Figure 1 depicts the experimental setup using a Sodick A30 EDM machine (Japan), graphite electrodes (TOKAI Carbon Co., LTD, Tokyo, Japan), workpieces with 90CrSi tool steel (China), 100 nm SiC powder (China), and Total Diel MS 7000 dielectric solution (France). After conducting experiments, the surface roughness (Ra) was measured, and the material-removal speed (MRS) was calculated. The experimental plan, the measurement values of workpieces SR, and the calculated values of the material-removal speed (MRS) (the average result of three measurements) are given in Table 2.
Multi-Criteria Decision Making
In this section, the MCDM problem is performed using the MARCOS, TOPSIS, and MAIRCA methods, where the weights of the criteria are determined by the MEREC method.
Calculating the Weights for the Criteria
The determination of the weights for the criteria according to the MEREC method is carried out according to the steps mentioned in Section 3. Accordingly, the normalized values hịj are determined by Equations (25) and (26). Additionally, the alternative
Multi-Criteria Decision Making
In this section, the MCDM problem is performed using the MARCOS, TOPSIS, and MAIRCA methods, where the weights of the criteria are determined by the MEREC method.
Calculating the Weights for the Criteria
The determination of the weights for the criteria according to the MEREC method is carried out according to the steps mentioned in Section 3. Accordingly, the normalized values h ịj are determined by Equations (25) and (26). Additionally, the alternative performance Si can be found using Equation (27). Next, Si and S ij are calculated by (28). The criterion-removal effect is then obtained by Equation (29). Finally, the weight of the criteria w j is determined by Equation (30). It is reported that the weights of Ra and MRR are 0.5692 and 0.4308, respectively.
Using MARCOS Method
The application of the MARCOS method to multi-objective decision-making is carried out according to the steps outlined in Section 2.1. First, the ideal solution (AI) and the antiideal solution (AAI) are determined by Formula (2). The obtained values of Ra and MRR are 1.6743 (µm) and 7.1046 (g/h) with AI, and 7.704 (µm) and 0.7306 (g/h) with AAI, respectively. Then, the normalized values u ij are determined according to Formulas (3) and (4). Additionally, the normalized value taking the weight c ij into account is found by Formula (5). In addition, coefficients K − i and K + i are determined by Equations (6) and (7). The values of f K − i and f K + i are determined by Equations (10) and (11). It was found that f K − i = 0.49 and f K + i = 0.51. Finally, the values of f (K i ) are calculated by Formula (9). Table 3 shows the calculated results of several parameters and the ranking of the alternatives. From this table, it is obviously recognizable that alternative A13 is the best alternative.
Using TOPSIS Method
The application of TOPSIS method to multi-objective decision making is guided in Section 2.2. Accordingly, the normalized values of k ij are calculated by Equation (12), and the normalized weighted values l ij are determined by Equation (13). Similarly, the A + and A − values of Ra and MRS are achieved by Equations (14) and (15). It is noted that Ra and MRS are equal to 0.0583 and 0.2058 for A + and 0.2681 and 0.0212 for A − . In addition, the D i + and D i − values were found according to Formulas (16) and (17). Finally, the ratio R i was calculated by Equation (18). Table 4 illustrates the results of the calculation of several parameters and the ranking of the alternatives when using the TOSIS method. It was found that alternative A13 is the best alternative among the given options.
Using MAIRCA Method
Multi-objective decision-making under the MAIRCA method is carried out based on the steps outlined in Section 2.3. After the initial matrix is set up, the priority or the criteria P A j is calculated by Formula (19). Since the criteria are considered equal, the priority for both Ra and MRS is equal to 1/18 = 0.0556. In addition, the value of parameter t p ij is found by Equation (20), with the note that the weight of the criterion is determined in Section 3. The t p ij values of Ra and MRS obtained are 0.0316 and 0.0239, respectively. Then, the values of t r ij is calculated by Equations (21) and (22), and the values of g ij is determined by Equation (23). Finally, the values of criterion functions Q i can be found using Formula (24). Table 5 shows the calculated parameters and ratings of the rating options when using the MAIRCA method. From this table, it can be seen that option A13 is the best alternative. Table 6 presents the ranking results of options when applying three methods, MAR-COS, TOPSIS, and MAIRCA. Moreover, Figure 2 shows a chart used for comparing the results of MCDM by using different methods. The vertical axis represents the values of the quantities when ranking the alternatives by different methods. Specifically, these methods include f (K i ) (when using the MARCOS method), R i (the TOPSIS method), and Q i (the MAIRCA method). From the results, the following observations are proposed:
The ranking order of alternatives is different when using the three methods, MARCOS, TOPSIS, and MAIRCA. 2.
All the three above-mentioned methods lead to the same result, i.e., A13 is the best option, which indicates that determining the best alternative does not depend on the decision-making method used. This observation is also consistent with the results when applying these MCDM methods to the turning process [6].
3.
The MAIRCA and TOPSIS methods have up to 14/18 alternatives rated the same (except for options A5, A7, A8, and A18), which proves that these two methods have given quite similar results and can be used interchangeably.
Conclusions
This article shows the results of a multi-criteria decision-making study when PMEDM cylindrically shaped parts. In this work, 90CrSi tool steel was chosen workpiece material, and 100 nm SiC powder was mixed into the Diel MS 7000 die Moreover, five process factors, including the powder concentration, the pulse-on the pulse-off-time, the pulse current, and the server voltage, were investi Additionally, the Taguchi method with the L18 (2 1 + 3 4 ) design was used to desi experiment, and three methods, including MARCOS, TOPSIS, and MAIRCA applied for multi-criteria decision making. Moreover, the determination of the w for the criteria was performed using the MEREC method. The following conclusion drawn from the research results: 1. This is the first time that MARCOS, TOPSIS, and MAIRCA methods have bee for the MCDM of a PMEDM process when processing cylindrically shaped pa 2. Using all three above-mentioned methods identified the same best alternative 3. The MAIRCA and the TOPSIS methods give quite similar ratings, proving tha two methods can be used interchangeably for MCDM when using PMEDM.
Conclusions
This article shows the results of a multi-criteria decision-making study when using PMEDM cylindrically shaped parts. In this work, 90CrSi tool steel was chosen as the workpiece material, and 100 nm SiC powder was mixed into the Diel MS 7000 dielectric. Moreover, five process factors, including the powder concentration, the pulse-on-time, the pulse-off-time, the pulse current, and the server voltage, were investigated. Additionally, the Taguchi method with the L18 (2 1 + 3 4 ) design was used to design the experiment, and three methods, including MARCOS, TOPSIS, and MAIRCA, were applied for multi-criteria decision making. Moreover, the determination of the weights for the criteria was performed using the MEREC method. The following conclusions were drawn from the research results:
1.
This is the first time that MARCOS, TOPSIS, and MAIRCA methods have been used for the MCDM of a PMEDM process when processing cylindrically shaped parts.
2.
Using all three above-mentioned methods identified the same best alternative. 3.
The MAIRCA and the TOPSIS methods give quite similar ratings, proving that these two methods can be used interchangeably for MCDM when using PMEDM.
4.
It was noted that the optimum set of the input factors for obtaining the minimum Ra and the maximum MRS simultaneously when processing cylindrically shaped parts was C p = 0.5 (g/l), T on = 8 (µs), T off = 12 (µs), IP = 15 (A), and SV = 5 (V).
5.
To further strengthen the reliability of the conclusions of this study, it is necessary to conduct multi-criteria decision-making studies with different weighting methods. Informed Consent Statement: Not applicable. | 4,190.2 | 2022-04-07T00:00:00.000 | [
"Engineering"
] |
Highlights from the ISCB Student Council Symposium 2013
This report summarizes the scientific content and activities of the annual symposium organized by the Student Council of the International Society for Computational Biology (ISCB), held in conjunction with the Intelligent Systems for Molecular Biology (ISMB) / European Conference on Computational Biology (ECCB) conference in Berlin, Germany, on July 19, 2013.
The Student Council (SC), part of the International Society for Computational Biology (ISCB), aims at nurturing and assisting the next generation of computational biologists. Our membership and leadership are composed of volunteer students and post-docs in computational biology and related fields. The main goal of our organisation is to offer networking and soft skill development opportunities to our members.
Meeting format
The Student Council Symposium is a one-day event. As is now a tradition, SCS 2013 began with a scientific speed dating session. During this session our delegates have to find a partner to introduce themselves to, and they must learn a bit about each other's scientific background. They must switch partners and repeat the process every two minutes until the allotted time runs out. The traditional scientific component of the meeting consisted of three oral presentation sessions, each with a keynote talk and several student presentations. During coffee breaks and the dedicated evening poster session participants had the opportunity to network and discuss their work.
At SCS2013, Dr. Alex Bateman (European Bioinformatics Institute, UK), Prof. Satoru Miyano (University of Tokyo, Japan) and Dr. Gonçalo Abecasis (University of Michigan, US) generously agreed to deliver the keynote addresses. In addition, Dr. Cheng Soon Ong gave a short presentation about the research activities at our institutional partner NICTA (Australia).
SCS 2013 received 97 submissions from students, which were peer-reviewed by 24 independent reviewers. Approximately 50 abstracts were accepted for poster presentations, and 10 abstracts were invited to deliver an oral presentation. Abstracts of oral presentations are included in this report. All abstracts are available online in the SCS 2013 booklet (http://iscbsc.org/scs2013/ content/booklet.html).
Keynotes
Dr. Alex Bateman's keynote opened the day, introducing us to the world of molecular biology databases in the 21 st century. Dr. Bateman made a case for the importance of the biocurators' role in modern biological research, considering them to be the "unsung heroes of biology who order and make sense of the immense primary literature for us". In the second part of his talk he gave our delegates an overview of his career path, which included some invaluable advise for young researchers hoping to make their way into a successful research career.
Prof. Satoru Miyano kicked off his presentation after the lunch break by establishing an amusing parallelism between Japanese politics and the development of cancer. He then went on to describe his efforts on shedding light on cancer gene networks by using the impressive capabilities of the Human Genome Center's supercomputer, located at the University of Tokyo.
For the last keynote presentations we were proud to present the 2013 Overtone Prize winner, Dr. Gonçalo Abecasis. Dr. Abecasis gave an insightful overview of the advances achieved during the last 10 years of developments in human genetics, and the role that computational biologists played in such endeavours. He then reviewed the challenges and opportunities that the future of the field holds, putting emphasis on the contributions that young computational biologists are in the position to make.
Workshop
Dr. Thomas Abeel, from the Broad Institute of MIT and Harvard, presented a workshop entitled "Presenting your science visually". During the workshop, Dr. Abeel gave some insight on the available techniques to make data presentations more effective. With illustrative examples, which applied concepts ranging from the Gestalt principles and colour theory to simple tips on how to stand in front of an audience, the presenter brought awareness to the young researchers in the audience about the importance of attending to detail when communicating your results.
Student presentations
In the first student presentation, Shanmugasundram et al.
[7] introduced the LAMP database of apicomplexan metabolic pathways (http://www.llamp.net). The resource makes available to the community the annotated metabolic pathways of eight apicomplexan species, together with a comparative analysis of their metabolisms.
Saccharomyces cerevisiae's gene regulation undergoes major changes in order for the organism to adapt to environmental stress. In their work, Chasman et al. made use of integer linear programming to try to understand the complex signalling network underlying these changes.
Aiming towards a better understanding of the Nonsense mediated mRNA decay (NMD) surveillance pathway, Kahles et al. created a pipeline which constitutes a transcriptome wide, splicing sensitive analysis of NMD in plants. Their results suggest that NMD plays a major role in shaping the transcriptome.
Currently, many computational methods for the conformational analysis of proteins are mesh-based. This makes them incur in the so-called "curse of dimensionality", since their computational cost increases exponentially with the number of atoms in the analysed molecule. Lie [8] proposed a mesh-free method which proves to be on-par with its mesh-based alternatives, while greatly lowering the computational cost involved.
Affeldt et al.
[9] analysed the expansion of "dangerous" gene families implicated in a variety of genetic diseases. They argue that this expansion is a consequence of the families' susceptibility to deleterious mutations and of the purifying selection of post-whole-genome duplication species, and perform a robust statistical analysis to back their claims.
Striving towards a better understanding of the molecular mechanism of the metastatic process in breast cancer, Engin et al. used protein structure and networks at the system level to unravel the complexity of the disease's genotype-phenotype relationship. Their findings may help improve clinical methods for approaching this disease, which the American Cancer Society ranks as the second cause of cancer death among women.
Far from being static entities, proteins often exist in an equilibrium of conformations. Narunsky et al. [10] presented ConTemplate, an automated method to propose putative alternative conformations for a target protein with a single known conformation. The method works under the assumption that pairs of structurally similar proteins may also undergo similar conformational changes. Even though the concept itself is not novel, ConTemplate represents the first automated implementation of the idea.
Medina Rodriguez et al [11] developed an algorithm, alleHap, to reconstruct unambiguous haplotypes from parent-offspring pedigree databases with missing family members. Through simulations, they have demonstrated that the algorithm is both fast, achieving optimum performance even with a large number of families, and robust, tolerating inconsistencies of the genotypic data.
To tackle the challenge of detecting low disease risk variants in genome wide association studies (GWAS), Chimusa et al. developed ancGWAS, an algebraic graph based method to identify significant sub-networks underlying ethnic differences in complex disease risk. Using their method, they were able to replicate previous tuberculosis loci, and to introduce novel genes and subnetworks underlying ethnic differences in tuberculosis risk.
In the last oral presentation, Sreedharan et al.
[12] presented Oqtans: A Multifunctional Workbench for RNAseq Data Analysis. To assist the investigation of the abundance of RNA transcripts and their potential differential expression, the workbench enables researchers to set up a computational pipeline for quantitative transcriptome analysis, which can be integrated into the Galaxy framework.
Award winners
Based on the votes of the SCS delegates, a judging committee awarded four speakers with best oral and poster presentations awards. In the best oral presentation awards the first place went to Han Cheng Lie, for his work "Towards breaking the curse of dimensionality in computational methods for the conformational analysis of molecules" [8]. Second place was for Severine Affeldt, for her work "On the expansion of dangerous gene families in vertebrates" [9].
The first place in the best poster presentation awards was for Maribel Hernandez-Rosales for her work "Simulation of Gene Family histories" [13]. Second place went to Nadezda Kruychkova, for her work "Determinants of protein evolutionary rates in light of ENCODE functional genomics" [14].
Student Council activities at ISMB
In addition to the ones carried out during the SCS day, the Student Council organizes additional activities aimed at all students and young researchers participating in ISMB/ECCB.
Career Central and interactive job postings
The Student Council Career Central, reaching in ISMB/ ECCB 2013 its third edition, aims to expose students to the experiences and success stories of senior researchers. Dr. Jaap Heringa of Vrije Universiteit Amsterdam gave an overview of the development of his career to the attending young researchers, providing advise based on his experience, and answering questions from the audience.
To facilitate the interaction between job seekers and job advertisers, the Student Council organises an interactive job postings board during ISMB/ECCB. Job offers can be attached to the posting board available at the SC booth in the exhibitors hall, and the advertiser can optionally offer time slots where they would be available for short interviews. Great interest was shown on the interactive system, and as a result many short interviews were carried out.
Social activities
On the of Student Council's goals is to help its members develop their scientific social network. During ISMB/ ECCB, the SC organises two types of social events to achieve this goal. The SC Social Headquarters takes place daily in a location close to the main conference venue. It gives young researchers the opportunity to meet members and leaders of the Student Council and interact in a friendly and relaxed environment. The main social event of the Student Council Symposium took place in a typical "biersalon" in central Berlin, where drinks were offered by the SC to attendees. Both SCS delegates and other young researchers joined the social event, resulting in an attendance of over 70 people.
Conclusions
This year's number of submissions and participants were roughly equivalent to those of last year's edition in Long Beach. Given the financially turbulent times for science in particular and for the global economy as a whole, we consider this numbers to have been a great success. Without a doubt, the appeal of SCS 2013 is greatly boosted by the outstanding keynote addresses, the high-quality oral presentations, and the broad poster session. Coupled with the rest of the SC-organised activities during ISMB/ECCB, we are happy to report another successful edition of this now long-running event.
Next year's edition, to be held in Boston, United States, will mark the 10 th anniversary of the Student Council Symposium. Building on our past and present achievements, we will strive to make the next edition of the Student Council Symposium the best one yet. For further information regarding the Student Council, its events, internships and community, please visit http://www.iscbsc.org. | 2,438.8 | 2014-02-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
A full reference APOLLO3 ® deterministic scheme for the JHR material testing reactor
, ABSTRACT JHR is a new material testing reactor under construction at CEA Cadarache. Its high flux core contains 37 fuel assemblies loaded along concentric rings into alveolus of an aluminum matrix. For the operation of the reactor, twenty-seven of these fuel assemblies host hafnium rods in their center while the other ones but also the beryllium radial reflector can accommodate experimental devices. In order to accurately predict its operating core characteristics but also its irradiation performance, a recently developed scheme based on the APOLLO3 ® platform is being developed which uses the sub-group method for spatial self-shielding, the 2D method of characteristics and the 3D unstructured conform MINARET S n transport solver. A 2D model of JHR has been built and optimized for calculating, at the lattice step, the self-shielded and condensed cross sections thanks to the sub-group method and the method of characteristics. Results are benchmarked against a TRIPOLI-4 ® stochastic reference calculation. A more refined spatial mesh gives better results on fission rates and reactivity compared to the ones of the former APOLLO2 scheme. The classical 2-step calculations use the hypothesis of infinite lattice configuration, which is reasonable for the assemblies close to the center but not for peripheral ones. Hence, a new approach is being set up taking into account the surrounding of each assembly. The newly 3-step scheme uses the Sn solver MINARET and gives better results than the traditional 2-step scheme. This approach will be applied to a 3D modelling of the heterogeneous JHR core configurations incorporating experimental devices and enabling burn up
INTRODUCTION
The Jules Horowitz Reactor (JHR) [1] is a new material testing reactor under construction at CEA Cadarache in the south of France. The main objective of this research reactor is to test advanced materials and to demonstrate their ability to withstand proper characteristics under operation conditions and irradiation. This concerns the safety whether it is for new generations (GEN-III and GEN-IV) nuclear reactors or for current generation (GEN-II). Another goal is to produce 99 Mo for medical diagnostics. Neutronics calculations on JHR are routinely performed with a 2-step deterministic scheme [2].
This neutronic scheme is currently based on the APOLLO2 [3]/CRONOS2 [4] codes to carry out respectively lattice and core calculations. The aim of this scheme is to predict JHR neutronics characteristics on the real 3D geometry over time. It will help operators to anticipate the behavior of the core, and to operate it safely. Results of the scheme are benchmarked at the beginning of life against stochastic calculations using the Monte-Carlo method [5] as implemented in the TRIPOLI-4 ® [6] code. Results at burn up conditions are benchmarked against a 1-step deterministic calculation using the method of characteristics.
Currently, the industrial route of the scheme computes the lattice step with depletion for fuel elements with the TDT solver [7] using the Method of Characteristics (MOC) on a refined 2D geometry after a resonance self-shielding treatment. The second step uses the CRONOS2-PRIAM solver based on diffusion theory using condensed/homogenized cross-section generated at the first step. This second step is applied on a homogenized geometry with a 6-group condensed energy mesh. It allows full core depletion calculations.
The new APOLLO3 ® [8] code in development at CEA brings advanced options for deterministic calculations. New solvers are available, such as the unstructured conform MINARET [9] Sn solver, a 2D/3D transport solver based on the discrete ordinates method (Sn) whose spatial discretization is relying on a Discontinuous Galerkin Finite Element Method (DGFEM). A subgroup method for calculating resonance self-shielding is implemented as it was done in the ECCO code. The method is coupled with the TDT flux calculation. It is also possible to create complex core geometry thanks to the SALOME platform [10].
The goal of this work is to design a new full reference deterministic scheme based on APOLLO3 ® to perform neutronics calculation on the Jules Horowitz Reactor. This new scheme will be a 3-step scheme in which condensed/homogenized cross-sections from a MOC-2D JHR models will be used to compute with Sn solver a 3D full core model (2 rd and 3 nd steps). Recent studies have shown that APOLLO3 ® and its advanced options significantly improve predictions, more specifically on the reactivity worth of Hafnium control rods [11]. This requires specific energy and spatial meshes. Cross-sections once self-shielded are condensed on a lattice geometry (1 st step) prior to the MOC-2D calculation. This approach helps to better take into account the surroundings of each fuel element.
In this document, the improvements on MOC-2D core step at beginning of life with the new APOLLO3 ® solver options are presented. Results are compared with TRIPOLI4 ® Monte Carlo simulations. The impact of the 3-step scheme against a classical 2-step scheme is being illustrated.
THE JULES HOROWITZ REACTOR
JHR is a 100MW pool research reactor. The core is made of 37 fuel assemblies loaded along concentric rings into alveolus of an aluminum matrix. Fuel assemblies are made of U3SiO2 fuel enriched up to 27 %.
Physics of Reactors Transition to a Scalable Nuclear Future
Proceedings of the PHYSOR 2020, Cambridge, United Kingdom Three of these assemblies can be removed and replaced by in-core experimental devices. Twenty-seven hafnium rods are introduced at the center of fuel assemblies to control the reactivity. It is also possible to place experimental devices at the center of the remaining fuel assemblies or into the beryllium radial reflector. JHR reaches high flux, up to 5.10 14 n.cm -2 .s -1 .
The JHR fuel assemblies consist of 24 curved concentric plates held together thanks to an aluminum stiffener. Light water flows into these fuel assemblies to cool and moderate the fuel (figure 2).
NEW 3-STEP NEUTRONIC SCHEME
Today, the only way to compute JHR core in 3D in depletion, is a 2-step calculation (figure 3). The first step solves the Boltzmann neutrons transport equation for each type of fuel assembly with specular boundary conditions. Space and energy are finely described, cross-sections are taken from the nuclear data library and a resonance self-shielding treatment is implemented to take into account the deeply resonant structure of cross-sections. The neutron flux calculated at each time step creates homogenized/condensed cross-sections stored into Multi-Parameters-Outputs (MPO) files. During the second step, a flux calculation on a coarser energy mesh is now performed on the 3D homogenized geometry for different burn up steps thank to the information stored in MPO. The core calculation solvers are based on diffusion theory or discrete ordinate method. They allow computing macroscopic core parameters for safety studies, and operation. This scheme allows the calculation of the core whereas a 1-step calculation would be too demanding in computing resource. This methodology is particularly suited to large power nuclear core in which an important number of similar fuel assemblies are present. For the JHR, an important quantity of different types of devices can be inserted into a small core. Depending on its neighbors, and their types, each assembly depletes differently according to the local neutron flux. The infinite lattice description of the fuel assemblies is no longer suitable for predicting the behavior of the core.
The new 3-step deterministic reference scheme is developed specifically for the JHR. The main objective of this scheme is to better take into account the heterogeneous structure of the JHR compared to the current APOLLO2/CRONOS2 scheme. In the first step, the neutron flux is calculated as for the lattice step of the 2-step scheme, using JEFF3.1.1 [12] nuclear data for the resonance self-shielding treatment. Cross-sections are condensed from 383 groups [13] to 41 groups at a fixed burn-up. In the second step, a full core APOLLO3-MOC2D calculation is performed in depletion using cross sections calculated at the first step. This new step provides a 6-group MPO to use in the 3 rd step corresponding to the 3D core calculation. In our application, the core calculations at this 3 rd step is performed with the MINARET-Sn and MENDELdepletion solvers at different burn up steps.
The main difference between this scheme and a 2-step scheme is the intermediate step that calculates the flux on the 2D core. Indeed, in each element of the core, flux shape is different according to its nature and its neighbors. The 2-step scheme takes into account the different natures of the fuel elements, because the MPO is computed for each type of fuel assembly. The 3-step scheme also takes into account the surrounding assemblies. It attempts to describe as precisely as possible the structures around the fuel element with witch it exchanges neutrons. The flux used for homogenization/condensation depending on these exchanges, it is therefore important to consider this surrounding. The global scheme is detailed figure 4. The 2 nd step MOC2D calculation has been presented in [11]. This step has been optimized to improve the results on k-eff and fission rates shape.
APOLLO3® MOC-2D CORE CALCULATION
The core configuration studied in this paper consists of 37 fuel assemblies and 5 Hf control rods. It is a 2D study at time step 0 conducted with the solver TDT-MOC using the Method of Characteristics. The calculation options are summarized in figure 4.
Previous work has revealed two main sources of fission rates bias in the preparation of the condensed cross sections (now treated in the two first steps) compared to TRIPOLI4 ® . This bias is located on the one hand near the hafnium rods, on the other hand near the Zr screen. The analysis have shown that the first part of
Physics of Reactors Transition to a Scalable Nuclear Future
Proceedings of the PHYSOR 2020, Cambridge, United Kingdom the bias is mainly due to a too coarse description of the large resonance of 178 Hf at 7.8 eV in the 383-energy mesh. Investigation of the second part of the bias have shown that it partly comes from the water sheet between the Zr screen and the reflector.
Due to its strong anisotropy, it is difficult to predict the neutron flux near the water sheet. To improve the prediction near the reflector, a study of the sensitivity of k-eff on the number of meshes in the reflector has been conducted. It shows the need for a more refined mesh in the reflector as shown in figure 5. The refinement is done mainly for the water sheet and the Be reflector. In these areas, the flux variations are the biggest because they make the transition between the high flux in the core and the low flux in the pool.
Figure 5. Comparison between the former and the new spatial mesh for the MOC2D core calculation.
The results on fission rates for each fuel plates are compared for the two meshes on figure 6 to the stochastic reference ones calculated with TRIPOLI-4 ® . We can see an important improvement of the prediction of the fission rates per fuel assemblies. This better agreement is particularly viewable at the surrounding of Zirconium shield. The more refined radial mesh allows a better description of the flux variation in the water sheets and in the Be reflector, which impacts the fission rate calculation. In fact, 6 rings per water sheet are necessary to describe accurately the flux in these regions. The new spatial mesh for the MOC calculation gives also a better prediction of the reactivity. For the former mesh, the reactivity bias compared to TRIPOLI-4 ® is -45 pcm (keff TR4 = 1.39490). For the new scheme, the bias in reactivity is -31 pcm.
APOLLO3® SN CALCULATION
As explained on figure 4, the MOC-2D core calculation is used to generate homogenized/condensed MPO for a Sn 3D core calculation. The APOLLO3 ® Sn solver is MINARET. The performances of the 3-step scheme are tested with a MINARET 2D calculation on the same configuration than the MOC calculation. In this part, the biases with the stochastic reference between the 2-step and the 3-step scheme (table 1) are compared. This will lead in the future to MINARET 3D calculations.
The MINARET solver uses a conformal triangular mesh to discretize the geometry as shown in figure 7. The cross-sections are obtained from the lattice step for the 2-step scheme, and from the MOC-2D core calculation for the 3-step scheme. The cross-sections for the reflector structures are homogenized/condensed in both cases from the MOC-2D core calculation.
To reach an acceptable running time, cross sections are condensed into six groups. Homogenization is limited, because we want to keep separate each fuel plate (888 volumes). This is a requirement for the scheme design, because we want to compute the power of each plate to obtain the power shape.
To estimate the separate effects of condensation and solver options, the results are compared without condensation at 41 groups with a MOC2D calculation and a MINARET2D Sn calculation. (1) and (2) shows a small bias between the Sn and MOC solvers for our model. The bias on each plate is under 4%. The 6-group condensation slightly degrades the prediction, but divides by 13 the execution time (3).
Taking into account the neighboring of each assembly in condensed cross sections with the 3-step calculation limits the degradation, but keeps the same execution time (4). The improvement is greater for fuel assemblies close to the reflector, because the real surrounding of these fuel assemblies is the most different from the infinite lattice one. For assemblies at the core center, the differences are s between the two calculations. Figure 8 shows the normalized flux used to condense the 41-group cross sections into 6 groups for the 8 th plate of the center assembly (001), and of an assembly close to the reflector (315). These two assemblies are identical, but the flux strongly depends of the surrounding, especially for thermal energies. For assembly 001, the infinite lattice approximation seems to be justified, but for assembly 315, the real flux is much more thermal than predicted by the infinite lattice calculation.
Figure 8. Flux in the eighth fuel plate
The 6-group energy mesh is also shown in figure 8. The fissions occur mainly in the group [1.0E-5eV, 1.378E-1eV]. The bias observed on the flux in this group is passes on the 235 U condensed fission cross section used for the MINARET calculation. Compared with the cross section obtained with the infinite lattice model, the bias for element 315 reaches 5 % but only 0.2% for element 001 (table II). This bias contributes to explain the difference between the 2-step and 3-step calculations.
CONCLUSION
To deal with the very heterogeneous JHR core, a new 3-step deterministic scheme based on the APOLLO3® code has been designed. Compared to the traditional 2-step scheme, the new 3-step scheme takes into account the surrounding of each fuel assembly. This new methodology uses a MOC-2D core calculation to generate homogenized and condensed cross-sections stored in a multiparameter reactor database. The 3-step scheme improves the fission rates predictions at the periphery of the core compared to the 2-step one. It limits the impact of the energy mesh condensation and gives a much more accurate and faster computation of fission rates with MINARET solver. The reduction of the bias on fission rates (vs TRIPOLI4 Monte Carlo results) near to the Zr screen of the reflector reaches ~1.4% against more than 3% without the new spatial mesh.
Finally, this fast Sn calculation with MINARET enables calculating 3D core configurations with experimental devices and control rods. It paves the way to a full reference APOLLO3 ® deterministic scheme for the JHR material testing reactor.
ACKNOWLEDGMENTS
APOLLO3 ® and TRIPOLI-4 ® are registered trademarks of CEA. We gratefully acknowledge Framatome and EDF for their long-term partnership and their support. | 3,649.8 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Realization of Virtual Human Face Based on Deep Convolutional Generative Adversarial Networks
At present, the generative adversarial networks research that generates a high confidence image for a large number of training samples has achieved some results, but the existing research only performs image generation for known training samples, but does not use the training parameters for image generation other than training samples. This paper uses the TensorFlow deep learning framework to build deep convolutional generative adversarial networks to complete the generation of virtual face images. From the experimental results, it can better generate virtual face images similar to real faces, which provides new ideas and methods for the research of generating virtual images.
X Other: The authors' request.
Results of publication (only one response allowed):
are still valid.X were found to be overall invalid.
Author's conduct (only one response allowed): honest error academic misconduct X none
Introduction
In 2014, the scholar Ian Good fellow proposed the concept of generative adversarial networks [1].Once proposed, generative adversarial networks quickly became a research hotspot in academia.The learning style of the collaborative confrontation network is unsupervised learning, but it avoids the difficulty of unsupervised learning very well, because each of the generative confrontation networks contains such a pair of models: the generation model and the discriminant model.Because of the discriminant model, the generated model can learn to approximate the real data without using the prior knowledge for complex modeling, and finally make the generated data reach a false degree.
Radford et al. published a paper in 2015 that presented deep convolutional generative adversarial networks and used them for LSUN scene recognition challenges, Mnist handwritten numbers, and SVNH data sets [2].In the LSUN dataset, the indoor scene image is successfully generated.On the Mnist and SVNH datasets, in order to verify the validity of the feature representation of the deep convolutional generative adversarial networks, the feature representation is input into the L2-SVM.Comparing the obtained classification results with other unsupervised algorithms, the results show that the highest accuracy of deep convolutional generative adversarial networks is 82.8%.At the same time, the paper points out that deep convolutional generative adversarial networks can be used to control the generation and disappearance of specific objects in a picture.
The principle is that in the hidden space, if we can know which variables control an object, by blocking these variables, we can make a specific object in the picture disappear.Zhang et al. [3] used deep convolutional generative adversarial networks to create a text-to-image application in 2016.Generative adversarial networks generated related images based on specific sentences entered.The input of the generator is not only random noise, but also it includes specific statement information, and the discriminator identifies whether the sample is true or not and whether it matches the specific statement information.Isola et al. [4] applied generative adversarial networks to various existing deep neural network projects, such as segmenting images to restore original images, coloring black and white images, and coloring according to textures.Coutinho et al. [5] used generative adversarial networks for privacy protection, and Odena et al.In 2015, Google Inc. opened up its internal deep learning framework Tensor-Flow [7], which has been applied to scenes such as image recognition, image segmentation, speech recognition, and natural language processing, and has achieved good industry results.The TensorBoard function in TensorFlow makes it easy to collect and visualize data such as loss values during training.This paper is based on the TensorFlow platform for the research and implementation of deep convolutional generative adversarial networks [2].Adding a convolutional layer, a de-convolution layer, and using batch normalization based on the original generative adversarial networks makes the learning process faster and more stable.
Theoretical Basis for Deep Convolutional Generative
Adversarial Networks (DCGAN)
Generative Adversarial Networks
The generative adversarial networks are composed of a generation model and a
R E T R A C T E D
discriminant model and are also known as generators and discriminators.The generator and discriminator compete with each other with the common goal of generating data points that are very similar to the data points in the training.
The training process of generative adversarial networks is: the generator generates samples similar to real data from random noise or latent variables, and the discriminator judges the generated data and the real data, and feeds the result back to the generator.The generator and the discriminator are trained at the same time until a Nash equilibrium is reached, that is, the generated data generated by the generator is almost the same as the real sample, and the discriminator cannot accurately distinguish the generated data from the real data [8].The structure of generative adversarial networks is shown in Figure 1.
In the process of training, the goal of the generator is to generate real data as much as possible to deceive the discriminator, and the target of the discriminator is to distinguish between the generated data and the real data.If it is true output 1, if it is false output 0, the two constitute a process of mutual game.The final result of the game is that the generator can generate false data, and the discriminator cannot distinguish between the real data and the generated data, that is, the discriminator output is 0.5, which is equivalent to guessing for distinguishing between true and false data.
The above is the core principle of generative adversarial networks, and the mathematical expression [8] is shown in Formula (1) below: In Formula (1), x represents the actual data of the input, z represents the noise added to the generator, and
( )
G z represents the data generated by the gene- rator.
R E T R A
C T E D networks, using the random gradient descent method [9] training, the first training generator, using the gradient method, the larger the expected ( ) , V G D , the better, the second step is to train the discriminator, use subtracting the gradient method, it is expected that the smaller ( ) , V G D is the better.When the most desirable result is reached,
Deep Convolutional Generative Adversarial Networks
Although generative adversarial networks solve the problem of unsupervised learning that needs to consider prior knowledge, distribution, etc., there are some problems with generative adversarial networks.First of all, the generative adversarial networks have no convergence problem.When the generator and the discriminator use the neural network representation, when there is no equilibrium, the two are always in the process of adjusting the parameters.Deep convolutional generative adversarial networks have the following changes compared to generative adversarial networks: 1) It removes the fully connected layer, adopts a full convolutional network structure, replaces the space pooling with a step size convolution on the discriminator, allows the identification of network learning space down-sampling, and uses micro-step convolution on the generator, allowing learning to generate spatial up-sampling of the network; 2) It introduces the batch normalization, batch standardization in the generator and discriminator to make the data more centralized, do not worry about the data being too large or too small, improve the learning efficiency, solve the training problem of poor model initialization, and make the gradient spread deeper and prevent the builder crashes at a certain point during training; 3) In
R E T R A C T E D
terms of the activation function, the generator output layer is activated using the Tanh function, the other layers are activated using the ReLU function, and all layers of the discriminator are activated using the Leaky ReLU function.
Deep convolutional generative adversarial networks are initialized with random weights, so random input will produce a completely random image, but the network has a lot of parameters that can be adjusted.Therefore, our goal is to set the parameters so that the images generated according to the random input can be very similar to the real training data, that is, the purpose is to match the generated data with the distribution of real training data in the image space.
Training Model
The TensorFlow deep learning framework provides a variety of APIs for convolution, de-convolution, and activation functions for neural network construction.Using the neural network library provided by TensorFlow, we can quickly and easily construct the neural network we need.
Preparation of TF Records Format of Data
Data preprocessing is the first step in this experiment.The link provided by the official website downloads the CelebA data set.The size of each face image is 178 × 218 × 3, and the input of the discriminator neural network is 64 × 64 × 3.
Therefore, the face image needs to be cropped and then converted into a dataset in the TF Records format.In addition, since the number of face images in the CelebA dataset is as high as 200,000, the dataset needs to be divided into multiple TF Records format files.In this paper, it is divided into 12 TF Records files.
The data preprocessing flowchart is shown in Figure 3.
Definition of Network Layer
In deep convolutional generative adversarial networks, the generator uses a deconvolutional neural network as the network layer, and the discriminator uses a convolutional neural network as the network layer.
The implementation of the deconvolution neural network in the generator is achieved by calling the conv2d_transpose method in the TensorFlow neural network library to achieve weight multiplication, and using bias_add to implement offset addition.
The implementation of the convolutional neural network in the discriminator is to achieve weight multiplication by calling the conv2d method in the Tensor-Flow neural network library, and bias addition is implemented using bias_add.
Definition of Generator
Use the deconvolution layer defined above, you can build the generators needed for deep convolutional generative adversarial networks.The final output is 64 × 64, so set OUTPUT_SIZE to 64.The recursive moving step size 2, the output of each layer is expanded by four times than the input, so that the output size of each layer can be 32 × 32, 16 × 16.8 × 8, 4 × 4, respectively.The value of BATCH_SIZE is set to 64; the GF is set to 64, and the number of feature maps is 512, 256, 128, 64, respectively.Finally, the structure of the generator is shown in
Definition of Discriminator
Using the convolutional layer defined above, you can construct the discriminator required by deep convolutional generative adversarial networks, the picture input convolutional layer, the number of convolutional layer output feature maps, and the convolution kernel moving step size is 2.After the convolution process, the output is reduced to the original quarter, so the output sizes of the convolutional layers are 32 × 32, 16 × 16, 8 × 8, 4 × 4, respectively, and the number of feature maps is 64, 128, 256, and 512, respectively.The resulting discriminator structure is shown in Figure 5.
Training
The training process code is written primarily for the definition of the training
R E T R A C T E D
In this paper, the experiment activation function uses the sigmoid function to calculate the loss value generated during the training process by calling the sig-moid_cross_entropy_with_logits method in the Neural Network of TensorFlow.For the discriminator, the prediction from the real input should be 1 and the input generated from the generator is 0; for the generator, the discriminator should be predicted to generate data for it, and d_loss_real is the real data input to the discriminator.The cross entropy between the result and the expected result of 1; d_loss_fake is the cross entropy between the result of the generator generating data input to the discriminator and the result expected to be 0; d_loss is the sum of d_loss_real and d_loss_fake; g_loss is the result of the generator generating data input to the discriminator and expected to be 1.The result is the cross entropy between the two.The optimization algorithm selects AdamOptimizer [10], and its adaptive non-convex optimization feature is suitable for modern deep learning, without the need to manually adjust the learning parameters and other hyper-parameters.
Data collection includes Loss values, discriminator histograms, generator histograms, etc., mainly using TensorFlow's summary operations, including sca-lar_summary, histogram_summary, etc.Once you have completed these definitions, you can write data traversal code.Each epoch is sampled according to the specified batch, and the optimization algorithm is run to update the network until the end of the process.In this paper, set the epoch value to 5 and the batch value to 3072.The current Loss value is output every 20 steps.The current generator is tested every 100 steps to generate 64 virtual faces and save the generated virtual face image.The model is saved once every 500 steps using TensorFlow's Saver() method.
Virtual Human Face Generation Effect
The data used in this experiment is the CelebA dataset.The CelebA dataset is a large face attribute dataset developed by the Chinese University of Hong Kong.It contains more than 200,000 celebrity face images, each with 40 attribute annotations.Training a total of 5 epoch, each epoch has 3072 training pictures, each 100-step sampling generator, and finally can get a total of 155 generator generated test charts, a partial screenshot of the test chart shown in Figure 6.Each test chart contains 64 virtual faces.
You can see that the generator test chart with epoch is 0 and the number of steps is also 0. As shown in Figure 7, you can see the generation of 64 100-dimensional random data after processing by the generator.It failed to generate a face image.
Looking at the test chart with epoch 0 and 3000 steps, as shown in Figure 8, it can be seen that after a round of training, the generator has been able to generate the basic face of humans.However, it is obvious that the generated face is relatively deformed, and the "eyes" have not yet been correctly generated.Most of the faces of the faces are replaced with black shadows.For example, Chinese researcher Gou Chao proposed using simulated images and real images as sample data for neural network training, which is used to implement human eye detection [10].
Conclusion
Based on the TensorFlow deep learning framework, this paper studies the deep convolutional generative adversarial networks and realizes the generation of virtual face images to obtain better results.In addition to image generation, the application of deep convolutional generative adversarial networks in the field of image super-resolution can be considered in the future.The mechanism of image super-resolution is that high-resolution images with clear and rich detail are output by transforming low-resolution images.The problem with current image super-resolution is that high-frequency details are lost during the conversion process, which is contrary to its purpose.However, generative adversarial net-
R E T R A C T E D
[6] used deep convolutional generative adversarial networks for image synthesis.Gene Kogan used deep convolutional generative adversarial networks for the generation of Chinese calligraphy characters and achieved certain results.Some scholars have used deep convolutional generative adversarial networks for super-resolution image reconstruction.The basic idea is to input low-resolution images and then output high-resolution versions.
Figure 1.Basic structure of generative adversarial networks.
based on deep convolutional generative adversarial networks consists of two main parts.One is data preprocessing, which converts the CelebA face data set into 12 TF Records format files.The second is to define the relevant network layer, build the generator and discriminator, and write the training code to start the training of deep convolutional generative adversarial networks.Figure 2 is a flow chart of the experiment in this paper.After getting our training model, you can use Tensor Board to view our model training loss value, neural network structure and other visual content, and you can also call the trained model to generate virtual face and view the model training effect.
process and the data acquisition process.The training process definition includes calling the network model to generate data, defining activation functions, and defining optimization algorithms.The data acquisition process definition includes data graph type selection, definition of acquisition time, and the like.
In this paper, the model obtained by deep convolutional generative adversarial networks training shows that the virtual face similar to the real face can be generated better, and the most direct application in the future is face modeling.The model data generated by the distribution of the real data of the face is generated by the model, and the face model database is constructed to solve the problem of insufficient annotation data.Secondly, thanks to the confrontation training mechanism of generative adversarial networks, virtual face images generated by deep convolutional generative adversarial networks can solve the problem of insufficient data sources faced by traditional machine learning.Therefore, virtual face images can be used in the field of semi-supervised learning in the future.
[2]ondly, generative adversarial networks are difficult to train.Because there is no loss function, it is difficult to judge whether progress has been made during training, and the training process may have a collapse problem, causing the generator to degenerate, continuously generating the same sample points and unable to continue learning, and then affect the training of the discriminator.Finally, because generative adversarial networks do not require prior knowledge to model, the process of training the model is too free and uncontrollable.In order to solve the problem of unstable construction training against network training, Yang Yu et al. attempted to extend the generative adversarial networks using a supervised learning convolutional neural network architecture[2].Convolutional neural networks have good performance in deep learning tasks with supervised learning, but are less used in unsupervised learning.After the use of generative adversarial networks to convolutional neural network architecture, the construction of generative adversarial networks can be achieved by using existing supervised learning tools to build deep convolutional generative adversarial networks, which greatly shortens the time of network construction of generative adversarial networks.And it improves the model construction and operational efficiency.Eventually they found an architecture that was able to train steadily on multiple datasets and generate higher resolution images, which were eventually named deep convolutional generative adversarial networks. | 4,130 | 2018-08-22T00:00:00.000 | [
"Computer Science"
] |
Printing Double-Network Tough Hydrogels Using Temperature-Controlled Projection Stereolithography (TOPS)
We report a new method to shape double-network (DN) hydrogels into customized 3D structures that exhibit superior mechanical properties in both tension and compression. A one-pot prepolymer formulation containing photo-cross-linkable acrylamide and thermoreversible sol–gel κ-carrageenan with a suitable cross-linker and photoinitiators/absorbers is optimized. A new TOPS system is utilized to photopolymerize the primary acrylamide network into a 3D structure above the sol–gel transition of κ-carrageenan (80 °C), while cooling down generates the secondary physical κ-carrageenan network to realize tough DN hydrogel structures. 3D structures, printed with high lateral (37 μm) and vertical (180 μm) resolutions and superior 3D design freedoms (internal voids), exhibit ultimate stress and strain of 200 kPa and 2400%, respectively, under tension and simultaneously exhibit a high compression stress of 15 MPa with a strain of 95%, both with high recovery rates. The roles of swelling, necking, self-healing, cyclic loading, dehydration, and rehydration on the mechanical properties of printed structures are also investigated. To demonstrate the potential of this technology to make mechanically reconfigurable flexible devices, we print an axicon lens and show that a Bessel beam can be dynamically tuned via user-defined tensile stretching of the device. This technique can be broadly applied to other hydrogels to make novel smart multifunctional devices for a range of applications.
Dimension of the sample holder
The dimensions associated with the geometry are provided in Table 1.
Heat Distribution Simulation
It was important to maintain a critical temperature of the solution throughout the fabrication process, hence we designed a CAD model of the sample holder and studied the temperature distribution using simulations. The design of the sample holder consisted of a copper plate with a center hole embedded inside the PDMS bath ( Figure S1). Two heated rods on either side of the dish were designed to heat the copper plate and the 16mm diameter hole in the Cu plate acted as the fabrication window. To gain a better insight into the temperature distribution over the PDMS layer of the sample holder design, we performed computational fluid dynamics (CFD) simulation with conjugate heat transfer. The computational domain consisted of the designed PDMS dish and a copper plate extended on either side of the dish (Figure S1). At the top and bottom surface of the copper plate, a constant temperature boundary condition with T = 416 K (142.85°C) was applied to mimic the heater (used in an experimental study). All other surfaces of the geometry were provided with convection heat transfer to ambient temperature (T amb = 300 K (26.85°C).
The simulation was performed by discretizing the computational domain into a finite number of control volumes (or grid points) and by simultaneously solving the physical equations of continuity, momentum, and energy, in each point to obtain the spatial and temporal distribution of temperature. For the computational domain, a mesh with 160,000 grid points was utilized to obtain the temperature distribution ( Figure S2). The distribution was obtained for various time points until the steady state is attained. After the simulation study, the mesh was refined, and the simulation was repeated for meshes with 200,000, 240,000, and 280,000 grid points to investigate the grid sensitivity of the result. By comparing temperature distribution over the PDMS layer for different meshes, one with 240,000 grid points was found to be an optimum mesh which is utilized for further study here.
Finite volume methods-based commercial solver ANSYS Fluent was utilized to solve the equations. In the simulation, the entire domain was first initialized with T init = 353 K (79.85 °C), and the ambient was set at 300 K (26.85°C). The operating pressure was 1 atm. Figure S3 shows the temperature distribution over the PDMS layer at different time instants obtained using the optimum mesh. At approximately t = 90 s, the spatial distribution of temperature attains a steady state and further continuing the simulation shows no changes in the state. Figure S2. Temperature distribution over the PDMS layer at the plane corresponding to sections A-A' shown in Figure S1.
Swelling of DN gel structures
DN cylindrical stubs printed via TOPS were immersed in DI water. Results show that stub diameters and heights increased by 21% during the first hour, reached 45% in one day, and saturated after 78 hrs ( Figure S8). In terms of mass, the printed structure (right after the printing) weighed 0.5 gm and the structure absorbed 7.9 gm of water, which is 17 times the original mass.
Total water content before swelling was 81% and this increased to 98.8% after immersing the structure in water for 78 hours.
Influence of swelling on tensile and compression properties
Tensile properties of DN dogbone samples swollen for 5 min, 10 min, 4hrs, and 4 days were studied. The 4-day swollen sample was too soft to reliably handle, and hence it was omitted from this study. Results show that the ultimate stress, the ultimate strain, and the modulus were highest for samples swelled for 5 minutes as compared to samples swelled for 10 minutes and 4 hours. Longer exposure time during TOPS printing resulted in lesser swelling and therefore exhibited better mechanical properties. For instance, structures exposed for 2 minutes swelled 1.28 times of original length in 4 hours, whereas the structure exposed for 1 minute swelled 1.85 times its length at the same time. Longer light exposure during TOPS, when swelled for 4 hours withstood the larger ultimate stress of 14±2 kPa and modulus 14.95±3.7 kPa. These parameters were 5.5±0.5 kPa and 7.68±0.9 kPa for the structure exposed for 1 minute. Further, the strain was almost double (8.05±1.45) for longer exposed structures ( Figure S10). Figure S10. Stress-strain plots obtained from the swelled structures printed using different exposure times (60 seconds and 120 seconds). Structures were swelled for 4 hours in water.
Ultimate stress, ultimate strain, position, and elastic moduli obtained from swelled structures printed using different exposure times.
Effects of hydration, dehydration, and rehydration
13 The ability of the printed structure to recover after dehydration followed by rehydration was tested. As-printed DN dogbone structure, dried for 2 days using a dehumidifier, was rehydrated in water for 30 mins until the size reaches 1.3 times the size of the as-printed structure. Tensile tests showed that these samples regained their ultimate stress (24.5±1.5 kPa), ultimate strain (7.675±0.205). Similar results were obtained when the as-printed samples were first hydrated completely, dehydrated completely, and rehydrated to 1.3 times the size of as-printed samples with ultimate stress (24±0 kPa), ultimate strain (9±0.6), and modulus (11.11±0.12 kPa) ( Figure S11). Lens stretcher Figure S15. CAD design of before and after assembly of axicon lens stretching device. | 1,549.8 | 2023-03-23T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Tunable Ultra-high Aspect Ratio Nanorod Architectures grown on Porous Substrate via Electromigration
The interplay between porosity and electromigration can be used to manipulate atoms resulting in mass fabrication of nanoscale structures. Electromigration usually results in the accumulation of atoms accompanied by protrusions at the anode and atomic depletion causing voids at the cathode. Here we show that in porous media the pattern of atomic deposition and depletion is altered such that atomic accumulation occurs over the whole surface and not just at the anode. The effect is explained by the interaction between atomic drift due to electric current and local temperature gradients resulting from intense Joule heating at constrictions between grains. Utilizing this effect, a porous silver substrate is used to mass produce free-standing silver nanorods with very high aspect ratios of more than 200 using current densities of the order of 108 A/m2. This simple method results in reproducible formation of shaped nanorods, with independent control over their density and length. Consequently, complex patterns of high quality single crystal nanorods can be formed in-situ with significant advantages over competing methods of nanorod formation for plasmonics, energy storage and sensing applications.
The interplay between porosity and electromigration can be used to manipulate atoms resulting in mass fabrication of nanoscale structures. Electromigration usually results in the accumulation of atoms accompanied by protrusions at the anode and atomic depletion causing voids at the cathode. Here we show that in porous media the pattern of atomic deposition and depletion is altered such that atomic accumulation occurs over the whole surface and not just at the anode. The effect is explained by the interaction between atomic drift due to electric current and local temperature gradients resulting from intense Joule heating at constrictions between grains. Utilizing this effect, a porous silver substrate is used to mass produce free-standing silver nanorods with very high aspect ratios of more than 200 using current densities of the order of 10 8 A/m 2 . This simple method results in reproducible formation of shaped nanorods, with independent control over their density and length. Consequently, complex patterns of high quality single crystal nanorods can be formed in-situ with significant advantages over competing methods of nanorod formation for plasmonics, energy storage and sensing applications.
Electromigration (EM) is defined as the transport of atoms driven by momentum transfer from electron flow inside a current carrying material. This can lead to structural changes such as whisker growth and stress induced voids 1,2 . Electromigration is normally depicted in a negative light, as a serious problem for Very-Large-Scale Integration (VLSI) and Ultra-Large-Scale Integration (ULSI) electronic circuits due to the increasing current densities that accompany miniaturization 3 . Therefore much effort has been directed at developing new electronic materials, wiring designs and fabrication methods so as to minimize the effects of electromigration 4 . However, electromigration has recently been used constructively as a tool for fabrication of zero and one dimensional nanocrystals 5 nanostructures for local electric field enhancement in plasmonics 6,7 , molecular-scale biochemistry measurements [8][9][10][11] and to control the kinetic faceting of surface orientations that belong to the equilibrium shape of the crystals 12 . In general electromigration is affected by a large number of parameters such as current density, temperature, film thickness, grain size and timescale [1][2][3][4]13 . Previous attempts to create whiskers using electromigration either resulted in whiskers growing only at the anode 14,15 or required precise local conditioning of the substrate to generate localised whiskers 15,16 . An industrial method with control over mass production of whiskers over an entire substrate would have applications in plasmonics 17 , energy storage 18 and sensing applications 3,4,6,7 due to the high aspect ratio and large surface area structures that could be produced.
In this work, we demonstrate that electromigration can be applied to grow a dense structure of nanorods on a porous Ag substrate rather than the sparse nanorod formation previously observed. Electromigration in a porous medium is also shown to result in nanorods being formed along the length of the conductor rather than being confined to the anode as in a typical non-porous media. In addition to the high density of nanorod formation, it is found that the density and nanorod length can be independently controlled. The ability of the process to produce single crystal nanorods with aspect ratios exceeding 200 is also highly noteworthy, as is the production of high aspect ratio platelets in addition to nanorods. Furthermore, electromigration in a porous medium results in transformation of the internal pore and grain structure, an effect which has not previously been reported but which may have interesting technological applications in its own right. The growth mechanism of nanorods along the length of the conductor can be explained by the interaction of the normal electron wind force 19 driving atoms from cathode to anode with thermal gradients generated by the presence of the pores causing current constrictions. The simplicity of nanorod formation by electromigration, utilizing voltages ~7 mV across the ~500 μm length of the conducting stripe to generate high current density, may have significant advantages over other methods of nanorod growth 20 and in particular may allow complex patterns of nanorods being grown simply by controlling the local current densities.
Results
A schematic of the EM experimental setup is shown in Fig. 1a. The nanorod density, location, size and diameter are controlled by EM duration, current density and interruptions. EM was carried out on five samples (see Methods and Supplementary information) in air, at two temperatures (ambient and 200 °C) with current densities ranging from 2.2 × 10 8 to 2.45 × 10 8 A/m 2 (Table S1). The duration of the electromigration processes was from 6-480 h. Electron microscopy (SEM and TEM) analysis reveals high quality single crystal nanorods with diameter down to 20 nm (Fig. 1b,e). Straight nanorods occur in short duration EM experiments of up to 240 h (Fig. 1b). Figure 1c shows constant diameter curly nanorods formed after long duration uninterrupted EM. Platelets with hexagonal tips (Fig. 1d) can also be formed as a result of the initial protrusions expanding followed by growth. The hexagonal shape indicates that the rod emerges from a [1, 1, 1] plane of the fcc Ag lattice 21,22 . The EM experiments running for 480 h (Fig. 1c,g) show that the nanorods continue to grow until the density of the generated nanorods is high enough for them to meet neighbouring nanorods which may be responsible for limiting nanorod growth to a maximum length of ~20 μm ( Fig. S1b-d). Figure 1g shows a high density of nanorods with possible instances of welding between them. Previous studies have reported welding of individual silver nanorods under high current density at the point of contact 23 . Increasing current density to 1.70 × 10 9 A m −2 results in amalgamation of grains and pores at the cathode where reduction in the number of conduction paths leads to high localised heating and circuit failure after a short time of 18 h (Fig. S2). Increasing temperature up to 200 °C in samples (for 120 and 240 h) does not result in nanorod formation after EM. Figure 2 shows a comparison of the top surface of the Ag stripe S3 before and after EM for 240 h, at the anode, centre and cathode. The number density of nanorods decreases steadily between anode and cathode, with the density of nanorods at the cathode approximately one third that of the anode. This suggests that a uniform coverage of nanorods might be achieved by simply reversing the flow of current so that both ends of the stripe spend equal times as anode and cathode. Figure 2a-f shows the change in nanorod size distribution and number density across the stripe. Figure 2g-i summarizes the changes in number density and average nanorod length (not adjusted for orientation) across sample S3 for time periods of 120, 240 and 480 h. The average length of the nanorods ranges from 400 to 550 nm and longer nanorods do still appear across the sample, with the largest rods reaching an apparent length of 20 μm (Fig. 1f). Nanorod diameter is relatively constant at 25-40 nm, with some larger diameter rods (up to 100 nm) produced in one sample S2 (Table S1) which experienced multiple connection and disconnection events.
Characterisation of the fabricated nanorods shows the formation of good quality single crystal nanorods at the top surface of the samples. The TEM image of the 20 nm diameter nanorod is shown in Fig. 3a,b and the Selected Area Electron Diffraction (SAED) patterns 22,24 are interpreted as the overlapping of [111] and [110] zone axes which indicate a six fold symmetry previously associated with silver nanorod growth. The orientation of nanorods with respect to the electron beam is shown in the inset of ( Fig. 3b). Previous studies have suggested that weak points in a metal oxide layer act as nucleation sites for nanorod growth 25 . Figure 3c shows Energy Dispersive X-Ray (EDX) analysis on a nanorod and on a grain at the surface of the stripe. The bar chart shows that at the surface of the sintered silver grains after electromigration the composition contains 8% oxygen whereas the composition is 0% oxygen in the spectrum taken from the nanorod. These results suggest the existence of a thin oxide layer on the surface of the silver stripe during the EM process. In order to test this hypothesis further, EM on sample S3 (Fig. 3d,e) was interrupted after 120 h for a period of 240 h and then restarted with the same current density as before for an additional 120 h. The original rods (indicated by the red arrows) did not continue growing after the interruption, but new rods emerged, as shown by Fig. 3f. This behaviour was repeated at other locations and in other samples, suggesting that new weak spots in the thin oxide layer form when the samples cool down and contract during the interruption. The spectrum of nanorods after multiple interruption (Table S1) shows evidence of an oxide layer forming on the original nanorods which could account for the lack of growth in these nanorods (Fig. S4).
While there is no change in surface grain morphology during EM evident from either Fig. 2 or Fig. 3 apart from nanorod growth, massive change in internal grain structure is seen in Fig. 3g,i. This is not purely a result of thermal effects as shown by the control (Fig. 3h) which was stored at 200 °C for 480 h while the sample of Fig. 3i was subjected to EM. It should be noted that the structure seen in Fig. 3i after EM was found throughout the stripe at anode, centre and cathode and that the orientation of the elliptical grains changes throughout the stripe and does not appear to be simply correlated with the cathode-anode axis (arrow, Fig. 3i). The contrasting surface and interior behaviour indicates that while the oxide layer at the stripe surface prevents atomic migration along grain surfaces, in the stripe interior, grain surfaces do not support an oxide layer that prevents surface diffusion. Voids have not been directly detected at the surface of the porous sample, but formation of nanorods is accompanied by an increase in resistance (1-5%), followed by rapid resistance fluctuations and finally open circuit, accompanied by a crack. This can be explained with reference to Fig. 3g,i where it is seen that significant internal transformation of structure is occurring. Calata et al. 26 also observed formation of cracks at locations where the current density increases abruptly and attributed this to high atomic flux density in those regions. Thermal gradients are known to affect the divergence of the atomic flux and hence void and hillock (or nanowire) formation in EM experiments due to the dependence of electrical resistivity with temperature 13,27 . In order to calculate the magnitude of thermal gradients, electrical resistivity measurements of the stripe were performed using the four point probe method as the current was varied and the temperature in the stripe calculated using the coefficient of electrical resistivity variation with temperature. As an example, it was found that a current density of 2.4 × 10 8 A/m 2 resulted in a temperature rise of 83 °C.
The experimental determination and modeling of temperature distributions at the scale of the stripe and the scale of individual grains are shown in Fig. 4a,b. The temperature calculations for all samples from experimental (Table S2) can be used to estimate the rate of atomic accumulation due to the local temperature variations at grain level in order to test the hypothesis that these variations, together with the stripe level variations (Fig. 4a) are responsible for the observed pattern of nanorod growth across the stripe.
The number of atoms deposited on a grain surface is estimated from the volume of nanorods in SEM images taken from two similar sized grains after two consecutive EM time periods of 120 h. The experimental results are then compared with the atomic flux divergence model. We have taken the temperature gradient value from FEM simulations to calculate the Atomic Flux Divergence (AFD) using the portion of atomic flux from Eq. 3 due to electromigration, J Em 1,25,28 The value of the AFD ( Div J ( ) Em ), representing the number of atoms deposited per unit volume and in unit time, has been calculated as 3.46 × 10 +19 atom/m 3 /s using the values in Table S2 as an input. Multiplying the AFD by the volume of the grain and timescale of EM (120 h and 240 h) results in an estimate of the number of atoms deposited in a grain and is compared to values measured from SEM images in Fig. 5 experimentally in Table 1.
Discussion
The anomalous Ag nanorod formation at anode and central locations can be explained by the presence of complex thermal gradients generated by Joule heating in the constrictions between grains. An analysis based on Fick's laws of diffusion shows that atoms should accumulate in high current density regions where thermal gradients exist (See Methods). This calculation has considered only lattice diffusion in order to give a lower bound, explaining Table 1. A detailed calculation including grain boundary diffusion should result in closer agreement between the measured and calculated values and will be the subject of further work. However, the present work shows that the proposed mechanism of thermal gradients operating over the length scale of individual grains in combination with electromigration being responsible for nanorod growth is reasonable.
Within the interior of the sample, fast diffusion along the grain surfaces facilitates the internal grain refinement observed in (Fig. 3i) while at the surface the oxide layer prevents rapid surface diffusion and allows compressive stresses to build up locally, eventually leading to the observed nodule (nanodot) and nanorod formation at weak spots in the oxide layer. We note that in a porous material therefore, electromigration can be used to probe the chemical composition at the surface of the internal pores and in particular the absence or presence of an oxide layer inhibiting diffusion. The Ag oxide layer is key to formation of the nanorods and explains why the elevated temperature experiments failed to produce nanorods. Previously reported experimental data 29 shows there are three phases of AgO x film commonly found on a Ag surface. These are a silver rich phase (phase I), mostly Ag 2 O (phase II) and finally a mixed phase of Ag 2 O and AgO (phase III). The Phase III is the least stable and can easily decompose to Ag 2 O and O 2 even at temperatures below 160 °C. The elevated temperature tests and the high current density experiment presumably lead to decomposition of the Phase III layer and prevention of the compressive stress build up that is a prerequisite for nanorod formation. Similarly, the interrupted EM experiments which lead to growth of new nanorods can be explained by the interruption allowing oxide to form on fresh nanorod surfaces and the thermo-mechanical stresses as a result of temperature changes leading to formation of new weak spots on the grain surfaces. The ability to modify nanorod characteristics after initial fabrication can be used to construct sensors with nanorods grown in situ or modified in situ to optimize their sensitivity.
Conclusion
EM has been used as a constructive process leading to mass fabrication of nanorods simply by passing current through a stripe of porous material under controlled current density. Additionally, the internal grain refinement observed in the porous structure has no analogy in non-porous materials, and is facilitated by the large oxide free surface area present in these porous materials. Absence of grain refinement at the surface of the substrate, EDX measurements on the nanorods, heated samples and interrupted EM experiments all indicate that an oxide layer on the exterior Ag surface restricts atomic diffusion here and hence allows compressive stress build up leading to nanodot and then nanorod formation at weak points of the oxide layer. The mechanism of nanorod growth away from the anode in a porous substrate was investigated by atomic flux divergence calculations taking into account thermal gradients on the scale of a single grain and assuming only lattice diffusion. The calculations show that the number of deposited atoms in a grain due to thermal gradient fluctuations is an order of magnitude lower than observed experimentally but supports the hypothesis that grain-scale thermal gradients are responsible for the observed nanorods once the higher atomic flux from grain boundary diffusion is taken into account. In contrast to the exterior surface with its oxide layer constraining surface diffusion, the internal pore surfaces in sintered silver offer fast surface diffusion pathways. This results in refining of the pore structure during EM. Electromigration can hence be used as an in-situ probe of the surface condition of the interior pores in conducting porous media as well as a mechanism by which these pores can be transformed. Finally the experimental approach reported here suggests that high density, high aspect ratio nanorods can be fabricated using EM with precise control of nucleation density and size achievable by modulating the current density. Methods Experimental. The porous Ag samples were fabricated using NanoTach ® X silver paste produced by NBE Tech, a paste used for attaching semiconductor die to ceramic substrates typically for power electronics applications. The paste consists of ~30 nm diameter Ag particles together with ligands to prevent agglomeration and organic components to improve paste rheology. The paste is printed onto the edge of a glass cover slip to create a simple sample geometry with a high degree of control over parameters such as length, width and thickness. The same paste is then applied at both ends to connect gold wires at anode and cathode ends after a further sintering step. A feature of this EM setup is that the anode and cathode is not defined by junctions with a refractory metal as in a typical Blech setup 30 , but by the existence of sharp variations of cross sectional area resulting in high current density inside the stripe and low current densities beyond the anode and cathode to form a 3D analogue of the bow-tie structure 31 . Figure 1a shows a schematic of the experimental setup. Figure S1a shows a TEM image of a cross section through the sintered material revealing that the original nanoparticles have merged to form grains ~1 μm diameter with a high density of twin boundaries and a porosity of ~25% (see 32 for further details of the structure). Table 1 of number of atoms deposited in two similar sized grains. SEM images of the anode region of sample S3 (a) before EM, (b) after 120 hours and (c) after 240 hours at the same location. Two similar sized grains (Grain 1 (d) and 2 (g)) were selected to compare the calculation of atomic deposition numbers after 120 hours (e,h) and after 240 hours (f,i).
where C a is the atomic concentration, J a is the total atomic flux, r is the source/sink term which in the simplest models is given by − ΔC a /τ where ΔC a represents the excess of atoms from its equilibrium value and τ represents a relaxation time. The atomic flux vector contains contributions from self-diffusion, electric current ('wind force'), temperature gradient and hydrostatic stress gradient shown respectively as the terms in Eq. (2) 1,25,28 where D a is the diffusivity of atoms, Z* is the effective charge, e is the elementary charge, k b is Boltzmann's constant, T is the absolute temperature, J e is the current density, Q* is the heat of transport, f is the atomic relaxation factor, Ω is the atomic volume, and σ is the hydrostatic stress. D a can further be expressed by Arrhenius law as where D 0 is the pre-exponential factor and E a is the activation energy. Equations (1-3) form the standard base for discussion of electromigration, and have been investigated extensively for Cu and Al on-chip interconnects 13,28 . Numerical estimates of the temperature gradient term in Eq. (2) show that the direct effect of the temperature gradient on J a is negligible. However, the ∇. J a term in Eq. (1) leads to significant atomic accumulation or depletion arising from temperature gradients via the spatial variation in D a given by Eq. (3) in the presence of a thermal gradient. Hence knowledge of the local current density and temperature distribution is required to calculate ∇. J a and nanorod growth.
To calculate the current and temperature distribution within the stripe we have used commercial FEM software COMSOL (Joule heating module). The electrical current has been simulated within the silver stripe from the source electrode to the drain electrode taking into account Joule heating, conduction through the glass at a current density of 2.4 × 10 8 A/m 2 . The temperature of the bottom surface of 3000 μm thick SiO 2 substrate on which the stripe is located was set to the room temperature of 298.15 K. The thermal Conductivity of SiO 2 was given the textbook value of 0.8 W/mK 33 and the thermal conductivity and electrical resistivity of material were similarly set at 250 W/mK and 7.33 × 10 −8 Ω m respectively 34 . | 5,198.2 | 2016-02-29T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
p300/CBP-associated Factor Drives DEK into Interchromatin Granule Clusters*
DEK is a mammalian protein that has been implicated in the pathogenesis of autoimmune diseases and cancer, including acute myeloid leukemia, melanoma, glioblastoma, hepatocellular carcinoma, and bladder cancer. In addition, DEK appears to participate in multiple cellular processes, including transcriptional repression, mRNA processing, and chromatin remodeling. Sub-nuclear dis-tribution of this protein, with the attendant functional ramifications, has remained a controversial topic. Here we report that DEK undergoes acetylation in vivo at lysine residues within the first 70 N-terminal amino acids. Acetylation of DEK decreases its affinity for DNA elements within the promoter, which is consistent with the involvement of DEK in transcriptional repression. Fur-thermore, deacetylase inhibition results in accumulation of DEK within interchromatin granule clusters (IGCs), sub-nuclear structures that contain RNA processing factors. Overexpression of
The mammalian nuclear protein DEK is associated with the pathogenesis of autoimmune diseases and cancer (1)(2)(3).
Autoantibodies specific for DEK are present in patients with juvenile rheumatoid arthritis and other inflammatory diseases (4,5), and fusion of dek and can by translocation of chromosomes 6 and 9 results in acute myeloid leukemia (6). DEK expression is also increased in multiple malignancies, including bladder cancer, hepatocellular carcinoma, glioblastoma, melanoma, T-cell large granular lymphocyte leukemia, and acute myeloid leukemia, independent of the t (6,9) chromosomal translocation (1,2,(7)(8)(9). Indeed, a gene profile analysis of 41 adult patients with acute myeloid leukemia using quantitative real-time PCR showed that DEK is overexpressed in 98% of the cases (10). Most recently, quantitative multiplex PCR was used to precisely map the focal region of genomic gain on chromosome 6p22 in bladder cancer cells (7). Genomic mapping data identified the dek gene as being centrally located within this minimal region, suggesting DEK as an important candidate for involvement in pathogenesis of bladder cancer.
DEK does not belong to any characterized protein family, and sequence similarity with other factors is limited to the SAF (scaffold attachment factor) box (11,12), also known as the SAP domain (from SAF-A/B, acinus and PIAS) (13), a 34-amino acid motif found in nuclear factors that participates in chromatin organization, mRNA processing, and transcription. Interestingly, DEK has been implicated in all three of these nuclear events. We have previously shown that DEK binds to the TGrich peri-ets (pets) site from the human immunodeficiency virus type 2 (HIV-2) 1 promoter and that this binding is responsive to cellular signals (14,15). In the case of HIV-2, DEK appears to function in transcriptional repression (15). Subsequently, DEK has also been isolated in a complex containing both hDaxx and HDAC2, essential proteins involved in transcriptional repression (16), lending further support to the role of DEK in this process. In addition, DEK has been shown to play an active role in DNA and chromatin remodeling and to bind preferentially to supercoiled and four-way junction DNA (17)(18)(19). Moreover, DEK has been found in complexes with a number of mRNA splicing and export factors and spliced transcripts themselves (20,21). Although the direct role of DEK in the exon-exon junction complex, as originally proposed, remains controversial (22,23), there is considerable evidence supporting its association with one or more SR proteins that function in pre-mRNA splicing through protein-protein interactions (20).
There is a growing body of evidence suggesting that separate steps along the pathway of gene expression are integrated (24,25). As DEK has defied categorization as a single function protein, it is likely poised at the interface of multiple components of the gene expression pathway; however, this poses the question of how DEK participates in processes known to occur in separate sub-nuclear compartments. The majority of endogenous DEK is associated with chromatin and DNA, whereas only 10% of DEK is released with RNase treatment (26). Nonetheless, DEK has been found in association with RNA and RNA-processing factors by separate groups using multiple experimental approaches.
One mechanism by which a single polypeptide can exhibit different properties is through post-translational modifications. Acetylation can modify protein function in a number of ways. The most well known mechanism is that of histone acetylation, which by altering the charge and size of particular lysine residues results in a loosened association with DNA and a subsequent increase in local gene expression (27). It has recently become clear that many transcription factors are acetylated, which often results in an enhancement of function (27,28). Mechanisms mediating this increase in transactivating potential include alterations in DNA-binding affinity (p53, GATA-1, GATA-2, E2F, and c-Myb) (28 -33), affinity for negative regulators (NF-B and B-Myb) (34,35) or positive cofactors (p53) (36), and localization (CIITA and HNF-4) (37,38). Here we report that DEK is acetylated in the cell, and in vitro it is a substrate for the acetyltransferases CBP (CREB-binding protein), p300, and P/CAF (p300/CBP-associated factor). Differential reactivity of full-length versus N-terminally truncated DEK to an antibody that specifically recognizes acetylated lysine residues suggests that DEK is acetylated within the first 70 amino acids and that these modifications may have important functional consequences. Indeed, we show that increased acetylation of DEK results in a significantly decreased affinity for DNA. In addition, we find that acetylation markedly alters the localization of DEK: inhibition of deacetylase activity triggers redistribution of DEK from a diffusely nuclear to a punctate pattern within the nuclear space. We show that this pattern results from the accumulation of DEK in structures known as nuclear speckles or interchromatin granule clusters (IGCs), which are well characterized sub-nuclear domains containing RNA-processing and transcription factors. Importantly, DEK can be driven into the IGCs by overexpression of P/CAF, but not CBP/p300, and this movement can be prevented by specifically blocking the activity of P/CAF with a newly developed, synthetic, cell-permeable inhibitor. Another group of investigators has observed that, in a small percentage of cells, DEK is found at a higher concentration in IGCs than in the nucleus at large (20). Our finding both strengthens and explains their observation by revealing the particular condition that favors localization of DEK to the IGC, namely its acetylation by P/CAF. Although there have been several examples of posttranslational modifications controlling the movement of proteins within the cell, this is the first reported example of acetylation playing a direct role in the relocation of a protein to the IGC. It seems that the degree of acetylation, and its regulation, allow DEK to function in multiple pathways that take place in distinct sub-nuclear compartments.
MATERIALS AND METHODS
Cell Culture and Transfection-T98G cells were purchased from ATCC and passaged in Dulbecco's modified Eagle's medium with 10% fetal bovine serum and antibiotics. GFP-DEK was constructed as a fusion protein between DEK and an enhanced variant of Aequorea victoria green fluorescent protein. Primers with EcoRV (5Ј) and EcoRI (3Ј) restriction sites were designed for PCR-based subcloning of the dek coding region into the pGNVL3 mammalian GFP vector (gift of T. Glaser, University of Michigan). Cells were transfected with 2.5 g of GFP-DEK, P/CAF, or CBP plasmid using Lipofectamine 2000 (Invitrogen). After transfection, cells were treated overnight with 1 M TSA or 5 mM sodium butyrate, or left untreated, and fixed for immunocytochemistry on the following day. Alternatively, untransfected cells were treated overnight with 330 nM TSA or left untreated and fixed the following day.
Construction of DEK-encoding Adenovirus-FLAG-tagged DEK was excised from pCMV-Tag1 (Stratagene) using HindIII and BamHI and introduced via those sites into the adenoviral shuttle plasmid pACCMVpLpA(Ϫ)loxP-SSP. This vector was linearized with SfiI, and the transgene was recombined into E1A/E1B-deficient adenovirus by the University of Michigan Vector Core. Virus was harvested from E1A/E1B-positive producer cells, purified by cesium chloride density gradient centrifugation, and assayed for plaque-forming unit concentration. T98G cells were transduced at a multiplicity of infection of 100 by incubation for 48 h followed by cell harvesting for immunoprecipitation.
Immunoprecipitation and Western Blotting-Immunoprecipitation of FLAG-DEK from transduced T98G cells was performed using agaroseconjugated anti-FLAG M2 monoclonal antibodies (Sigma) by following the manufacturer's instructions with regards to cell lysis, incubation, and washing, and elution by competition with 3xFLAG peptide. For Western blotting, proteins were transferred from the gel to polyvinylidene difluoride membrane, blocked in 5% powdered milk, and probed with either horseradish peroxidase-conjugated anti-FLAG antibody at a dilution of 1:1000 (Sigma), or monoclonal anti-acetyllysine antibody at a dilution of 1:1000 (Cell Signaling Technologies) or monoclonal anti-DEK antibody (Biosciences), followed by horseradish peroxidase-conjugated anti-mouse IgG (Molecular Probes) for chemiluminescent detection.
Production of Recombinant His-DEK in Baculovirus-DEK coding sequence was subcloned into the pBacPAK-His3 vector for recombination into the baculovirus genome. Recombination, virus production, cell transduction and harvesting, and protein purification were performed as indicated in the manufacturers' instructions for the BacPAK Baculovirus Expression System (Clontech) and HIS-trap nickel-chelating columns (Amersham Biosciences).
In Vitro Protein Acetylation Assay-Recombinant His-DEK was dialyzed overnight at 4°C into acetylation buffer (10 mM Tris, pH 7.4, 150 mM NaCl, 1 mM EDTA, 1 mM dithiothreitol, 0.5 mM phenylmethylsulfonyl fluoride, and 5% glycerol). DEK was then incubated at 30°C for 1 h with 50 M [ 14 C]acetyl-CoA (Amersham Biosciences), 10 mM sodium butyrate, and 50 nM of either CBP, p300, or P/CAF, which had been purified as previously described (39). Reactions were resolved by SDS-PAGE; gels were stained with Coomassie Blue reagent and dried. The 14 C signal was detected using a phosphorimaging screen and FX phosphorimaging device (Bio-Rad).
Dissociation Constant Measurements Using Electrophoretic Mobility Shift Assay-Dissociation constant titrations were conducted by incubating immunoprecipitated FLAG-DEK (10-fold on either side of the dissociation constant) with a 32 P-end-labeled DNA probe (5 nM) (sense, 5Ј-TAT ACT TGG TCA GGG CGA ATT CTA ACT AAC AGA-3Ј) containing the pets site (bold letters (15)) at 22°C for 1 h in binding buffer (20 mM Tris, pH 7.4, 150 mM NaCl, 5% sucrose w/v; 10 l of total volume). The dependence of binding affinity on buffer ionic strength was assessed by performing a series of binding titrations as a function of NaCl concentration (85 mM to 350 mM). Bound and free DNA were separated by electrophoresis in a non-denaturing polyacrylamide gel (4%) at 170 V at 4°C for 1 h in 1ϫ TBE buffer (100 mM Tris, 90 mM boric acid, 1 mM EDTA, pH 8.4). The gels were dried and exposed to a phosphor screen overnight, and the bands were quantified using a Storm 840 PhosphorImager with ImageQuaNT software (Amersham Biosciences). The data were fit via nonlinear least-squares regression to the single-site binding isotherm: % free DNA ϭ K d, app / (K d, app ϩ [protein]).
From the above equation, the apparent K d corresponds to the protein concentration at which half the DNA is bound (40). Errors in K d are the standard error based on a minimum of three trials.
Immunocytochemistry-Three phosphate-buffered saline (PBS) washes were performed in between each of the following steps; when indicated, PBS washes contained 0.1% saponin. Cells were washed and fixed for 10 min in 4% paraformaldehyde. Fixed cells were washed, blocked for at least 1 h with 0.2% bovine serum albumin in PBS, rewashed with PBS/saponin, and incubated for 1 h with mouse monoclonal anti-SC35 (23 g/ml; Sigma) in PBS/saponin. If cells were not transfected with GFP-DEK, then the incubation included polyclonal anti-DEK serum at a 1:200 dilution (gift of G. Grosveld). Cells were then rewashed in PBS/saponin and reblocked with goat serum in PBS/ saponin at a 1:50 dilution for 1 h, followed by PBS/saponin washing and a 1-h incubation with Alexa fluor 594-conjugated goat anti-mouse IgG (20 g/ml; Molecular Probes) in PBS/saponin. If cells were not transfected with GFP-DEK, then incubation included Alexa fluor 488-conjugated goat anti-rabbit IgG (20 g/ml; Molecular Probes). Cells were washed with PBS/saponin, refixed for 10 min, rewashed, and incubated with 4Ј,6-diamidino-2-phenylindole for 10 min, rewashed with PBS, and distilled water then dried overnight. Coverslips were mounted with Antifade A reagent (Molecular Probes), and images were captured with a Zeiss Laser Scanning Microscope (LSM 510, version 2.8 SPI). For staining of PODs, the procedure was identical except for the use of polyclonal rabbit anti-CBP as the primary antibody (10 g/ml, Upstate Biotechnology), and Alexa fluor 594-conjugated goat anti-rabbit IgG as the secondary antibody (20 g/ml, Molecular Probes).
Preparation of H3-CoA-20-Tat and H3-(Ac)-20-Tat-H3-CoA-20-Tat (Ac-ARTKQTARKSTGGK(CoA)APRKQLYGRKKRRQRRR-OH) is a derivatized version of the synthetic P/CAF inhibitor H3-CoA-20, in which the C-terminal sequence ends in the 12 amino acid residues of the cell permeabilizing Tat sequence. H3-(Ac)-20-Tat, a control compound, differs from H3-CoA-20-Tat in that the CoA moiety is replaced by a hydrogen atom. These compounds were synthesized by the solid phase method on a Rainin PS3 peptide synthesizer using the Fmoc (N-(9fluorenyl)methoxycarbonyl) strategy, analogous to the previously described method for H3-CoA-20 (41,42). The epsilon amino group of the lysine residue that corresponds to Lys-14 of histone H3 was protected with the N-[1(4,4-dimethyl 2,6-dioxocyclohexylidene)ethyl] (Dde) group whereas other Lys residues were protected with the t-butoxycarbonyl group. Following amino acid couplings and N-terminal acetylation, the Dde group was removed by mixing the fully protected peptide resin with 2% hydrazine in dimethylformamide for 3 h at room temperature. The peptide resin was then reacted with 5 equivalents of bromoacetic acid and 5 equivalents of diisopropylcarbodiimide for 16 h at room temperature (or acetic anhydride for 1 h for the control peptide). Peptides were cleaved from the resin with Reagent K (trifluoroacetic acid:phenol:H 2 O: thioanisole:ethanedithiol:triisopropylsilane (81.5:5:5:5:2.5:1)) for 4 h at room temperature and subsequently precipitated with ice-cold diethyl ether. Precipitates were collected by centrifugation (3000 rpm, 5 min), the supernatants discarded, and the pellets washed two times with cold diethyl ether (30 ml). Precipitated peptides were dissolved in 5 ml of water, flash-frozen, lyophilized, and purified by preparative reversed phase (C 18 ) high-performance liquid chromatography using a gradient of H 2 O:CH 3 CN:0.05% trifluoroacetic acid. The bromoacetylated peptide was conjugated with 2 equivalents of CoASH in a minimal volume of aqueous 0.5 M trimethylammonium bicarbonate (pH 8) for ϳ16 h at room temperature, lyophilized, and purified initially by passage over anion exchange chromatography (Dowex 1 ϫ 8 -100) to remove excess CoASH followed by reversed phase high-performance liquid chromatography in a gradient of H 2 O:CH 3 CN:0.05% trifluoroacetic acid. Peptides were confirmed to be Ͼ95% pure by high-performance liquid chroma-tography, and their structural identities were confirmed by mass spectrometry. The inhibitory properties of both peptides against P/CAF and p300 are shown in Table I.
RESULTS
DEK Is an Acetylated Protein-To determine whether DEK is acetylated in vivo, T98G human glioblastoma cells were infected with an adenoviral vector encoding for N-terminally tagged FLAG-DEK. Glioblastoma is one of several tumors that exhibit increased expression of DEK, as compared with its tissue of origin (2). FLAG-DEK was immunoprecipitated with anti-FLAG antibodies and separated by SDS-PAGE. Staining for total protein revealed that one major band is recovered in the immunoprecipitate at 48 kDa and a minor protein band at 35 kDa (data not shown). N-terminal sequencing has previously demonstrated that the 35-kDa band corresponds to a truncated form of DEK (amino acids 70 -375) (5). Importantly, immunoprecipitated DEK is reactive with a monoclonal antibody that is specific for acetylated lysine residues (Fig. 1A, lane 4), suggesting that DEK is acetylated in the cell. Although full-length DEK is reactive with both a DEK-specific monoclonal antibody and the acetyllysine-specific antibody, the 35-kDa band is only reactive with the DEK-specific antibody (Fig. 1B, lane 1 versus lane 2). These data indicate that, of the 67 potential lysine residues within DEK, the acetylated lysine residue is 1 (or more) of the 7 that are within the first 70 amino acids.
In vitro acetylation assays were used to determine whether DEK could serve as a substrate for the well characterized acetyltransferase proteins CBP, p300, and P/CAF. Fig. 1C indicates that all three enzymes acetylate recombinant DEK purified from baculovirus-infected insect cells. In the absence of an acetyltransferase enzyme, DEK remained unlabeled. Similar in vitro acetylation reactions using FLAG-DEK immunoprecipitated from T98G cells gave identical results (data not shown). These data suggest that acetylation is not occurring solely on the tag, because one recombinant protein is FLAGtagged while the other is polyhistidine-tagged, and the His tag contains no lysines. These results also indicate that in mammalian cells there are available CBP/p300 and P/CAF acetylation sites within native DEK, which validates the use of deacetylase inhibitor treatment to shift the equilibrium toward more highly acetylated forms of the protein. To confirm that treatment with a deacetylase inhibitor alters the acetylation state of DEK in vivo, T98G glioblastoma cells were treated or mock-treated overnight with the deacetylase inhibitor trichostatin A (TSA). Indeed, FLAG-DEK immunoprecipitated from TSA-treated T98G cells demonstrates considerably more reactivity toward the monoclonal anti-acetyllysine antibody than similar amounts of FLAG-DEK isolated from untreated cells (Fig. 1A, lane 3 versus lane 4). These data demonstrate that treatment of DEK-infected cells with the deacetylase inhibitor TSA directly affects the acetylation state of DEK.
Acetylation of DEK Decreases Affinity for DNA-We have previously demonstrated that DEK binds to the TG-rich pets site from the HIV-2 promoter and that dephosphorylation of endogenous DEK results in release of DEK bound to this site (14,15). These data indicate that, similar to many other proteins, post-translational modifications may play a significant role in the function of DEK within cells. Waldmann et al. (19) have also demonstrated that DEK binds to alternative forms of DNA such as four-way junctions and positive supercoils. Therefore, to investigate the effect of acetylation on the DNA-binding properties of DEK, the binding affinity of several differentially acetylated forms of FLAG-DEK for the pets site was determined using the gel shift assay (Fig. 2). For these experiments, FLAG-DEK was isolated from untreated and TSA-treated T98G cells, as well as from cells treated with cell-permeable inhibitors of P/CAF and CBP (see Table I and discussion of these inhibitors below). The apparent K d of DEK isolated from untreated cells for the pets site is 350 nM, versus 850 nM for DEK isolated from TSA-treated cells and 210 nM for DEK purified from acetylase inhibitor-treated cells. These data indicate that the more acetylated form of DEK has a 3-to 4-fold decrease in affinity for DNA, as compared with less acetylated forms of DEK. These data are consistent with the general observation that TSA activates transcription, and the finding that DEK likely plays a role in transcriptional repression (15,16).
A significant driving force for many DNA-binding proteins is the release of cations from the negatively charged phosphate backbone of DNA upon complexation (i.e. the polyelectrolyte effect) (43,44). Cation release generally results from salt bridge formation between positively charged protein residues and the DNA backbone, and can be interpreted in terms of the number of ionic interactions present in the complex (43). Therefore, to investigate the contribution of ion pairs to the stability of the DEK⅐pets complex, the binding affinity of DEK for the pets site was determined as a function of [Na ϩ ] (Fig. 2B), where the slope of the plot of ln K d, app versus ln[NaCl] represents the stoichiometry of cation release (43,45). These data suggest that binding of DEK to the pets site is accompanied by significant cation release from the phosphate backbone and that ionic interactions likely play an important role in complex stability. In contrast, the binding affinity of DEK isolated from TSAtreated cells for the pets site demonstrates a considerably lower dependence on buffer salt concentration (Fig. 2C). These data indicate that fewer ionic interactions are made in the DEK⅐pets complex following treatment with TSA, presumably due to the neutralization of the positively charged lysine residues within the binding site. Together, these data suggest that lysine residues of DEK play a direct and important role in DNA binding through recognition of the DNA backbone.
Deacetylase Inhibitors Alter the Localization of DEK-Recent reports have demonstrated that acetylation alters the localization of certain transcription factors (37,38). Therefore, TSA treatment was used to investigate whether acetylation would change the location of DEK. T98G glioblastoma cells were treated or mock-treated with TSA overnight and fixed for immunocytochemistry. Comparison with 4,6-dia-minido-2-phenylindole staining confirmed that, in untreated T98G cells, DEK is distributed diffusely throughout the nucleoplasm as reported previously (Fig. 3A). However, the pattern of DEK localization is dramatically altered in TSAtreated cells (Fig. 3D): in particular, DEK staining adopts a punctate pattern, suggesting that significant amounts of protein have accumulated in specific sub-nuclear structures in response to deacetylase inhibition. These data also indicate that DEK acetylation is dynamic; in other words, DEK is normally a substrate for deacetylase enzymes as well as acetylases.
As the antibody used for the detection of DEK is polyclonal, a vector encoding for DEK fused with an enhanced variant of A. victoria GFP was constructed to independently confirm that the staining corresponded to DEK itself. T98G cells were transfected with this vector, treated overnight with TSA or left untreated, and fixed. As expected, GFP-DEK is diffusely distributed within the nucleus of untreated cells (Fig. 4A). In contrast, GFP-DEK in TSA-treated cells is concentrated in punctuate bodies within the nuclear space (Fig. 4D). To more specifically correlate this phenotype with inhibition of deacetylase activity rather than an unpredicted effect of TSA treatment, cells were next treated with the deacetylase inhibitor sodium butyrate. As seen in Fig. 4G, sodium butyrate treatment also results in the relocalization of GFP-DEK to distinct sub-nuclear structures. Identical results were seen with sodium butyrate treatment and endogenous DEK (data not shown). At the concentrations and incubation times used, neither the TSA nor the sodium butyrate treatments caused cells to undergo apoptosis, as determined by visual inspection of cell nuclei with 4Ј,6-diamidino-2-phenylindole staining.
DEK Relocates to the Interchromatin Granule Cluster-There are many distinct structures within the nuclear space, a number of which appear as collections of small, round dots when stained (46). McGarvey et al. (20) have previously demonstrated that in ϳ15% of cells, DEK appears to be enriched in the nuclear speckles, also termed interchromatin granule clusters (IGCs), which is consistent with their observation that DEK associates with splicing factors. In contrast, acetylation of the oncoprotein EVI1 results in its relocalization to punctate structures that are enriched in CBP, which identifies them as promyelocytic leukemia protein oncogenic domains (PODs) (47,48). To determine whether the punctate structures in Figs. 3 and 4 are either IGCs or PODs, cells were stained with antibodies specific for either SC35 or CBP, respectively. Figs. 3 (B and E) and 4 (B, E, and H) depict staining of SC35. Figs. 3F and 4 (F and I) indicate substantial colocalization between the DEK-containing bodies and the IGC, whereas Figs. 3C and 4C show that the colocalization signal does not result simply from the overlap of SC35 staining with diffuse nuclear DEK. Colocalization was not seen with anti-CBP staining of PODs, which revealed a set of nuclear bodies distinct from the DEK-containing structures (Fig. 3, G-I). Thus, by preventing deacetylation, TSA and sodium butyrate shift the equilibrium toward the more acetylated form of DEK, which causes this protein to accumulate in IGCs. Our data suggest that, in normal cells, the more acetylated fraction of DEK interacts with mRNA processing proteins within IGCs, which are thought to be accumulation sites of transcriptional factors and mRNA processing factors (49,50). Indeed, previous data have supported the association of DEK with spliceosome proteins, most notably in complexes important for the coupling of pre-mRNA splicing and post-splicing events (21,51).
Acetylation by P/CAF Drives DEK into IGCs-To investigate which histone acetyltransferase is responsible for the movement of DEK into IGCs, cells were transfected with a vector encoding GFP-DEK and treated with newly developed, specific inhibitors of P/CAF (Table I) or CBP/p300 (52,53). These inhibitors have a significant advantage over previously employed synthetic histone acetyltransferase inhibitors in that they are cell-permeable and therefore do not require transfection. Inhibition constants were determined as previously described and are shown in Table I (41).
Treatment of GFP-DEK-transfected cells with the selective cell-permeable inhibitor of CBP/p300, followed by treatment with TSA, does not block the movement of GFP-DEK to IGCs (Fig. 5B). In contrast, treatment of cells with the P/CAF inhibitor H3-CoA-20-Tat, prior to TSA treatment, blocks the movement of GFP-DEK to IGCs (Fig. 5C). However, treatment of cells with the control peptide (lacking only the acetyl-CoA functional group), followed by the addition of TSA, has no effect on the localization of DEK (Fig. 5D). These data suggest that, although both P/CAF and CBP/p300 can acetylate DEK in vitro, it is the specific acetylation by P/CAF that results in the movement of DEK to the IGC.
To further investigate the role of P/CAF and CBP in the sub-nuclear movement of DEK, cells were co-transfected with GFP-DEK and a vector encoding either CBP or P/CAF. DEK remains pan-nuclear with the overexpression of CBP (Fig. 6A), whereas overexpression of P/CAF with DEK results in the movement of DEK to IGCs (Fig. 6B). This relocalization can be blocked with the addition of the P/CAF-specific inhibitor (Fig. 6C), but not the P/CAF inhibitor control molecule (data not shown), following co-transfection of plasmids expressing GFP- conjugates against p300 and P/CAF These assays were carried out with recombinant human P/CAF and p300 as described previously (41,42) DEK and P/CAF. These data support the hypothesis that it is the specific acetylation of DEK, or an associated protein, by P/CAF that causes DEK to move to an alternative location within the nucleus: the IGC. To our knowledge, this is the first demonstration that a specific acetylase can control the movement of a protein into the IGC. DISCUSSION Although DEK has been associated with multiple disease states, particularly neoplastic conditions such as acute myeloid leukemia, hepatocellular carcinoma, glioblastoma, melanoma, and T-cell large granular lymphocyte leukemia, its role in disease and normal cellular function remains unclear (1,2,8,9). However, the connection of DEK to diverse nuclear functions suggests that additional mechanisms may exist that confer alternative functional properties to DEK. For example, dephosphorylation results in the release of DEK from the pets site (15). Here we demonstrate for the first time that full-length DEK is reactive to an acetyllysine-specific antibody, indicating that DEK is acetylated in the cell. Interestingly, the N-terminally truncated form of DEK is not reactive with this antibody, suggesting that the acetylation site is within the first 70 amino acids.
Similar to other histone proteins and transcription factors, our data suggest that acetylation of DEK may have significant functional consequences, because TSA treatment results in an almost 4-fold decrease in affinity for DNA. The large dependence of binding affinity on NaCl concentration suggests that a considerable number of ionic interactions are made in the DEK⅐pets complex, presumably between positively charged lysine and/or arginine residues and the negatively charged DNA phosphate backbone. The generally nonspecific nature of these interactions may indicate why DEK has also been shown to bind alternative DNA structures, such as four-way junctions and positive supercoils, and to play a role in chromatin remodeling (17)(18)(19). The decrease in the overall dependence of binding affinity on NaCl concentration for the more acetylated form of DEK suggests that fewer ionic interactions are made in this complex. These data indicate that acetylation of DEK directly affects its ability to bind DNA, as opposed to altering its affinity for potential positive or negative cofactors, as has been described for other transcription factors (34 -36). The recognition of alternative DNA structures may also be affected by the acetylation state of DEK, and indirectly contribute to the role of DEK in disease. Similar to some HMG proteins, the binding of DEK to distorted DNA structures resulting from DNA damage mechanisms, such as UV irradiation, may block access to the DNA repair machinery (54,55). Treatment of cancer cells with HDAC inhibitors, such as TSA and butyrate, has been shown to have potentially therapeutic effects in certain malignancies (56,57). It is interesting to speculate that treatment of DEKassociated malignancies with deacetylase inhibitors might have a beneficial effect by causing the release of acetylated DEK from the damaged DNA, thus allowing access to the DNA repair machinery. HDAC inhibitors might also impact upon DEK-associated cancers by altering gene expression patterns (see discussion below).
We have also demonstrated for the first time that acetylation alters the sub-nuclear localization of DEK, as deacetylase inhibition results in the redistribution of DEK from being diffusely nuclear to being concentrated into IGCs. To our knowledge, this is the first demonstration that movement of any protein into the IGC is under the control of acetylation changes. We have not strictly shown that acetylation of DEK is the cause, rather than the consequence, of its relocation into the IGC. Attempts to address this issue using protein transfection have proven unrewarding, as we have been unable to transduce DEK into cells. This may be due to the highly charged nature of the individual domains of DEK, or to the propensity of this protein to multimerize (see below). However, it appears that the most straightforward explanation for the translocation of DEK into the IGC is that this movement follows acetylation. There are several reasons to assume that this is the case. First, when DEK is bound to DNA, it is tightly associated with chromatin, and acetylases are also known to act on chromatin, so this would be a logical place for DEK/acetylase interaction. Further, because it is the addition of deacetylase inhibitors or transfection of a vector expressing P/CAF that is the first step in our experiments, the simplest explanation is that an acetylation event drives DEK into IGCs. Finally, as is discussed further below, the acetylation of DEK would favor its accumulation in chromatin-free compartments such as the IGC. Therefore, it appears most likely that acetylation events precede the relocalization of DEK into the IGC, rather than DEK first entering IGCs and then undergoing acetylation.
Importantly, DEK is driven into the IGCs by overexpression of P/CAF, but not CBP, even though both histone acetyltransferases can acetylate DEK in vitro. In contrast, the majority of studies to date have demonstrated that it is the acetylation by CBP/p300 that has important functional consequences in vivo, including DNA binding, chromatin remodeling, and proteinprotein recognition (29,30,(32)(33)(34)(35)(36). Interestingly, one of the few examples of an in vivo role for acetylation by P/CAF is for the Class II transactivator protein, CIITA: acetylation results in relocalization of the protein from the cytoplasm to the nucleus (37). The sub-nuclear movement of DEK can be blocked with a novel P/CAF-specific small molecule inhibitor, but not by a similar CBP/p300 inhibitor or a control molecule. These compounds and other similar small molecule inhibitors of various acetylases have considerable advantage over previously described acetylase inhibitors, because they are cell-permeable and therefore do not require transfection. We believe these compounds will have significant and broad application in the identification of proteins that are acetylated in vivo and further the understanding of the functional consequences of acetylation in controlling gene regulation.
IGCs are dense collections of proteins that generally exclude nucleic acid, although RNA is found at the periphery of these structures (60). In fact, IGCs are adjacent to sites of active transcription, which is consistent with the prevailing theory that mRNA processing occurs co-transcriptionally (60). DEK has previously been found to be associated with mRNAprocessing factors, although a proteomic analysis of the spliceosome did not identify DEK (61). It is possible though that association of DEK with the spliceosome is dependent on the acetylation state of the cell and may be quite dynamic and transient (62). The data presented in this report suggest the possibility of a previously unknown connection between protein acetylation and mRNA processing. The movement of DEK to the IGC is also consistent with the role of DEK in chromatin remodeling (18), because acetylation of DEK would be expected to interfere with the formation of compact DEK-DNA structures, and hence favor its accumulation in a chromatin-free compartment such as the IGC. Other chromatin-associated architectural proteins are also acetylated, including histones and high mobility group factors (63,64), although acetylation of these proteins has not been shown to result in their subsequent movement to IGCs.
To date, there is no consensus motif for acetylation by either P/CAF or CBP, although they appear to have different recognition sites and usually acetylate different lysine residues within a protein (27,64). As such, it is difficult to predict based on sequence alone which lysine residue within DEK may be acetylated by P/CAF and ultimately responsible for the movement to IGCs. Transfection of GFP-DEK mutants with each lysine residue of interest changed to alanine is one of the most direct methods to identify the site responsible for movement of DEK to IGCs, following treatment with TSA. However, glutathione S-transferase-pull-down experiments, yeast two-hybrid analysis, and native gel electrophoresis demonstrate that DEK can physically associate with itself and exists in several multimeric forms (data not shown). Dimerization would allow mutant forms of DEK to associate with endogenous DEK and "piggyback" to IGCs following TSA treatment (65). This phenomenon therefore masks the identification of the lysine responsible for sub-nuclear movement of DEK using site-directed mutagenesis. As such, small interference RNA experiments are currently underway to identify RNA sequences that can knock FIG. 6. Overexpression of P/CAF, but not CBP, drives DEK into IGCs. A shows a cell co-transfected with vectors encoding GFP-DEK and CBP. B shows a cell co-transfected with vectors encoding GFP-DEK and P/CAF. C shows a cell co-transfected with GFP-DEK and P/CAFexpressing vectors followed by treatment with the P/CAF-specific inhibitor H3-CoA-20-Tat (50 M). down the expression of endogenous DEK, thus limiting interference from endogenous protein and facilitating the use of DEK mutants in transfection experiments.
Our results suggest a mechanism by which the amount of DEK in various compartments of the nucleus could be regulated, and an explanation for its appearance in multiple nuclear fractions (16,26). These findings may also be relevant to the recent characterization of a complex containing both DEK and the histone deacetylase HDAC2 (16). HDAC2 is essential for transcriptional repression and therefore its association with DEK and hDaxx, a cofactor whose regulatory role is ultimately controlled by its phosphorylation state, supports our previous findings suggesting that DEK plays a role in transcriptional repression (15). DEK is a phosphoprotein, and its acetylation may influence, or be influenced by, other post-translational modifications such as phosphorylation; this has been observed for other proteins such as histone H3 and p53 (66,67). In fact, trafficking of SR splicing factors in and out of the IGC is often determined by phosphorylation (68). It is also notable that SR proteins and other factors in IGCs are autoantigens, and that patients harbor antibodies specific for SR phosphoepitopes (69). The acetyllysines found in DEK may contribute to its targeting in autoimmune disease, as post-translational modifications, including acetylation, have been shown to influence autoreactivity (70,71).
In summary, our data, in the context of previously described properties of DEK, suggest that DEK acts in transcriptional repression through recognition of DNA promoter elements and association with other repressor proteins, such as HDAC2 and hDaxx. The acetylation of DEK by P/CAF drives DEK from the transcriptional enhancer through disruption of ionic interactions, and thereby promotes transcriptional activation. DEK then moves to the IGCs, where it could potentially participate in RNA-processing events through association with spliceosome proteins. This model is in support of evidence that multiple steps along the pathway of transcription and gene regulation are coordinated, and that post-translational modifications of proteins like DEK may play a critical role in the integration of these steps. | 8,198.4 | 2005-09-09T00:00:00.000 | [
"Biology"
] |
Timing, drivers and impacts of the historic Masiere di Vedana rock avalanche (Belluno Dolomites, NE Italy)
The “Masiere di Vedana” rock avalanche, located in the Belluno Dolomites (NE Italy) at the foot of Mt. Peron, is reinterpreted as historic on the base of archeological information and cosmogenic 36Cl exposure dates. The deposit is 9 km2 wide, has a volume of ∼ 170 Mm3 corresponding to a pre-detachment rock mass of ∼ 130 Mm3, and has a maximum runout distance of 6 km and an H/L ratio of ∼ 0.2. Differential velocities of the rock avalanche moving radially over different topography and path material lead to the formation of specific landforms (tomas and compressional ridges). In the Mt. Peron crown the bedding is subvertical and includes carbonate lithologies from Lower Jurassic (Calcari Grigi Group) to Cretaceous (Maiolica) in age. The stratigraphic sequence is preserved in the deposit with the formations represented in the boulders becoming younger with distance from the source area. In the release area the bedding, the SSE-verging frontal thrust planes, the NW-verging backthrust planes, the NW–SE fracture planes, and the N– S Jurassic fault planes controlled the failure and enhanced the rock mass fragmentation. The present Mt. Peron crown still shows hundreds-of-metres-high rock prisms bounded by backwall trenches. Cosmogenic 36Cl exposure ages, mean 1.90± 0.45 ka, indicate failure occurred between 340 BCE and 560 CE. Although abundant Roman remains were found in sites surrounding the rock avalanche deposit, none were found within the deposit, and this is consistent with a late Roman or early Middle Ages failure. Seismic and climatic conditions as landslide predisposing factors are discussed. Over the last few hundred years, earthquakes up to Mw = 6.3, including that at 365 CE, have affected the Belluno area. Early in the first millennium, periods of climate worsening with increasing rainfall occurred in the NE Alps. The combination of climate and earthquakes induced progressive long-term damage to the rock until a critical threshold was reached and the Masiere di Vedana rock avalanche occurred.
Sampling was based on the size of the boulders and stable position, single stage of exposure, continuous exposition in the same position (not shifting), no coverage; minimal surface of weathering or erosion (karren encased for at maximum 0.5 cm).
The samples VB3a, VB3c, VB14 were taken on the Mt. Peron slope at different altitude, VB2 on the right bank of the Cordevole, in the Vedana area and VB12, VB13a and VB13b come from the southernmost part of the deposits, in the Roe Alte sector.
The main characteristics of the dated samples are here reported.
VB2. The sample comes from a grey metric boulder on the right bank of Cordevole river. The rock belongs to Calcari Grigi Group and had a network of black calcite veins. Thin section shows that the rock is a peloidal packstone, with micritic matrix. It is possible to recognize fragments of echinoderm, bivalves, spicules of sponges, algae, foraminifers and peloids.
VB3a. The sample comes from a metric boulder of Upper Rosso Ammonitico Fm. on the left side of Cordevole River, at the foothill of Mt. Peron, 300 m over the Peron village. The sample is a pinkish packstone with nodular structure and fragments of Saccocoma.
VB3c. The sample comes from a decametric boulder of Fonzaso Fm. on the left side of Cordevole River, at the foothill of Mt. Peron, 300 m over the Peron village. The boulder is next to the VB3a sample but is way bigger. On this section the sample appears to be a bioclastic peloidal packstone with fragment of echinoderm, spicules of sponges, bivalve and calcareous algae.
VB12.
Sample comes from a grey Vajont limestone metric boulder with evident crinoids and algae situated in the further part of Roe Alte deposits. Thin section shows a packstone with fragment of echinoderm, bryozoan and algae, subordinately bivalves, peloid, and foraminifers. Note that oolites still show a very well preserved radial structure. Table 2, are expressed as percentage concentrations of element oxides for major and minor elements and as parts per million (ppm) for trace elements. In order to include the LOI value (expressed as %) into the sum of major element oxides, analyses were normalised to 100% minus LOI value.
Instrumental precision (defined by several measurements performed on the same sample) is within 0.6% relative for major and minor elements, and within 3% relative for trace elements. The XRF accuracy was checked by reference standards (Govindaraju, 1994) and was within 0.5 wt% for Si, lower than 3% for other major and minor elements, and lower than 5% for trace elements. The lowest detection limits of XRF were within 0.02 wt% for Al2O3, MgO and Na2O, within 0.4 wt% for SiO2, within 0.005 wt% for TiO2, Fe2O3, MnO, CaO, K2O and P2O5 and within a range between 3 and 10 ppm for trace elements.
S3. XRF
Trace metals and REE were determined by Inductively Coupled Plasma-Mass Spectrometry (Thermo Elemental, mod. X-Series II ). Optimisation of the instrumental parameters was performed to achieve best sensitivity, low levels of oxides (CeO + /Ce + <2%) and double charged ions (Ba ++ /Ba + <3%). Mass calibration of the quadrupole was also performed. Major instrumental parameters are as follows: From the bottom up:
S4. ICP-MS
-Bedrock consisting of the Bolago Marl: thickness varies from a minimum of 3 m in the northern side to a maximum of 5 m in the southern margin, with an undulated upper limit that roughly corresponds to the strata surface.
-Glacial till, 0.5-to-2 m thick, composed of rounded to sub-rounded decimetric clast (b-axis spanning from 0.5 to 40 cm) of various lithologies (i.e. flysh, limestone/dolostone and volcanic/metamorphic). Clasts show evidence of incisions and striae and little surficial alteration. The deposit is characterized by the presence of a high amount of silty-clayey matrix, brown-to-grey in colour. No evidence of clasts organization. Upper limit is extremely undulated, erosive in origin.
-The uppermost unit is monogenic, being constituted by angular carbonate clast, varying in size from 1 cm to 1 m, with a lot of coarse sandy matrix. Smaller clasts are more abundant than larger ones, that are grouped in the uppermost part of the deposit. This unit is 2-to-20 m thick | 1,442 | 2020-08-12T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Electronic Structure of Boron Flat Holeless Sheet
The electronic band structure, namely energy band surfaces and densities-of-states (DoS), of a hypothetical flat and ideally perfect, i.e., without any type of holes, boron sheet with a triangular network is calculated within a quasi-classical approach. It is shown to have metallic properties as is expected for most of the possible structural modifications of boron sheets. The Fermi curve of the boron flat sheet is found to be consisted of 6 parts of 3 closed curves, which can be approximated by ellipses representing the quadric energy-dispersion of the conduction electrons. The effective mass of electrons at the Fermi level in a boron flat sheet is found to be too small compared with the free electron mass m0 and to be highly anisotropic. Its values distinctly differ in directions Γ–K and Γ–M: mΓ–K/m0 ≈ 0.480 and mΓ–M/m0 ≈ 0.052, respectively. The low effective mass of conduction electrons, mσ/m0 ≈ 0.094, indicates their high mobility and, hence, high conductivity of the boron sheet. The effects of buckling/puckering and the presence of hexagonal or other type of holes expected in real boron sheets can be considered as perturbations of the obtained electronic structure and theoretically taken into account as effects of higher order.
Why should Boron Sheets be Formed?
There are different reasons in favor of the formation of stable 2-D all-boron structures.They can be divided into several groups.Let's consider them separately.
3-D All-Boron Structures
Boron, the fifth element of the Periodic Table, is located at the intersection of semiconductors and metals.Due to a small covalent radius (only 0.84 Å) and number (only 3) of valence electrons, boron does not form simple three-dimensional structures but crystals with icosahedral clusters with many atoms in the unit cell.At least three all-boron allotropes are known-α-and β-rhombohedral and high-pressure γ-orthorhombic phases-for the experimental phase diagram of boron see Reference [18].In addition, αand β-tetragonal and a number of other boron structures, probably stabilized by the presence of impurities/defects, were reported.Theoretical studies of the five boron crystal structures (α, dhcp, sc, fcc, and bcc) were carried out using the LAPW (linearized plane wave) method in Reference [19].The current state of research on the phase diagram of boron from a theoretical point of view is given in Reference [20].It should be noted that, in the last decade, several new structures of boron allotropes were discovered and some have been disproved.Currently, even the number of allotropes of boron is uncertain.The reason for this is that there are many such structures, all of them complex, and some of them are minimally different from others.A pseudo-cubic tetragonal boron recently discovered under high-pressure and high-temperature conditions may also be another form of boron allotropes; however, its structure, studied in Reference [21] using a DFT (density functional theory) calculation, is abnormal compared to other allotropes of boron in many ways.
The almost regular icosahedron B 12 with B-atoms at the vertices (Figure 1) serves as the main structural motif not only of boron allotropes but also of all known boron-rich compounds.In the boron icosahedron, each atom is surrounded by 5 neighboring atoms and, as usual, with one more atom from the rest of the crystal.For this reason, the average coordination number of a boron-rich lattice ranges from 5 to 5+1=6.
Condens.Matter 2019, 4, x 2 of 22 many atoms in the unit cell.At least three all-boron allotropes are known-α-and β-rhombohedral and high-pressure γ-orthorhombic phases-for the experimental phase diagram of boron see Reference [18].In addition, α-and β-tetragonal and a number of other boron structures, probably stabilized by the presence of impurities/defects, were reported.Theoretical studies of the five boron crystal structures (α, dhcp, sc, fcc, and bcc) were carried out using the LAPW (linearized plane wave) method in Reference [19].The current state of research on the phase diagram of boron from a theoretical point of view is given in Reference [20].It should be noted that, in the last decade, several new structures of boron allotropes were discovered and some have been disproved.Currently, even the number of allotropes of boron is uncertain.The reason for this is that there are many such structures, all of them complex, and some of them are minimally different from others.A pseudo-cubic tetragonal boron recently discovered under high-pressure and high-temperature conditions may also be another form of boron allotropes; however, its structure, studied in Reference [21] using a DFT (density functional theory) calculation, is abnormal compared to other allotropes of boron in many ways.
The almost regular icosahedron B12 with B-atoms at the vertices (Figure 1) serves as the main structural motif not only of boron allotropes but also of all known boron-rich compounds.In the boron icosahedron, each atom is surrounded by 5 neighboring atoms and, as usual, with one more atom from the rest of the crystal.For this reason, the average coordination number of a boron-rich lattice ranges from 5 to 5+1=6.However, an isolated regular boron icosahedron is an electron-deficient structure-the total number of valence electrons of 12 boron atoms is not sufficient to fill all the covalent bonding orbitals corresponding to such a cage-molecule.Thus, if it were a stable structure, then intra-icosahedral bonds would be only partially covalent but also to some extent metallic.As for boron icosahedra constituting real crystals, it was clearly demonstrated, for example, for β-rhombohedral boron [22][23][24][25][26][27], that they are stabilized by the presence of point structural defects-vacancies and interstitials, in other words, both partially filled regular or irregular boron sites-at very high concentrations.For example, in the case of β-rhombohedral boron, the total effect of such a stabilization is to increase the average number of boron atoms inside the unit cell from the ideal value 105 (Figure 2) to 106.7 [28], which leads to the saturation of the electron-deficient orbitals and a 5-or 6-coordination number for the majority of constituent boron atoms.However, an isolated regular boron icosahedron is an electron-deficient structure-the total number of valence electrons of 12 boron atoms is not sufficient to fill all the covalent bonding orbitals corresponding to such a cage-molecule.Thus, if it were a stable structure, then intra-icosahedral bonds would be only partially covalent but also to some extent metallic.As for boron icosahedra constituting real crystals, it was clearly demonstrated, for example, for β-rhombohedral boron [22][23][24][25][26][27], that they are stabilized by the presence of point structural defects-vacancies and interstitials, in other words, both partially filled regular or irregular boron sites-at very high concentrations.For example, in the case of β-rhombohedral boron, the total effect of such a stabilization is to increase the average number of boron atoms inside the unit cell from the ideal value 105 (Figure 2) to 106.7 [28], which leads to the saturation of the electron-deficient orbitals and a 5-or 6-coordination number for the majority of constituent boron atoms.many atoms in the unit cell.At least three all-boron allotropes are known-α-and β-rhombohedral and high-pressure γ-orthorhombic phases-for the experimental phase diagram of boron see Reference [18].In addition, α-and β-tetragonal and a number of other boron structures, probably stabilized by the presence of impurities/defects, were reported.Theoretical studies of the five boron crystal structures (α, dhcp, sc, fcc, and bcc) were carried out using the LAPW (linearized plane wave) method in Reference [19].The current state of research on the phase diagram of boron from a theoretical point of view is given in Reference [20].It should be noted that, in the last decade, several new structures of boron allotropes were discovered and some have been disproved.Currently, even the number of allotropes of boron is uncertain.The reason for this is that there are many such structures, all of them complex, and some of them are minimally different from others.A pseudo-cubic tetragonal boron recently discovered under high-pressure and high-temperature conditions may also be another form of boron allotropes; however, its structure, studied in Reference [21] using a DFT (density functional theory) calculation, is abnormal compared to other allotropes of boron in many ways.
The almost regular icosahedron B12 with B-atoms at the vertices (Figure 1) serves as the main structural motif not only of boron allotropes but also of all known boron-rich compounds.In the boron icosahedron, each atom is surrounded by 5 neighboring atoms and, as usual, with one more atom from the rest of the crystal.For this reason, the average coordination number of a boron-rich lattice ranges from 5 to 5+1=6.However, an isolated regular boron icosahedron is an electron-deficient structure-the total number of valence electrons of 12 boron atoms is not sufficient to fill all the covalent bonding orbitals corresponding to such a cage-molecule.Thus, if it were a stable structure, then intra-icosahedral bonds would be only partially covalent but also to some extent metallic.As for boron icosahedra constituting real crystals, it was clearly demonstrated, for example, for β-rhombohedral boron [22][23][24][25][26][27], that they are stabilized by the presence of point structural defects-vacancies and interstitials, in other words, both partially filled regular or irregular boron sites-at very high concentrations.For example, in the case of β-rhombohedral boron, the total effect of such a stabilization is to increase the average number of boron atoms inside the unit cell from the ideal value 105 (Figure 2) to 106.7 [28], which leads to the saturation of the electron-deficient orbitals and a 5-or 6-coordination number for the majority of constituent boron atoms.Thus, all-boron 5-and 6-coordinated regular 3-D lattices cannot exist, but one can naturally imagine 2-D flat or buckled/puckered structures with a triangular arrangement of atoms with and without periodically spaced hexagonal (rarely quadric, pentagonal, or heptagonal) holes.Obviously, most of them are expected to be (semi)metallic.At the moment, a number of different atomic geometries for quasi-planar boron sheets are theoretically proposed [4][5][6][7].
Using the ab initio evolutionary structure prediction approach, a novel reconstruction of the α-boron (111) surface with the lowest energy was discovered [52].In this reconstruction, all single interstitial boron atoms bridge neighboring icosahedra by polar covalent bonds, and this satisfies the electron counting rule, leading to the reconstruction-induced semiconductor-metal transition.The new stable boron sheet, called H-borophene, proposed in Reference [53] and constructed by tiling 7-membered rings side by side, should be especially noted.
As for the irregularly distributed holes, they have to be considered as defects.The research [54] is focused on the formation of local vacancy defects and pinholes in a 2-D boron structure-the so-called γ 3 -type boron monolayer.
Boron Quasi-Planar Clusters
Indirectly, the reality of boron sheets can be proved by the presence of various quasi-planar boron clusters, i.e., finite fragments of sheets, in gaseous state and also boron nanotubes, which are the fragments of boron sheets wrapped into cylinders (see for example, the review from Reference [55] and the references therein).Experimental and theoretical evidences that small boron clusters prefer planar structures were reported in Reference [8].
In addition, recently, a highly stable quasi-planar boron cluster B 36 of hexagonal shape with a central hexagonal hole [9], which is viewed as a potential basis for an extended 2-D boron sheet, and boron fullerene B 40 [10], which can be imagined as the fragment of a boron sheet wrapped into the sphere, were discovered experimentally.Photoelectron spectroscopy in combination with ab initio calculations have been carried out to probe the structure and chemical bonding of the B 27 − cluster [56].
A comparison between the experimental spectrum and the theoretical results reveals a 2-D global minimum with a triangular lattice containing a tetragonal defect and two low-lying 2-D isomers, each with a hexagonal vacancy.
Liquid Boron Structure
There are also evidences [57][58][59] that liquid boron does not consist of icosahedra but mainly of quasi-planar clusters.Ab initio MD (molecular-dynamics) simulations of the liquid boron structure yields that at short length scales, B 12 icosahedra, a main structural motif of boron crystals and boron-rich solid compounds, are destroyed upon melting.Although atoms form an open packing, they maintain the 6-coordination.
According to measurements of the structure factor and the pair distribution function, the melting process is associated with relatively small changes in both the volume and the short-range order of the system.Results of a comprehensive study of liquid boron with X-ray measurements of the atomic structure and dynamics coupled with ab initio MD simulations also show that there is no evidence of survival of the icosahedral arrangements into the liquid, but many atoms appear to adopt a geometry corresponding to the quasi-planar pyramids.
Growing of Boron Sheets
Currently, some of the 2-D materials beyond graphene also are used [60].But for non-layer structured 3-D materials such as boron, it is a real challenge to fabricate the corresponding 2-D nanosheets due to the absence of the driving force of anisotropic growth.There are rare examples of some 2-D metal nanosheets; see for example, the recent report [61] on single-crystalline Rh nanosheets with a 3-5 atomic layers thickness.Boron sheets are expected to be metallic as well.Thus, this should increase the chances of their actual formation.
In this regard, we have to mention the recent report [62] in which large-scale single-crystalline ultrathin boron nanosheets have been fabricated via the thermal decomposition of diborane.
It is obvious that an infinite boron sheet does not exist in nature and that its finite pieces are not stable compared to bulk and/or nanotubular structures of boron.To grow boron sheets, one needs a substrate which binds boron atoms strongly to avoid bulk phases while, at the same time, provides sufficient mobility of boron atoms on the substrate.Possible candidates for substrates are surfaces of (close-packed transition) metals.The feasibility of different synthetic methods for 2-D boron sheets was assessed [47,[63][64][65][66] using ab initio calculations, i.e., "synthesis in theory" approach.A large-scale boron monolayer has been predicted with mixed hexagonal-triangular geometry obtained via either depositing boron atoms directly on the surface or soft landing of small planar B-clusters.
Recently, a series of planar boron allotropes with honeycomb topology has been proposed [67].Although the free-standing honeycomb B allotropes are higher in energy than α-sheets, these calculations show that a metal substrate can greatly stabilize these new allotropes.
The atomically thin, crystalline 2-D boron sheets, i.e., borophene, were actually synthesized [16] on silver surfaces under ultrahigh-vacuum conditions (Figure 3).An atomic-scale characterization, supported by theoretical calculations, revealed structures reminiscent of fused boron clusters with multiple scales of anisotropic, out-of-plane buckling.Unlike bulk boron allotropes, borophene shows metallic characteristics that are consistent with predictions of a highly anisotropic 2-D metal.In this regard, we have to mention the recent report [62] in which large-scale single-crystalline ultrathin boron nanosheets have been fabricated via the thermal decomposition of diborane.
It is obvious that an infinite boron sheet does not exist in nature and that its finite pieces are not stable compared to bulk and/or nanotubular structures of boron.To grow boron sheets, one needs a substrate which binds boron atoms strongly to avoid bulk phases while, at the same time, provides sufficient mobility of boron atoms on the substrate.Possible candidates for substrates are surfaces of (close-packed transition) metals.The feasibility of different synthetic methods for 2-D boron sheets was assessed [47,[63][64][65][66] using ab initio calculations, i.e., "synthesis in theory" approach.A large-scale boron monolayer has been predicted with mixed hexagonal-triangular geometry obtained via either depositing boron atoms directly on the surface or soft landing of small planar B-clusters.
Recently, a series of planar boron allotropes with honeycomb topology has been proposed [67].Although the free-standing honeycomb B allotropes are higher in energy than α-sheets, these calculations show that a metal substrate can greatly stabilize these new allotropes.
The atomically thin, crystalline 2-D boron sheets, i.e., borophene, were actually synthesized [16] on silver surfaces under ultrahigh-vacuum conditions (Figure 3).An atomic-scale characterization, supported by theoretical calculations, revealed structures reminiscent of fused boron clusters with multiple scales of anisotropic, out-of-plane buckling.Unlike bulk boron allotropes, borophene shows metallic characteristics that are consistent with predictions of a highly anisotropic 2-D metal.The experimental work in Reference [17] shows that 2-D boron sheets can be grown epitaxially on a Ag(111) substrate.Two types of boron sheets, β12 and χ3, both exhibiting a triangular lattice but with different arrangements of periodic holes, were observed by scanning tunneling microscopy.DFT simulations indicate that both sheets are planar without obvious vertical undulations.
According to the ab initio calculations [68], the periodic nanoscale 1-D undulations can be preferred in borophenes on concertedly reconstructed Ag(111).This "wavy" configuration is more stable than its planar form on flat Ag(111) due to an anisotropic high bending flexibility of borophene.An atomic-scale ultrahigh vacuum scanning tunneling microscopy characterization of a borophene grown on Ag(111) reveals such undulations, which agree with the theory.Although the lattice is coherent within a borophene island, the undulations nucleated from different sides of the island form a distinctive domain boundary when they are laterally misaligned.
Recently, borophene synthesis monitored in situ by low-energy electron microscopy, diffraction, and scanning tunneling microscopy and modeled using ab initio theories has been reported in Reference [69].By resolving the crystal structure and phase diagram of borophene on Ag(111), the domains are found to remain nanoscale for all growth conditions.However, by growing borophene on Cu(111) surfaces, large single-crystal domains (up to 100 μm) are obtained.The crystal structure is a novel triangular network with a concentration of hexagonal vacancies of η= 1/5.These experimental data together with ab initio calculations indicate a charge-transfer coupling to the substrate without significant covalent bonding.
Boron on a Pb(110) surface was simulated [70] by using ab initio evolutionary methodology and found that 2-D Pmmn structures can be formed because of a good lattice matching.By increasing the The experimental work in Reference [17] shows that 2-D boron sheets can be grown epitaxially on a Ag(111) substrate.Two types of boron sheets, β 12 and χ 3 , both exhibiting a triangular lattice but with different arrangements of periodic holes, were observed by scanning tunneling microscopy.DFT simulations indicate that both sheets are planar without obvious vertical undulations.
According to the ab initio calculations [68], the periodic nanoscale 1-D undulations can be preferred in borophenes on concertedly reconstructed Ag(111).This "wavy" configuration is more stable than its planar form on flat Ag(111) due to an anisotropic high bending flexibility of borophene.An atomic-scale ultrahigh vacuum scanning tunneling microscopy characterization of a borophene grown on Ag(111) reveals such undulations, which agree with the theory.Although the lattice is coherent within a borophene island, the undulations nucleated from different sides of the island form a distinctive domain boundary when they are laterally misaligned.
Recently, borophene synthesis monitored in situ by low-energy electron microscopy, diffraction, and scanning tunneling microscopy and modeled using ab initio theories has been reported in Reference [69].By resolving the crystal structure and phase diagram of borophene on Ag(111), the domains are found to remain nanoscale for all growth conditions.However, by growing borophene on Cu(111) surfaces, large single-crystal domains (up to 100 µm) are obtained.The crystal structure is a novel triangular network with a concentration of hexagonal vacancies of η = 1/5.These experimental data together with ab initio calculations indicate a charge-transfer coupling to the substrate without significant covalent bonding.
Boron on a Pb(110) surface was simulated [70] by using ab initio evolutionary methodology and found that 2-D Pmmn structures can be formed because of a good lattice matching.By increasing the thickness of 2-D boron, the three-bonded graphene-like P2 1 /a boron was revealed to possess lower molar energy, indicating the more stable 2-D boron.
The influence of the excess negative charge on the stability of borophenes-2-D boron crystals-was examined in Reference [71] using an analysis of the decomposition of the binding energy of a given boron layer into contributions coming from boron atoms that have different coordination numbers to understand how the local neighborhood of an atom influences the overall stability of the monolayer structure.The decomposition is done for the α-sheet related family of structures.It was found a preference for 2-D boron crystals with very small or very high charges per atom.Structures with intermediate charges are not energetically favorable.It has been also found a clear preference in terms of binding energy for the experimentally observed γ-sheet and δ-sheet structures that is almost independent on the considered excess of negative charge of the structures.
Two-dimensional boron monolayers have been extensively investigated using ab initio calculations [72].A series of boron bilayer sheets with pillars and hexagonal holes have been constructed.Many of them have a lower formation energy than an α-sheet boron monolayer.However, the distribution and arrangement of hexagonal holes can cause a negligible effect on the stability of these structures.
Recently, an ab initio study [73] of the effect of electron doping on the bonding character and stability of borophene for the neutral system has revealed previously unknown stable 2-D structures: ε-B and ω-B.The chemical bonding characteristic in this and other boron structures is found to be strongly affected by an extra charge.Beyond a critical degree of electron doping, the most stable allotrope changes from ε-B to a buckled honeycomb structure.Additional electron doping, mimicking a transformation of boron to carbon, causes a gradual decrease in the degree of buckling of the honeycomb lattice.
Applications
In general, the formation of a boron sheet would have wide applications in techniques because the boronizing of metal surfaces is known as an effective method of formation of protective coatings [74].In particular, quasi-planar bare boron surfaces can serve as lightweight protective armor.
Boron sheets are expected to be very good conductors with potential applications in nanoelectronics, e.g., in high-temperature nanodevices.Boron sheets could have potential as metallic interconnects and wiring in electronic devices and IC (integrated circuits) [41].
A theoretical investigation [32] of both the molecular physisorption and dissociative atomic chemisorption of hydrogen by boron sheets predicts physisorption as the leading mechanism at moderate temperatures and pressures.Further calculations performed on hydrogen-storage properties showed that the decoration of pristine sheets with the right metal elements provide additional absorption sites for hydrogen [47].Thus, boron sheets can serve for good nanoreservoirs of fuel hydrogen used in green-energy production.
Due to the high neutron-capture cross section of 10 B nuclei, solid-state boron allotropies, as well as boron-rich compounds and composites, are good candidates to be used as neutron-protectors.Boron sheets will be especially useful as an absorbing component in composite neutron shields [75].Materials with the high bulk concentration of B-atoms usually are nonmetals and, therefore, not suitable for electromagnetic shielding purposes.However, frequently, the simultaneous protection against both the neutron irradiation and electromagnetic waves is needed, in particular, because neutron absorption by 10 B nuclei is accompanied by a gamma-radiation.For this reason, in the boron-containing nanocomposites designed for neutron-protection, it is necessary to introduce some foreign components with metallic conductivity.Utilizing of the metallic boron sheet as a component may resolve this problem [76].
Recently, the mechanical properties of 2-D boron-borophene-have been studied by ab initio calculations [77].The borophene with a 1/6 concentration of hollow hexagons is shown to have the Föppl-von Kármán number per unit area over twofold higher than graphene's value.The record high flexibility combined with excellent elasticity in boron sheets can be utilized for designing advanced composites and flexible devices.The transfer of undulated borophene onto an elastomeric substrate would allow for high levels of stretchability and compressibility with potential applications for emerging stretchable and foldable devices [68].
The boron sheets are quite inert to oxidization and interact only weakly with their substrate.For this reason, they may find applications in electronic devices in the future [17].
In large-scale single-crystalline ultrathin boron nanosheets fabricated [62] via the thermal decomposition of diborane, the strong combination performances of low turn-on field-of-field emissions, favorable electron transport characteristics, high sensitivity, and fast response time to illumination reveal that the nanosheets have high potential as applications in field-emitters, interconnects, IC, and optoelectronic devices.
Some other applications of borophene are described in recent reviews [78,79].
Available Electron Structure Calculations
Because boron sheets are of great academic and practical interests, their electronic structure is studied intensively.Most of them are found to be metallic.
Let us note that there are some indirect evidences for metallic conduction in boron sheets.The absence of icosahedra in liquid boron affects its properties including electrical conductivity [57,59], and it behaves like a metal.
The very stable quasi-planar clusters of boron B n for n up to 46, considered to be fragments of bare boron quasi-planar surfaces, have to possess a singly occupied bonding orbital [29].Assuming that conduction band of the infinite surface is generated from the HOMO (highest-occupied-molecular-orbital) of a finite fragment, it means the partial filling of the conduction band, i.e., the metallic mechanism of conductance.
Diamond-like, metallic boron crystal structures were predicted in Reference [80] employing so-called decoration schemes of calculations, in which the normal and hexagonal diamond-like frameworks are decorated with extra atoms across the basal plane.They should have an overly high DoS near the Fermi level.This result may provide a plausible explanation for not only the anomalous superconductivity of boron under high pressure but also the nonmetal-metal transition in boron structures.
In Reference [45], it was presented the results of a theoretical study of the phase diagram of elemental boron showing that, at high pressures, boron crystallizes in quasi-layered bulk phases characterized by in-plane multicenter bonds and out-of-plane bonds.All these structures are metallic.
Band structures of a series of planar boron allotropes with honeycomb topologies recently proposed in Reference [67], exhibit Dirac cones at the K-point, the same as in graphene.In particular, the Dirac point of honeycomb boron sheet locates precisely on the Fermi level, rendering it as a topologically equivalent material to graphene.Its Fermi velocity is of 6•10 5 m/s, close to that of graphene.However, in H-borophene [53] constructed by tiling 7-membered rings side by side, a Dirac point appeared at about 0.33 eV below the Fermi level.
According to some theoretical results [36,37,47,48], boron sheets can be not only metal but in some cases also an almost zero band-gap semiconductor depending on its atomistic configuration.Probably, the semiconducting character is related to the nonzero thickness of buckled/puckered 2-D boron sheets or double-layered structures.
Some of borophenes can be magnetic.Based on a tight-binding model of 8-Pmmn borophene developed in Reference [81], it is confirmed that the crystal hosts massless Dirac fermions and the Dirac points are protected by symmetry.Strain is introduced into the model, and it is shown to induce a pseudomagnetic field vector potential and a scalar potential.The 2-D antiferromagnetic boron, designated as M-boron, has been predicted [82] using an ab initio evolutionary methodology.M-boron is entirely composed of B 20 clusters in a hexagonal arrangement.Most strikingly, the highest valence band of M-boron is isolated, strongly localized, and quite flat, which induces spin polarization on either cap of the B 20 cluster.This flat band originates from the unpaired electrons of the capping atoms and is responsible for magnetism.M-boron is thermodynamically metastable.
Boron sheets grown on metal surfaces are predicted [63] to be strongly doped with electrons from the substrate, showing that a boron sheet is an electron-deficient material.As mentioned, by simulating [70] boron on Pb(110) surface using ab initio evolutionary methodology, it was found that the 2-D Dirac Pmmn boron can be formed.Unexpectedly, by increasing the thickness of 2-D boron, the three-bonded graphene-like structure P2 1 /a was revealed to possess double anisotropic Dirac cones.It is the most stable 2-D boron with particular Dirac cones.The puckered structure of P2 1 /a boron results in the peculiar Dirac cones.
This present work aims to provide more detailed calculations on the electronic structure of boron sheet including not only DoS but also band structure, electron effective mass, Fermi curve, etc.
Theoretical Approach
We use an original theoretical method of the quasi-classical type [83] based on the proof that the electronic system of any substance is a quasi-classical system; that is, its exact and quasi-classical energy spectra are close to each other.
As for the determination of the materials' electron structure, the quasi-classical method is reduced to the LCAO (linear combination of atomic orbitals) method with a basis set of quasi-classical atomic orbitals.Within the initial quasi-classical approximation, the solution of the corresponding mathematical problem consists of two main stages: 1.
The construction of matrix elements for secular equation, which, within the initial quasi-classical approximation, reduces to a geometric task of determining the volume of the intersection of three spheres [89], and 2.
The solving of the secular equation, which determines the crystalline electronic energy spectrum [90].
This method has been successfully applied to electronic structure calculations performed for various modifications of boron nitride, BN, one of the most important boron compounds [91][92][93][94], as well as metal-doped β-rhombohedral boron [95].
The maximal relative error of a quasi-classical calculation itself, i.e., without errors arisen from input data, is estimated as approx.4%.
As for the input data, the quasi-classical method of band structure calculations requires them to be in the form of quasi-classical parameters of constituent atoms: the inner and outer radii of the classical turning points for electron states in atoms, the radii of layers of the quasi-classical averaging of potential in atoms, and the averaged values of the potential within corresponding radial layers of atoms.These quantities for an isolated boron atom (as well as for other atoms) in the ground state were pre-calculated in Reference [96] on the basis of ab initio theoretical, namely Hartree-Fock (HF), values of electron levels [97].Thus, in our case, the accuracy of the quasi-classical parameters is determined by that of the HF approach.
As is known, the electronic structure of any atomic system is influenced by its geometric structure and vice versa.Often, one starts with the question of how to find the most stable idealized atomic configuration.Despite this, here, we will directly begin with the electron band structure of a flat triangular boron sheet, neglecting the buckling/puckering effects and hexagonal holes (see references above), assuming that in real sheets (e.g., grown on metal surfaces), their buckled/puckered or vacant parts would not be arranged in a periodic manner, and thus, they should be regarded as perturbations which can be taken into account within a higher-order perturbation theory.
Multi-layered (buckled) boron sheets can be imagined by substituting metal Me atoms again with boron B atoms in the layered structure of a metal boride MeB 2 .Analyses of the isolated layer instead of multilayered structure also seems to be quite sufficient for the initial approximation because, in such structures, only intra-layer conductivity is metallic, while interlayer conductivity is nonmetallic due to larger interlayer bond lengths if compared with those in layers.
The 2-D unit cell of the perfectly flat boron sheet without hexagonal holes (Figure 4) is a rhomb with an acute angle of β = π/3, i.e., with a single lattice constant a (Figure 5).Let There is a number of different values for the lattice constant of a boron sheet suggested theoretically.For the self-consistency of calculations, we use a = 3.37 a.u. of length, i.e., 1.78 Å, which corresponds to the B-B pair interatomic potential in the same quasi-classical approximation [91].
Condens.Matter 2019, 4, x 8 of 22 instead of multilayered structure also seems to be quite sufficient for the initial approximation because, in such structures, only intra-layer conductivity is metallic, while interlayer conductivity is nonmetallic due to larger interlayer bond lengths if compared with those in layers.The 2-D unit cell of the perfectly flat boron sheet without hexagonal holes (Figure 4) is a rhomb with an acute angle of 3 / π β = , i.e., with a single lattice constant a (Figure 5).Into the basis set of a simple LCAO formalism, it has been included core (1s), fully (2s), and partially filled (2p) valence and empty excited (2p) atomic orbitals.
Experimentally, there are 10 detected different existing states in the boron atom.To minimize the calculation errors related to the approximation of the crystalline potential by the superposition of atomic potentials, we choose orbitals with the same symmetry as the partially filled valence orbital, i.e., 2p, with the closest energy level and, consequently, with the closest classical turning point radii of electrons.
Taking into account the degeneracy of atomic energy levels by magnetic and spin quantum numbers of 2, 2, 6, and 6, respectively, we can state that this set of 4 orientation-averaged orbitals replaces 16 angularly dependent atomic orbitals.
The secular equation takes the form instead of multilayered structure also seems to be quite sufficient for the initial approximation because, in such structures, only intra-layer conductivity is metallic, while interlayer conductivity is nonmetallic due to larger interlayer bond lengths if compared with those in layers.The 2-D unit cell of the perfectly flat boron sheet without hexagonal holes (Figure 4) is a rhomb with an acute angle of 3 / π β = , i.e., with a single lattice constant a (Figure 5). .Then, the radius-vectors of the lattice sites are Into the basis set of a simple LCAO formalism, it has been included core (1s), fully (2s), and partially filled (2p) valence and empty excited (2p) atomic orbitals.
Experimentally, there are 10 detected different existing states in the boron atom.To minimize the calculation errors related to the approximation of the crystalline potential by the superposition of atomic potentials, we choose orbitals with the same symmetry as the partially filled valence orbital, i.e., 2p, with the closest energy level and, consequently, with the closest classical turning point radii of electrons.
Taking into account the degeneracy of atomic energy levels by magnetic and spin quantum numbers of 2, 2, 6, and 6, respectively, we can state that this set of 4 orientation-averaged orbitals replaces 16 angularly dependent atomic orbitals.
The secular equation takes the form instead of multilayered structure also seems to be quite sufficient for the initial approximation because, in such structures, only intra-layer conductivity is metallic, while interlayer conductivity is nonmetallic due to larger interlayer bond lengths if compared with those in layers.The 2-D unit cell of the perfectly flat boron sheet without hexagonal holes (Figure 4) is a rhomb with an acute angle of 3 / π β = , i.e., with a single lattice constant a (Figure 5). .Then, the radius-vectors of the lattice sites are Into the basis set of a simple LCAO formalism, it has been included core (1s), fully (2s), and partially filled (2p) valence and empty excited (2p) atomic orbitals.
Experimentally, there are 10 detected different existing states in the boron atom.To minimize the calculation errors related to the approximation of the crystalline potential by the superposition of atomic potentials, we choose orbitals with the same symmetry as the partially filled valence orbital, i.e., 2p, with the closest energy level and, consequently, with the closest classical turning point radii of electrons.
Taking into account the degeneracy of atomic energy levels by magnetic and spin quantum numbers of 2, 2, 6, and 6, respectively, we can state that this set of 4 orientation-averaged orbitals replaces 16 angularly dependent atomic orbitals.
The secular equation takes the form Into the basis set of a simple LCAO formalism, it has been included core (1s), fully (2s), and partially filled (2p) valence and empty excited (2p) atomic orbitals.
Experimentally, there are 10 detected different existing states in the boron atom.To minimize the calculation errors related to the approximation of the crystalline potential by the superposition of atomic potentials, we choose orbitals with the same symmetry as the partially filled valence orbital, i.e., 2p, with the closest energy level and, consequently, with the closest classical turning point radii of electrons.
Taking into account the degeneracy of atomic energy levels by magnetic and spin quantum numbers of 2, 2, 6, and 6, respectively, we can state that this set of 4 orientation-averaged orbitals replaces 16 angularly dependent atomic orbitals.
The secular equation takes the form where S(α Formally, these expressions contain infinite series.However, within the initial quasi-classical approximation, due to the finiteness of quasi-classical atomic radii, only a finite number of summands differs from zero.Thus, the series are terminated unambiguously.
The input data in a.u. in the form of quasi-classical parameters of boron atoms are shown in Tables 1 and 2. As it was mentioned above, the parameters of electron states fully or partially filled with electrons in the ground state were calculated on the basis of the theoretical, namely HF, values of electron levels, while for the excited state, we use the experimental value [98], which, however, is modulated by the multiplier of order of 1, 0.984151, leading to the coincidence between experimental and HF-theoretical first ionization potentials for an isolated boron atom: 0.304945 and 0.309856 a.u., respectively.Note that for the ground state, the 1s 2 2s 2 2p configuration is considered, not the 1s 2 2s2p 2 configuration, from which the ground state and first excited states of some boron-like ions arise [99].All the matrix elements and electron energies are calculated in points α 1 k 2 of the reciprocal space with parameters −1/2 ≤ α 1 , α 2 ≤ +1/2, i.e., within a rhombic unit cell of the reciprocal lattice (Figure 7).The first Brillouin zone for a boron flat sheet has a hexagonal shape.Of course, the areas of hexagonal and rhombic unit cells are equal (Figure 8).The unit cell is covered evenly by 1,002,001 points, at which the energy is found as a solution to the generalized eigenvalue problem.
The calculation has been performed in atomic units, a.u.Then, the results have been converted according to the relations: 1 a.u. of energy = 27.212eV and 1 a.u. of length = 0.52918 Å.
Based on the resulting data set, we have constructed the electron band surfaces, the distribution of DoS in the bands, and the Fermi curve, emphasizing that, instead of the Fermi surface, the characteristics of 3-D crystals, 2-D crystals are characterized by Fermi curves.
Results and Discussion
In the quasi-classically calculated electronic structure of the flat boron sheet, we resolve four bands of energy.We have to emphasize that for simplicity, the band surfaces below are shown over a rhombic (not hexagonal) domain.
The lowest energy band 1 E surface is found to be almost a plane placed at the level of The unit cell is covered evenly by 1,002,001 points, at which the energy is found as a solution to the generalized eigenvalue problem.
The calculation has been performed in atomic units, a.u.Then, the results have been converted according to the relations: 1 a.u. of energy = 27.212eV and 1 a.u. of length = 0.52918 Å.
Based on the resulting data set, we have constructed the electron band surfaces, the distribution of DoS in the bands, and the Fermi curve, emphasizing that, instead of the Fermi surface, the characteristics of 3-D crystals, 2-D crystals are characterized by Fermi curves.
Results and Discussion
In the quasi-classically calculated electronic structure of the flat boron sheet, we resolve four bands of energy.We have to emphasize that for simplicity, the band surfaces below are shown over a rhombic (not hexagonal) domain.
The lowest energy band 1 E surface is found to be almost a plane placed at the level of The unit cell is covered evenly by 1,002,001 points, at which the energy is found as a solution to the generalized eigenvalue problem.
The calculation has been performed in atomic units, a.u.Then, the results have been converted according to the relations: 1 a.u. of energy = 27.212eV and 1 a.u. of length = 0.52918 Å.
Based on the resulting data set, we have constructed the electron band surfaces, the distribution of DoS in the bands, and the Fermi curve, emphasizing that, instead of the Fermi surface, the characteristics of 3-D crystals, 2-D crystals are characterized by Fermi curves.
Results and Discussion
In the quasi-classically calculated electronic structure of the flat boron sheet, we resolve four bands of energy.We have to emphasize that for simplicity, the band surfaces below are shown over a rhombic (not hexagonal) domain.
The lowest energy band E 1 surface is found to be almost a plane placed at the level of E 1min = E 1max = −276.21eV.Thus, the chemical shift against the core 1s atomic level E 1s = −209.41eV equals to δE 1 = E 1s − E 1 = 66.80 eV.Dispute the shift of the B 1s atomic level, it retains an order of magnitude after transforming in an electronic band of the boron flat sheet.The lowest-lying band E 1 is fully filled with electrons.
The band E 2 is the highest fully filled band (Figure 9) with bottom at E 2min = −37.21eV and top at E 2max = −19.85eV, i.e., with a width of ∆E 2 = E 2max − E 2min = 17.36 eV.Note that, this range of energies is comparable in order of magnitude with a valence 2s atomic level of E 2s = −13.46eV.
The band E 3 (Figure 10) is partially filled, i.e., partially empty, with a bottom at E 3min = −23.08eV and top at E 3max = −17.16eV, i.e., with a width of ∆E 3 = E 3max − E 3min = 5.92 eV.Note that this range of energies is comparable in order of magnitude with a valence 2p atomic level E 2p = −8.43eV.
The band E 4 (Figure 11) is empty, with a bottom at E 4min = −17.65 eV and top at E 4max = −8.08 eV, i.e., with a width of ∆E 4 = E 4max − E 4min = 9.57 eV.Note that this range of energies is comparable in order of magnitude with the modulated value of the excited 2p level E 2p = −5.84eV.
Note that this range of energies is comparable in order of magnitude with a valence 2p atomic level The band 4 E (Figure 11) is empty, with a bottom at eV 65 .17 The band 4 E (Figure 11) is empty, with a bottom at eV 65 .17 To compare easily our results with the literature data, in addition to the presentation of the band structure using the contour plots of the whole Brillouin zone in Figures 12 and 13, we plot the band energies (as well as their second derivatives and corresponding parabolic approximations) along the main lines of symmetry.
Our quasi-classical calculation of the crystalline band structure, like any other approach also utilizing HF parameters of constituting atoms, cannot determine the absolute values of energy parameters with a high accuracy.By this reason, the above mentioned value E Fermi cannot be used directly to determine the electron work function of the boron sheet.This goal can be achieved only after corrections are made to include the electron-correlation and to exclude the electron-self-interacting effects, which have to allow an accurate determination of the position for the vacuum level of energy E = 0.However, the shifting of the reference point at the energy axis does not affect the energy differences, which are credible as are determined with quite an acceptable accuracy.They are collected in Table 3.
conduction electrons at the Fermi level, are bounded inside the 2-D crystal.This result once more evidences the correctness of the calculations performed in this work.The total width of valence and conduction bands equal to eV 79 .17 .As expected, it is negligible if compared with that of a conduction band.
To compare easily our results with the literature data, in addition to the presentation of the band structure using the contour plots of the whole Brillouin zone in Figures 12 and 13, we plot the band energies (as well as their second derivatives and corresponding parabolic approximations) along the main lines of symmetry. .As expected, it is negligible if compared with that of a conduction band.
To compare easily our results with the literature data, in addition to the presentation of the band structure using the contour plots of the whole Brillouin zone in Figures 12 and 13, we plot the band energies (as well as their second derivatives and corresponding parabolic approximations) along the main lines of symmetry.The Fermi curve of a boron flat sheet is found to be consisting of parts of a number of closed curves including concentric ones, the centre of which can be approximated by ellipse with long and short axes along the Γ-K and Γ-M directions, respectively (Figure 14).The Fermi curve of a boron flat sheet is found to be consisting of parts of a number of closed curves including concentric ones, the centre of which can be approximated by ellipse with long and short axes along the Γ-Κ and Γ-Μ directions, respectively (Figure 14).As it is known, for semiconductors, the effective mass conception referring to band zone curvatures is used to approximate the wave-vector dependence of electron energies near the band gap.As for metals, the Fermi surface curvature can be used to estimate the effective mass of conduction electrons and hence their mobility.
The ellipse representing a branch of intersection between the 3 E -band surface with the Fermi E -plane can be described by the equation As it is known, for semiconductors, the effective mass conception referring to band zone curvatures is used to approximate the wave-vector dependence of electron energies near the band gap.As for metals, the Fermi surface curvature can be used to estimate the effective mass of conduction electrons and hence their mobility.
The ellipse representing a branch of intersection between the E 3 -band surface with the E Fermi -plane can be described by the equation where k Γ-K and k Γ-M are wave-number components along perpendicular axes Γ-K and Γ-M and F = E Fermi − E 3min = 3.66 eV is the Fermi energy.The effective masses m Γ-K and m Γ-M can be estimated from this equation if it is rewritten in the form of a normalized ellipse equation where k Γ-K 0 and k Γ-M 0 are half-axes in the directions Γ-K and Γ-M, respectively: Then, one can calculate the effective mass of the conduction electrons m σ , i.e., electrons placed at the Fermi level, from the relation The effective electron mass at the Fermi level reveals a significant anisotropy.For the central ellipse, the effective masses are m Γ-K /m 0 ≈ 0.480 and m Γ-M /m 0 ≈ 0.052, with m σ /m 0 ≈ 0.094, where m 0 is the free electron mass.
The Fermi curve of a boron flat sheet is found to consist of 6 parts of 3 ellipses representing the quadric energy-dispersion of the conduction electrons; see Figure 15., where 0 m is the free electron mass.
The Fermi curve of a boron flat sheet is found to consist of 6 parts of 3 ellipses representing the quadric energy-dispersion of the conduction electrons; see Figure 15., where 0 m is the free electron mass.
The Fermi curve of a boron flat sheet is found to consist of 6 parts of 3 ellipses representing the quadric energy-dispersion of the conduction electrons; see Figure 15.The overall shapes of DoSs obtained by us and previously by others, especially, in References [31,33,47] are rather similar but with some differences.It is understandable as these structures are buckled/puckered or flat variants of the same triangular lattice with or without hexagonal holes (Figure 17).The discrepancies may be attributed with the perturbations related to the mentioned structural changes and differences between the computing methods utilized, as well as the The overall shapes of DoSs obtained by us and previously by others, especially, in References [31,33,47] are rather similar but with some differences.It is understandable as these structures are buckled/puckered or flat variants of the same triangular lattice with or without hexagonal holes (Figure 17).The discrepancies may be attributed with the perturbations related to the mentioned structural changes and differences between the computing methods utilized, as well as the difference between projected onto in-plane or out-of-plane orbitals' densities-of-states (PDoSs) from the total DoS of the sheet.e overall shapes of DoSs obtained by us and previously by others, especially, in References 47] are rather similar but with some differences.It is understandable as these structures are d/puckered or flat variants of the same triangular lattice with or without hexagonal holes 17).The discrepancies may be attributed with the perturbations related to the mentioned ral changes and differences between the computing methods utilized, as well as the nce between projected onto in-plane or out-of-plane orbitals' densities-of-states (PDoSs) from al DoS of the sheet.The Fermi curve of the monolayer flat boron sheet approximated by parts of concentric c ellipse-like curves could be considered as a certain kind of topological analog of the Fermi su (Figure 18) in the form of a half-torus and distorted cylinder of magnesium diboride MgB2 [ which is believed to be a structural analog of the hypothetical multilayered boron sheet, where m Me atoms in a metal diboride MeB2 structure are replaced by B-atoms themselves.The low effective mass of conduction electrons at the Fermi level indicates a high mobil electrons and, hence, a high conductivity of the flat boron sheet.
Conclusions
In summary, we can conclude that the electronic band structure of a boron flat triangular has been calculated within a quasi-classical approach for the quasi-classical structural param (B-B bond length) of [31], (b) from Reference [33], and (c) modified from Reference [37].
The Fermi curve of the monolayer flat boron sheet approximated by parts of concentric closed ellipse-like curves could be considered as a certain kind of topological analog of the Fermi surface (Figure 18) in the form of a half-torus and distorted cylinder of magnesium diboride MgB 2 [100], which is believed to be a structural analog of the hypothetical multilayered boron sheet, where metal Me atoms in a metal diboride MeB 2 structure are replaced by B-atoms themselves.The Fermi curve of the monolayer flat boron sheet approximated by parts of concentric closed ellipse-like curves could be considered as a certain kind of topological analog of the Fermi surface (Figure 18) in the form of a half-torus and distorted cylinder of magnesium diboride MgB2 [100], which is believed to be a structural analog of the hypothetical multilayered boron sheet, where metal Me atoms in a metal diboride MeB2 structure are replaced by B-atoms themselves.The low effective mass of conduction electrons at the Fermi level indicates a high mobility of electrons and, hence, a high conductivity of the flat boron sheet.
Conclusions
In summary, we can conclude that the electronic band structure of a boron flat triangular sheet has been calculated within a quasi-classical approach for the quasi-classical structural parameter (B-B bond length) of The low effective mass of conduction electrons at the Fermi level indicates a high mobility of electrons and, hence, a high conductivity of the flat boron sheet.
Conclusions
In summary, we can conclude that the electronic band structure of a boron flat triangular sheet has been calculated within a quasi-classical approach for the quasi-classical structural parameter (B-B bond length) of a = 1.78Å.It is shown to have metallic properties like most of other modifications of boron sheets.
The Fermi curve of a boron flat sheet consists of parts of 3 ellipses with semimajor and semiminor axes along the Γ-K and Γ-M directions, respectively.The effective electron mass at the Fermi level reveals a distinct anisotropy: m Γ−K /m 0 ≈ 0.480 and m Γ−M /m 0 ≈ 0.052, with m σ /m 0 ≈ 0.094 for conduction mass.The low effective mass of conduction electrons indicates a high mobility of electrons and, hence, a high conductivity of flat boron sheets.
The shapes of density-of-states obtained here for flat holeless boron sheets and previously calculated ones are rather similar, which is understandable as these structures are buckled/puckered or flat but with hexagonal holes, variants of the same triangular lattice.The remaining discrepancies may be attributed to the perturbations associated with the mentioned structural changes and differences in the used models.
Figure 1 .
Figure 1.A regular icosahedron B12 with B-atoms at the vertices.
Figure 2 .
Figure 2.An idealized unit cell of a β-rhombohedral boron crystal.
Figure 1 .
Figure 1.A regular icosahedron B 12 with B-atoms at the vertices.
Figure 1 .
Figure 1.A regular icosahedron B12 with B-atoms at the vertices.
Figure 2 .
Figure 2.An idealized unit cell of a β-rhombohedral boron crystal.
Figure 2 .
Figure 2.An idealized unit cell of a β-rhombohedral boron crystal.
Figure 3 .
Figure 3.The borophene structure on a silver substrate: the top and side views of the monolayer structure (unit cell indicated by the box) [16].
Figure 3 .
Figure 3.The borophene structure on a silver substrate: the top and side views of the monolayer structure (unit cell indicated by the box) [16].
1 .
Then, the radius-vectors of the lattice sites are a number of different values for the lattice constant of a boron sheet suggested theoretically.For the self-consistency of calculations, we use 37 .3 = a a.u. of length, i.e., 1.78 Å, which corresponds to the B-B pair interatomic potential in the same quasi-classical approximation [91].
Figure 4 .
Figure 4.A boron perfect flat sheet without hexagonal holes.
Figure 5 .
Figure 5.A 2-D rhombic unit cell of a boron flat sheet.
Figure 6 .
Figure 6.The vectors of a reciprocal lattice of a boron flat sheet.
Figure 4 .
Figure 4.A boron perfect flat sheet without hexagonal holes. Let a number of different values for the lattice constant of a boron sheet suggested theoretically.For the self-consistency of calculations, we use 37 .3 = a a.u. of length, i.e., 1.78 Å, which corresponds to the B-B pair interatomic potential in the same quasi-classical approximation [91].
Figure 4 .
Figure 4.A boron perfect flat sheet without hexagonal holes.
Figure 5 .
Figure 5.A 2-D rhombic unit cell of a boron flat sheet.
Figure 6 .
Figure 6.The vectors of a reciprocal lattice of a boron flat sheet.
Figure 5 .
Figure 5.A 2-D rhombic unit cell of a boron flat sheet. Let a number of different values for the lattice constant of a boron sheet suggested theoretically.For the self-consistency of calculations, we use 37 .3 = a a.u. of length, i.e., 1.78 Å, which corresponds to the B-B pair interatomic potential in the same quasi-classical approximation [91].
Figure 4 .
Figure 4.A boron perfect flat sheet without hexagonal holes.
Figure 5 .
Figure 5.A 2-D rhombic unit cell of a boron flat sheet.
Figure 6 .
Figure 6.The vectors of a reciprocal lattice of a boron flat sheet.
Figure 6 .
Figure 6.The vectors of a reciprocal lattice of a boron flat sheet.
Figure 7 .
Figure 7.The transform from the ) , ( 2 1 α α domain to the ) , ( y x k k domain of reciprocal space.
Figure 8 .
Figure 8.The hexagonal (first Brillouin zone) and rhombic unit cells of a reciprocal lattice of a flat boron sheet.
.
Dispute the shift of the B 1s atomic level, it retains an order of magnitude after transforming in an electronic band of the boron flat sheet.The lowest-lying band 1 E is fully filled with electrons.The band 2 E is the highest fully filled band (Figure9) with bottom at , this range of energies is comparable in order of magnitude with a valence 2s atomic level of eV
Figure 8 .
Figure 8.The hexagonal (first Brillouin zone) and rhombic unit cells of a reciprocal lattice of a flat boron sheet.
.
Dispute the shift of the B 1s atomic level, it retains an order of magnitude after transforming in an electronic band of the boron flat sheet.The lowest-lying band 1 E is fully filled with electrons.The band 2 E is the highest fully filled band (Figure9) with bottom at that, this range of energies is comparable in order of magnitude with a valence 2s atomic level of eV
Figure 8 .
Figure 8.The hexagonal (first Brillouin zone) and rhombic unit cells of a reciprocal lattice of a flat boron sheet.
Figure 9 . 3 E
Figure 9.The band 2 E energy surface (a) and contour plots (b) over a rhombic unit cell.
Figure 9 .
Figure 9.The band E 2 energy surface (a) and contour plots (b) over a rhombic unit cell.
Figure 9 .
Figure 9.The band 2 E energy surface (a) and contour plots (b) over a rhombic unit cell.
he band 3 E
(Figure10) is partially filled, i.e., partially empty, with a bottom at
Figure 10 .
Figure 10.The band 3 E energy surface (a) and contour plots (b) over a rhombic unit cell.
Figure 10 .Figure 10 .
Figure 10.The band E 3 energy surface (a) and contour plots (b) over a rhombic unit cell.
Figure 11 .
Figure 11.The band 4 E energy surface (a) and contour plots (b) over a rhombic unit cell.Between bands 1 E and 2 E , there is a very wide energy gap of eV 00 .239 max 1 min 2 12 = − = Δ E E E , while pairs of bands 2 E and 3 E , and 3 E and 4 E overlap each with other, i.e. there are obtained pseudo-gaps of eV 23 .3 max 2 min 3 23 − = − = Δ E E E and eV 49 .0 max 3 min 4 34 − = − = Δ E E E .
Figure 11 .
Figure 11.The band E 4 energy surface (a) and contour plots (b) over a rhombic unit cell.Between bands E 1 and E 2 , there is a very wide energy gap of ∆E 12 = E 2min − E 1max = 239.00eV, while pairs of bands E 2 and E 3 , and E 3 and E 4 overlap each with other, i.e. there are obtained pseudo-gaps of ∆E 23 = E 3min − E 2max = −3.23 eV and ∆E 34 = E 4min − E 3max = −0.49eV.The Fermi level is found at E Fermi = −19.42eV, within the part of the band E 3 without overlapping with other bands.This result confirms the metallicity of the boron sheet.Thus, all the electron energies are found to be negative.It means that all electrons, including conduction electrons at the Fermi level, are bounded inside the 2-D crystal.This result once more evidences the correctness of the calculations performed in this work.The total width of valence and conduction bands equal to ∆E V = E Fermi − E 2min = 17.79 eV and ∆E C = E 4max − E Fermi = 11.34 eV, respectively.The upper valence band width is ∆E VU = E Fermi − E 2max = 0.43 eV.As expected, it is negligible if compared with that of a conduction band.To compare easily our results with the literature data, in addition to the presentation of the band structure using the contour plots of the whole Brillouin zone in Figures12 and 13, we plot the band energies (as well as their second derivatives and corresponding parabolic approximations) along the main lines of symmetry.Our quasi-classical calculation of the crystalline band structure, like any other approach also utilizing HF parameters of constituting atoms, cannot determine the absolute values of energy parameters with a high accuracy.By this reason, the above mentioned value E Fermi cannot be used directly to determine the electron work function of the boron sheet.This goal can be achieved only after corrections are made to include the electron-correlation and to exclude the electron-self-interacting effects, which have to allow an accurate determination of the position for the vacuum level of energy
Figure 12 .
Figure 12.The section of the conduction band surface along the main diagonal of a rhombic unit cell (direction Γ-Κ) of reciprocal space (in atomic units).
Figure 13 .
Figure 13.The section of the conduction band surface along a small diagonal of a rhombic unit cell (direction Γ-Μ) of reciprocal space (in atomic units).
Figure 12 .
Figure 12.The section of the conduction band surface along the main diagonal of a rhombic unit cell (direction Γ-K) of reciprocal space (in atomic units).
Figure 12 .
Figure 12.The section of the conduction band surface along the main diagonal of a rhombic unit cell (direction Γ-Κ) of reciprocal space (in atomic units).
Figure 13 .
Figure 13.The section of the conduction band surface along a small diagonal of a rhombic unit cell (direction Γ-Μ) of reciprocal space (in atomic units).
Figure 13 .
Figure 13.The section of the conduction band surface along a small diagonal of a rhombic unit cell (direction Γ-M) of reciprocal space (in atomic units).
Figure 14 .
Figure 14.Curves of the intersection of the band surface with the Fermi plane in neighboring rhombic unit cells of reciprocal space.
number components along perpendicular axes Γ-Κ and Γ-Μ and eV equation if it is rewritten in the form of a normalized ellipse equation axes in the directions Γ-Κ and Γ-Μ, respectively:
Figure 14 .
Figure 14.Curves of the intersection of the band surface with the Fermi plane in neighboring rhombic unit cells of reciprocal space.
The effective electron mass at the Fermi level reveals a significant anisotropy.For the central ellipse, the effective masses are 480
Figure 15 .Figure 15 .
Figure 15.The Fermi curve of a boron flat sheet.DoS within the bands 2 E , 3 E , and 4 E against electron energy renormalized to the Fermi level Fermi E E E − → are presented in Figure 16 in two different scales for the convenient consideration.As for the band 1 E , DoS within this band is proportional to the Diracfunction, [eV]) 42 .256 ( ~+ E δ , with the accordingly renormalized argument eV 79 .256 eV) 42 .19 ( eV 21 .276 eV 21 .276 = − − − → − .
Figure 15 .Figure 16 .
Figure 15.The Fermi curve of a boron flat sheet.DoS within the bands 2 E , 3 E , and 4 E against electron energy renormalized to the Fermi level Fermi E E E − → are presented in Figure 16 in two different scales for the convenient consideration.As for the band 1 E , DoS within this band is proportional to the Diracfunction, [eV]) 42 .256 ( ~+ E δ , with the accordingly renormalized argument eV 79 .256 eV) 42 .19 ( eV 21 .276 eV 21 .276 = − − − → − .
Figure 16 .
Figure 16.The density-of-electron-states renormalized to the Fermi level in a valence band and the lower and upper conduction bands of a boron flat sheet in two different scales: general view (a) and in Fermi level vicinity (b).
gure 16 .
The density-of-electron-states renormalized to the Fermi level in a valence band and the wer and upper conduction bands of a boron flat sheet in two different scales: general view (a) and Fermi level vicinity (b).
is shown to have metallic properties like most of modifications of boron sheets.
is shown to have metallic properties like most of other modifications of boron sheets.
1 , α 2 ) and H(α 1 , α 2 ) are 16 × 16 matrices of overlapping integrals and single-electron Hamiltonian, respectively, reducible to 4 × 4 matrices.E(α 1 , α 2 ) is a required electron energy band.About the parameters α 1 and α 2 , see below.This equation has 4 different real and negative roots E m (α 1 , α 2 ), m = 1, 2, 3, 4. It can be demonstrated that they exhibit all the different solutions of the corresponding secular equation with 16 × 16 matrices.Within the initial quasi-classical approximation, these matrix elements can be found from the relations shown in the Appendix A.
Table 1 .
The inner and outer classical turning point radii r i and r i of electrons in boron atom.
Table 2 .
The radii r λ of radial layers of quasi-classical averaging of potential in boron atoms and the averaged values of potential ϕ λ .
Table 3 .
The band widths and (pseudo)gaps between bands. | 14,714 | 2019-03-03T00:00:00.000 | [
"Physics"
] |
Dynamic fingerprint of fractionalized excitations in single-crystalline Cu3Zn(OH)6FBr
Beyond the absence of long-range magnetic orders, the most prominent feature of the elusive quantum spin liquid (QSL) state is the existence of fractionalized spin excitations, i.e., spinons. When the system orders, the spin-wave excitation appears as the bound state of the spinon-antispinon pair. Although scarcely reported, a direct comparison between similar compounds illustrates the evolution from spinon to magnon. Here, we perform the Raman scattering on single crystals of two quantum kagome antiferromagnets, of which one is the kagome QSL candidate Cu3Zn(OH)6FBr, and another is an antiferromagnetically ordered compound EuCu3(OH)6Cl3. In Cu3Zn(OH)6FBr, we identify a unique one spinon-antispinon pair component in the E2g magnetic Raman continuum, providing strong evidence for deconfined spinon excitations. In contrast, a sharp magnon peak emerges from the one-pair spinon continuum in the Eg magnetic Raman response once EuCu3(OH)6Cl3 undergoes the antiferromagnetic order transition. From the comparative Raman studies, we can regard the magnon mode as the spinon-antispinon bound state, and the spinon confinement drives the magnetic ordering.
I. INTRODUCTION
When subject to strong quantum fluctuation and geometrical frustration, the quantum spin system may not develop into a magnetically ordered state [1,2], but a quantum spin liquid (QSL) ground state at zero temperature.[3][4][5][6] QSL has no classic counterpart as it exhibits various topological orders characterized by the long-range entanglement pattern.[7][8][9] The lattice of the spin-1/2 kagome network of cornersharing triangles is a long-sought platform for antiferromagnetically interacting spins to host a QSL ground state.[10][11][12][13][14][15][16] Herbersmithite [ZnCu 3 (OH) 6 Cl 2 ] is the first promising kagome QSL candidate, [3,[16][17][18][19][20][21][22][23] in which no long-range magnetic order was detected down to low temperature, [17,18] and inelastic neutron scattering on single crystals revealed a magnetic continuum, as a hallmark of fractionalized spin excitations.[20,22] Up to date, most, if not all, experimental information on the nature of kagome QSL relies on a single compound of Herbertsmithite.Considering the fact that a lattice distortion away from a perfect kagome structure has recently been confirmed in Herbersmithite, [24,25] which stimulates investigations on the subtle magneto-elastic effect in the kagome materials, [26,27] an alternative realization of the QSL compound with the ideal kagome lattice is still in urgent need.Zn-Barlowite [Cu 3 Zn(OH) 6 FBr] is such a can-didate for a kagome QSL ground state [28][29][30][31][32][33][34][35][36][37][38] with no lattice distortion being reported yet.Measurements on the powder samples of Zn-Barlowite indicate the absence of longrange magnetic order or spin freezing down to temperatures of 0.02 K, four orders of magnitude lower than the Curie-Weiss temperature.[30,32] Besides the absence of long-range magnetic order down to low temperature, the fractionalized spin excitations, i.e. spinons, in the spectroscopy is essential evidence for the long-range entanglement pattern in QSL.However, spectroscopic evidence for the deconfined spinon excitations in Zn-Barlowite is still lacking, in part due to unavailable single-crystal samples.Note that for Zn-Barlowite Cu 4 -x Zn x (OH) 6 FBr the doping parameter x ≤ 0.56, in the previously reported Zn-Barlowite single crystal samples, does not belong to the QSL regime.[33-35, 37, 38] In this work, we report the synthesis of the single crystals of Cu 4 -x Zn x (OH) 6 FBr (x = 0.82) of millimeter size, which is in the QSL regime, and the spin dynamics revealed by the inelastic light scattering on these samples.We confirm the ideal kagome-lattice structure by the angle-resolved polarized Raman responses and second-harmonic-generation (SHG), and observe a magnetic Raman continuum in our crystal samples.Raman scattering has previously been reported for Herbertsmithite, [19] and the overall continuum agreed with that in Zn-Barlowite.Although it was not discussed, the lattice distortion in Herbertsmithite was evident by the anisotropic angle dependent Raman responses [19] and may account for the difference from our results in details.In the theory the Raman spectrum of the kagome QSL contains the one-pair component of spinon-antispinon excitations with a peculiar power-law behavior at low frequency, serving as the fingerprint of spinons.[39] Our measured magnetic Raman continuum agrees well with the theoretical prediction, revealing the fractionalized spin excitation in Cu 3.18 Zn 0.82 (OH) 6 FBr.
To demonstrate the one-pair spinon dynamics in the kagome QSL even more evidently, we perform a control experiment on a kagome antiferromagnet EuCu 3 (OH) 6 Cl 3 , which suffers a spinon confinement as a transition taking places from a paramagnetic phase to a q = 0 type 120 • non-collinear antiferromagnetic order (AFM) ground state below the Néel temperature T N = 17 K. [40][41][42] We observe a magnon peak in the AFM state, which can be regarded as the spinon confinement in the magnetically ordered state as schematically summarized in Fig. 1.The magnon excitation emerges from the one-pair continuum, firstly reported in our work, and can be regarded as the bound state of the spinonantispinon excitations.
To study changes in the crystal structures of Cu3Zn and Cu4, we track the temperature evolution of Raman spectra in the two compounds.Cu3Zn and Cu4 at high temperature crystallize the same space group P 6 3 /mmc.[28,30] We didn't observe the structural phase transition in Cu3Zn from the Raman scattering down to low temperature (Supplementary Sec-tion2 and 3).Cu4 transforms to orthorhombic Pnma below 265 K, characterized by changes in the relative occupancies of the interlayer Cu 2+ site.[31,[33][34][35] The splitting of phonon peaks in Cu4 due to the superlattice folding in the orthorhombic Pnma phase is resolved for several modes [Supplementary Fig. S4].Cu3Zn displays sharp E 2g modes at 125 cm −1 for in-plane relative movements of Zn 2+ .The corresponding modes for the interlayer Cu 2+ in Cu4 are broad at 290 K due to the randomly distributed interlayer Cu 2+ and split into two peaks below the structural transition temperature.Cu3Zn has no Raman-active mode related to the kagome Cu 2+ vibra-tions, indicating the kagome layer remains substantially intact as the inversion center of Cu 2+ sites is evident.The kagome layers in Cu4 are distorted, signaled by a new phonon mode for the kagome Cu 2+ vibration at 62 cm −1 .Besides sharp phonon modes, we observe a Raman continuum background in Cu3Zn, particularly at low frequency, signifying substantial magnetic excitations.
Previous X-ray and neutron refinements of the crystal structure suggest ideal kagome planes in Cu3Zn.SHG confirmed the parity symmetry of the crystallographic structure in Barlowite 2 Cu 4 (OH) 6 FBr and Zn-Barlowite Cu 3.66 Zn 0.33 (OH) 6 FBr.In Supplementary Section7, we also reveal the inversion symmetry by SHG in our single crystals of Cu3Zn.To further exclude subtle local symmetry lowering or lattice distortions, we perform the angle-resolved polarization-dependent Raman measurements of Cu3Zn for magnetic and lattice vibration modes.[44,45] The threefold rotation symmetry of the kagome lattice leads to isotropic angle dependence in the XX configuration both for A 1g and E 2g components, XY and X-only configurations for the E 2g component; it also gives rise to the angle θ dependence of cos 2 θ in the X-only configuration for the A 1g component.We find that the angle dependence of Raman responses, in particular for the magnetic continuum at low frequency, the Br − E 2g phonon, and O 2− A 1g phonon modes, fit the theoretical curves very well [Supplementary Section4], confirming the threefold rotational symmetry of the kagome lattice in the dynamical Raman responses of the lattice vibrations and magnetic excitations.Combining the X-ray and neutron refinement, [30][31][32]38] we conclude that Cu3Zn manifests a structurally ideal realization of layered spin-1/2 Cu 2+ kagome-lattice planes.
Having established the absence of a sharp anomaly in the thermodynamic properties [Supplementary Section1] and the lack of an emergent magnetic order with the weak symmetry breaking in the angle-dependent polarized Raman response, which is the first step to a QSL, we now present our main results of the magnetic Raman continuum in Cu3Zn with subtracting phonon contribution, as shown in Fig. 2. The susceptibility is related to the Raman intensity I(ω) = (1 + n(ω))χ (ω) with the bosonic temperature factor n(ω). Fig. 2a, b and c are the A 1g magnetic Raman response in Cu3Zn, which measures the thermal fluctuation of the interacting spins on the kagome lattice.[46][47][48] We can see that the A 1g channel is activated only at high temperatures, disappears at low temperatures, behaving as the thermally activated excitations.At high temperatures, the Raman spectra exhibit the quasielastic scattering that is common in the inelastic light scattering for the spin systems.[48] The maximum in the Raman response function decreases from 60 cm −1 at room temperature to 30 cm −1 at 110 K, and the magnetic intensity becomes hardly resolved at low temperatures below 50 K.The integrated Raman susceptibility χ (T ) in Fig. 2b fits the thermally activated function, ∝ e −ω * /T with ω * = 53 cm −1 , different from the power-law temperature dependence of the quasielastic scattering in Herbertsmithite.[19] The temperature dependence of the A 1g magnetic Raman susceptibilities χ A1g (ω, T ) in Cu3Zn distributes the main spectral weight among the frequency region less than 400 cm −1 and the tem-perature range above 50 K in Fig. 2c.
Different from the A 1g channel, the pronounced E 2g magnetic Raman continuum in Cu3Zn persists at low temperatures (Fig. 2d), indicating the quantum fluctuation of the kagome spin-1/2 system.Along with the theoretical work, [39] we schematically decompose the E 2g Raman continuum into two components, which have the maximum around 150 cm −1 and the higher one around 400 cm −1 , respectively.We denote them as spin excitations for one spin-antispinon pair (one-pair) and two spinon-antispinon pair (two-pair) excitations, respectively.[39] Like the two-magnon scattering in the antiferromagnet, [48,49] the two-pair component doesn't show a significantly non-monotonic temperature dependence as reducing temperatures.The substantial low energy one-pair component has a more pronounced non-monotonic temperature dependence.It increases with the temperature decreasing from 290 K to 50 K and decreases with further temperature reduce as shown in Figs.2d, e, and f.The frequency and temperature dependence of the E 2g magnetic Raman susceptibilities χ E2g (ω, T ) distributes the main spectral weight among the frequency region less than 400 cm −1 , and reaches the maximum at around 150 cm −1 and 50 K, as shown in Fig. 2f.We also observed the Fano effect for the E 2g F − phonon peak at 173 cm −1 in Cu3Zn [Supplementary Section3], whose asymmetric lineshape provides an additional probe of the magnetic continuum.
The one-pair component in the E 2g Raman continuum is crucial as it has an origin in the spinon excitation in the kagome QSL from the perspective of theory.[39] With the incoming and outgoing light polarizations êin and êout , magnetic Raman scattering measures the spin-pair (two-spin-flip) dynamics in terms of the Raman tensor [39,44,50,51] where the summation runs over ij for the nearest neighbor bonds r ij for the S i and S j on the kagome lattice.At zero temperature, the magnetic Raman susceptibility is given as denotes the matrix element for the transition between the ground state |0 and the excited state |f .
denotes the density of state (DOS) for the Raman tensor associated excitations.Introducing the spinon operator f iσ in QSL, the spin-pair operator in the Raman tensor is rewritten in terms of two pairs of spinon-antispinon excitations Besides the two-pair excitation, the magnetic Raman continuum contains the one-pair spinon ex- , where χ = f † iσ f jσ is the spinon mean-field hopping amplitude.[39] As shown in Fig. 2d, the one-pair component in the measured Raman susceptibility in the E 2g has the maximum at 150 cm −1 (J), and extends up to 400 cm −1 (2.6J) at low temperatures.The twopair component has the maximum at 400 cm −1 (2.6 J) and the cut-off around 750 cm −1 (4.9 J).In totality, the mentioned features (maxima and cut-offs) of one-and two-pair excitations in the E 2g measured Raman response in Cu3Zn (Fig. 2 d) overall agree well with the theoretical calculation for the kagome QSL state.[39] In more detail, the one-pair component dominates the E 2g magnetic Raman continuum at low frequency.It displays the power-law behavior up to 70 cm −1 , with a significantly nonmonotonic temperature dependence, as shown in Figs. 3. As lowering the temperature, the E 2g continuum at low frequency increases above 50 K and decreases below 50 K.The lowenergy continuum evolves from a sublinear behavior T α with α < 1 to a superlinear one T α with α > 1 as reducing the temperature.A central question for the kagome spin liquid is whether a spin gap exist.The results of the spin gap in Herbertsmithite are controversial due to the difficulty of singling out the kagome susceptibility.[21,23] Previous results on the powder samples of Cu3Zn suggest a small spin gap, [30,32] and measurements on the single-crystal samples would be of great interest.If such a gap exists, the power-law behavior of the E 2g magnetic Raman continua sets an upper bound for the spin gap of 2 meV.
The temperature-dependent magnetic continua of Cu3Zn in Figs.2d, e, and f, and Fig. 3 imply the maximal spin fluctuations at the characteristic temperature 50 K.The maximum of the kagome spin fluctuations in Cu3Zn signifies the spin singlet forming, [2,52] but is masked by the inter-layer Cu 2+ moments in the thermodynamic properties [Supplementary Section1].It can be revealed by the Knight shift related to the kagome spins in the nuclear magnetic resonace measurements.[30] In contrast to significant energy dependence in magnetic Raman continuum in Cu3Zn in Figs. 2 and 3, the scattered neutron signal in Herbertsmithite is overall insensitive to energy transfer, rather flat above 1.5 meV, but increases significantly at low-energy scattering due to the interlayer Cu 2+ ions.[20,22] The interlayer Cu 2+ ions distribute spatially away from each other, and the spin-pair magnitude among themselves and between them and kagome spins is weak, giving rise to a negligible matrix element in Raman tensors.So different from the neutron scattering, the Raman scattering is not sensitive to the inter-layer Cu 2+ at low energies, advantageous to the detection of kagome spins.Furthermore, inelastic neutron scattering in Herbertsmithite measures the magnetic continuum up to 2-3 J, [20] the same energy range as the one-pair Raman component in Cu3Zn.These results suggest that the magnetic Raman continuum originates from the kagome-plane spins, and the onepair component has an origin of spinon excitations.
The theoretical calculation for kagome Dirac spin liquid (DSL) predicts the power-law behavior for the Raman susceptibility in the E 2g channel at low frequency.[39] The onepair spinon excitation in DSL gives the linear density of state D 1P ∝ ω.The matrix element turns out to be exactly zero for all one-pair excitations in the mean field Dirac Hamiltonian.As a result a Raman spectrum that scales as ω 3 was predicted.[39] However, the vanishing of the matrix element is somewhat accidental and depends on the assumption of a DSL in a Heisenberg model in an ideal kagome lattice.Any deviation from the ideal DSL state, e.g. a small gap in the ground state, [30,32] DM interactions, or other effects of perturbations, [26,53] changes the wave functions and may result in a constant matrix element M (ω).In that case, the Raman spectrum will be simply proportional to the DOS of the one-pair component D 1P which is linear in ω.From our fitting for Cu3Zn in Fig. 3, we find that α = 1.3 when approaching zero temperature.The existence of a small gap in the spinon spectrum may explain this discrepancy.We also note that according to the theory [39] the A 1g and A 2g contributions to the one-pair continuum are the forth-order effect, much smaller than the E 2g contributions.This explains the invisible one-pair continuum in the A 1g and A 2g channels.
Figure 4 presents a control Raman study on the magnetic ordered kagome antiferromagnet EuCu3, which has the antiferromagnetic superexchange J 10 meV, half of the value in Cu3Zn.EuCu3 belongs to the atacamite family with the perfect kagome lattice and has the q = 0 type 120 • ordered spin configuration below T N due to a large Dzyaloshinski-Moriya (DM) interaction [Supplementary Section8].[40,42,[54][55][56] Above the ordering temperature T N = 17 K, the magnetic Raman continuum in the E g channel displays the extended continuum, similar to that in Zn-Barlowite at 4 K as shown in Fig. S10 in Supplementary Section6.This indicates the strong magnetic fluctuations in EuCu3.The less pronounced lowenergy continuum excitations in EuCu 3 (OH) 6 Cl 3 indicate the suppression of the quantum fluctuation due to a large DM interaction.The low energy excitation in the ordered state is the spin-wave excitation, i.e. magnon, and the E g Raman scatter-ing measures one-and two-magnon excitations for the noncollinear 120 • spin configuration as detailed in the Methods section, leading to a sharp magnon peak at 72 cm −1 superimposing on the two-magnon continuum.In this sense, the AFM transition may be thought of as a confinement transition.The comparative studies between Cu3Zn and EuCu3 are sketched in Fig. 1, demonstrating the spinon deconfinement and confinement, respectively, in the different ground states.
III. CONCLUSIONS
Our Raman scattering studies compare the spin dynamics in the kagome QSL compound Cu3Zn and magnetically ordered antiferromagnet EuCu3.In contrast to a sharp magnon peak in EuCu3, the overall magnetic Raman scattering in Cu3Zn agrees well with the theoretical prediction for a spin liquid state.The spinon continuum is evident, providing the strongest evidence yet for the kagome QSL ground state in Cu3Zn.On the material side, Zn-Barlowite provides an ideal structural realization of the kagome lattice, and the available single crystal samples stimulate future systematical studies of the kagome QSL.Along with Herbertsmithite, the singlecrystalline Zn-Barlowite stands able to provide considerable insight into singling out the intrinsic properties of the intrinsic nature of the kagome QSL, without deceiving by the material chemistry details.
METHODS
Sample preparation and characterization.High qualified single crystals of Zn-Barlowite was grown by a hydrothermal method similar to crystal growth of herbertsmithite.[57,58] CuO (0.6 g), ZnBr 2 (3 g), and NH 4 F (0.5 g) and 18 ml deionized water were sealed in a quartz tube and heated between 200 • C and 140 • C by a two-zone furnace.After three months, we obtained millimeter-sized single crystal samples.The value of x in Cu 4 -x Zn x (OH) 6 FBr has been determined as 0.82 by Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES).The single-crystal X-ray diffraction has been carried out at room temperature by using Cu source radiation ( λ = 1.54178Å) and solved by the Olex2.PC suite programs.[59] The structure and cell parameters of Cu 4 -x Zn x (OH) 6 FBr are in coincidence with the previous report on polycrystalline samples.[30,32] For Barlowite(Cu 4 (OH) 6 FBr), the mixture of CuO (0.6 g), MgBr 2 (1.2 g), and NH 4 F (0.5 g) was transferred into Teflon-lined autoclave with 10 ml water.The autoclave was heated up to 260 • C and cooled to 140 • C after two weeks.A similar growth condition to Barlowite was applied for the growth of EuCu 3 (OH) 6 Cl 3 with staring materials of EuCl 3 • 6 H 2 O (2 g) and CuO (0.6 g).
Measurement methods.Our thermodynamical measure-ments were carried out on the Physical Properties Measurement System (PPMS, Quantum Design) and the Magnetic Property Measurement System (MPMS3, Quantum Design).
The temperature-dependent Raman spectra are measured in a backscattering geometry using a home-modified Jobin-Yvon HR800 Raman system equipped with an electron-multiplying charged-coupled detector (CCD) and a 50× objective with long working distance and numerical aperture of 0.45.The laser excitation wavelength is 514 nm from an Ar + laser.The laser-plasma lines are removed using a BragGrate bandpass filter (OptiGrate Corp.), while the Rayleigh line is suppressed using three BragGrate notch filters (BNFs) with an optical density 4 and a spectral bandwidth ∼5-10 cm −1 .The 1800 lines/mm grating enables each CCD pixel to cover 0.6 cm −1 .The samples are cooled down to 30 K using a Montana cryostat system under a vacuum of 0.4 mTorr and down to 4 K using an attoDRY 1000 cryogenic system.All the measurements are performed with a laser power below 1 mW to avoid sample heating.The temperature is calibrated by the Stocksanti-Stocks relation for the magnetic Raman continuum and phonon peaks.The intensities in two cryostat systems are matched by the Raman susceptibility.The polarized Raman measurements with light polarized in the ab kagome plane of samples were performed in parallel (XX), perpendicular (XY ), and X-only polarization configurations [Supplementary Section4].
SHG measurements were performed using a homemade confocal microscope in a back-scattering geometry.A fundamental wave centered at 800 nm was used as excitation source, which was generated from a Ti-sapphire oscillator (Chameleon Ultra II) with an 80 MHz repetition frequency and a 150 fs pulse width.After passing through a 50x objective, the pump beam was focused on the sample with a diameter of 2 µm.The scattering SHG signals at 400 nm were collected by the same objective and led to the entrance slit of a spectrometer equipped with a thermoelectrically cooled CCD.Two shortpass filters were employed to cut the fundamental wave.
Magnon peak in Raman response for q = 0 AFM state.We consider a kagome lattice antiferromagnet with the DM interaction where summation runs over nearest neighbor bonds ij of the kagome lattice, and the DM interaction is assumed to be of the out-of-plane type.With a large DM interaction D, the kagome antiferromagnet devoleps a q = 0 type 120 • AFM order at low temperature in EuCu3.[40][41][42][60][61][62] In terms of the local basis for the AFM order, we rewrite the Hamiltonian as with where θ ij is an angle between two neighboring spins.The effective linear spin wave Hamiltonian is given as for which the Holstein-Primakoff representation for spin operators in the local basis was applied and the energy dispersion was obtained in Ref. [63].The Raman tensor in the XY configuration is given as In the local spin basis, we have the Raman tensor is given as In the spin-pair operator S i S j in Eq. ( 4), there are twomagnon contribution in terms of S x i S x j + cos(θ ij )(S y i S y j + S z i S z j ), and one-and three-magnon contributions in terms of sin(θ ij )(S z i S y j − S y i S z j ).For the q = 0 spin configuration, we find that τ xy R in Eq. 7 has the non-vanished one magnon contributions.For the √ 3 × √ 3 AFM state, τ xy R has no onemagnon contribution.The observed one-magnon peak in the E g channel in EuCu 3 provides evidence for the q = 0 spin ordering at low temperatures.In the linear spin-wave theory, we take S z in the local basis as a constant, S z i = S z = 1/2, and the XY Raman tensor is given as in terms of the local basis, directly measuring the one magnon excitation.For EuCu3, we have the estimation for the interaction parameters, J = 10 meV, D/J = 0.3, and the magnon peak position is ∆ sw = 1.1J = 88 cm −1 , very close to the measured value 72 cm −1 in our Raman measurement of the one-magnon peak.
FIG. 1. Schematical comparative Raman responses for the AFM and QSL states.With a large DM interaction D, the kagome antiferromagnet develops a 120 • non-collinear AFM ground state with the wave vector q = 0 below TN .[60][61][62] Increasing J/D, the fluctuation of the kagome system increases, driving the system into the QSL state.By increasing the temperature, the thermal fluctuation melts the magnetic order and turns the system into the classic paramagnetic state at high temperatures.By the first-principle calculations in Supplementary Sec-tion8, Cu3Zn and EuCu3 have the values of D/J as 0.05 and 0.3, and thus correspond to the QSL and AFM ground states, respectively.In the middle, the elementary excited states of AFM and QSL states are the magnon and spinon, respectively, resulting in different magnetic Raman spectra shown at the bottom.Here 1P and 2P denote the one-pair and two-pair spinon excitations, respectively.1M and 2M in magnetically ordered state denote the one-and two-magnon excitations, respectively.The 1M Raman peak in AFM measures the magnon [Methods Section] while the 1P Raman continuum in QSL probes the spinon excitations.[39] The shadow background of the 1M peak, marked as '1P', denotes the continuum above TN in EuCu3, mimicking the 1P continuum in the QSL state [Supplementary Section6].So the magnon excitation below TN emerges from the one-pair continuum and can be regarded as the bound state of the spinon-antispinon excitations.The transition between QSL and AFM can be thought to be driven by the spinon confinement.
IV. ANGLE-RESOLVED LIGHT POLARIZATION DEPENDENT RAMAN RESPONSE FOR Cu3Zn
Two typical polarization configurations were utilized to measure the angle-resolved polarized Raman spectra: i) a half-wave plate was put after the polarizer in the incident path to vary the angles between the polarization of incident laser and the analyzer with the fixed vertical polarization, which can be denoted as the X-only configuration; ii) a polarizer is allocated in the common path of the incident and scattered light to simultaneously vary their polarization directions, while the polarizations of incident laser and analyzer were parallel or perpendicular to each other.By rotating the fast axis of the half-wave plate with an angle of θ/2, the polarization of incident and/or scattered light is rotated by θ.
FIG. S6.Three polarization configurations in the angle dependent Raman response.In the XX (XY ) configuration, the incoming and outgoing light polarizations are parallel (perpendicular) and we rotate both of them simultaneously.In the X-only configuration, the outgoing light polarization is fixed and we rotate the incoming light polarization only.For Eu 3+ , we observe the A2g excitation of the 4f 6 configuration with the transition from 7 FJ=0 to 7 FJ=1.EuCu3@4 K EuCu3@10 K EuCu3@19 K EuCu3@23 K<EMAIL_ADDRESS>EuCu3@50 K EuCu3@100 K Cu3Zn@4 K FIG.S10.Magnetic Raman susceptibility in the XY configuration of EuCu3 above the Néel temperature.We present the XY magnetic Raman continuum in EuCu3 below 100 K. Above TN = 17 K, the Raman response has the substantial magnetic continuum below 50 K.
For a comparison, we also plot the XY magnetic Raman continuum in Cu3Zn at 4 K.The Raman shift frequency of Cu3Zn is divided by 1.9, the ratio of the super-exchange strength of two compounds.We can see that above TN , the profile of the Raman susceptibility in EuCu3 mimic that in Cu3Zn, suggesting the spinon contribution.There are less pronounced low-energy continuum excitations in EuCu3 than those in Cu3Zn, probably due to the large DM interaction which suppresses the low-energy quantum fluctuations.The maximum of the continuum excitations above TN in EuCu3 has the same energy scale as the magnon peak below TN , which suggests that the magnon peak can be taken as the bound state of spinon-antispinon pair.
FIG. 2 .
FIG. 2. Temperature dependence of the Raman susceptibilities in Cu3Zn.(a) The A1g Raman susceptibility χ A 1g = χ XX − χ XY .The solid lines are a guide to the eye.(b) Temperature dependence of the static Raman susceptibility in A1g channel χ A 1g (T ) = 2 π 400 cm −1 10 χ A 1g (ω) ω dω.The solid line is a thermally activated function.(c) Color map of the temperature dependence of the magnetic Raman continuum χ A 1g (ω, T ).(d) The E2g Raman response function χ E 2g = χ XY .The solid lines are a guide to the eye.We schematically decompose the E2g magnetic Raman continuum into two components of spin excitations for one and two spin-antispinon pair excitations, respectively.Here 1P and 2P represent one-and two-pair, respectively.(e) Temperature dependence of the static Raman susceptibility in E2g channel χ E 2g = 2 π 780 cm −1 10 χ E 2g (ω) ω dω.The solid line is a guide to the eye.(f) Color map of the temperature dependence of the magnetic Raman continuum χ E 2g (ω, T ).
2 FIG. 3 .
FIG. 3. Power-law behavior for the E2g Raman continua at low frequency in Cu3Zn.(a) and (b) Power-law fitting of χ E 2g (ω) ∝ ω α at low and high temperatures, respectively, in Cu3Zn.(c) Temperature dependent exponent α for the power-law fittings in cu3zn.
FIG. 4 .FIG. S2 .
FIG. 4. temperature dependence of the eg raman susceptibilities in eucu3.(a) the eg raman susceptibility χ eg = χ xy .the solid lines are a guide to the eye.a sharp magnon peak appears in the eg magnetic raman continuum below the magnetic transition temperature tn = 17 k.(b) temperature dependence of the static raman susceptibility in eg channel χ eg = 2 π 780 cm −1 10 χ eg (ω) ω dω. the solid line is a guide to the eye.(c) Color map of the temperature dependence of the magnetic Raman continuum χ Eg (ω, T ).A sharp magnon peak is observed below TN .
FIG. S3 .
FIG. S3.Raman spectra in Cu3Zn at different temperatures.(a) Unpolarized Raman spectra in Cu3Zn.(b) Raman spectra in the XX configuration in Cu3Zn which contains the A1g and E2g channel.(c) Raman spectra in the XY configuration which contains the E2g channel in Cu3Zn.
FIG. S4 .
FIG. S4.Raman spectral evolution from Cu4 to Cu3Zn (a) Unpolarized Raman spectra for Cu4 and Cu3Zn at selected temperatures.Comparison for phonon modes between 40 cm −1 and 90 cm −1 in (b), and between 100 cm −1 and 250 cm −1 in (c) for Cu4 and Cu3Zn.The Cu4 spectra in (a), (b) and (c) have been offset vertically for clarity.The phonon evolution from Cu4 to Cu3Zn displays the difference by substituting Cu4 interlayer Cu 2+ site with Zn 2+ in Cu3Zn.The parent Barlowite Cu4 transforms to orthorhombic Pnma below T ≈ 265 K, characterized by changes in the relative occupancies of the interlayer Cu 2+ site.Between 300 cm −1 and 600 cm −1 , there are several phonon peaks associated with O 2− vibrations in Cu4 and Cu3Zn.Cu3Zn displays the Br − in-plane relative mode (E2g) at 75 cm −1 , and has no active Raman mode related to the kagome Cu 2+ vibrations since Cu 2+ is the inversion center.The Br − phonon mode changes into two peaks in Cu4 due to the superlattice folding in the orthorhombic Pnma phase at low temperature.An additional Br − peak at 85 cm −1 appears in Cu4, related to the Br vibrations along the c-axis.The kagome layers in Cu4 are distorted, signaled by a new phonon mode for the kagome Cu 2+ vibration at 62 cm −1 .Cu3Zn displays sharp E2g modes at 125 cm −1 and 173 cm −1 correspond to in-plane relative movements for Zn 2+ and F − , respectively.The corresponding modes (interlayer Cu 2+ and F − ) in Cu4 are broad at 290 K due to the randomly distributed interlayer Cu 2+ and split into two peaks at 200 K.
FIG. S7 .
FIG. S7.Rotation symmetry of Raman dynamics for lattice vibrations and magnetic excitations in Cu3Zn.(a) We monitor three selected modes (both continua and phonon peaks).(b) Angle dependence of the integrated Raman susceptibility χR = 2 π 60 cm −1 10 χ (ω) ωdω.In X-only configuration, the continua at 290 K follows the cos 2 (θ) for the A1g channel, while at other configurations, the continua remain constant.(c) Angle dependence of the Br E2g phonon (75 cm −1 ) scattering intensity.The lines are constant functions.(d) Angle dependence of the O 2− A1g phonon (429 cm −1 ) scattering intensity.The Raman intensity of O 2− A1g mode exhibits a cos 2 (θ) behavior in the X-only configuration at both room temperature and low temperature, and keeps constant in XX and XY configurations.
FIG. S8.Raman spectra in EuCu3 at different temperatures.(a) Unpolarized Raman spectra in EuCu3.(b) Raman spectra in the XX configuration in EuCu3 which has the Ag and Eg channel.(c) Raman spectra in the XY configuration which contains the Eg and A2g channel.For Eu 3+ , we observe the A2g excitation of the 4f 6 configuration with the transition from 7 FJ=0 to 7 FJ=1.
FIG. S9 .
FIG. S9.Rotation symmetry of Raman dynamics for lattice vibrations and magnetic excitations in EuCu3.We monitor the selected magnetic continuum at low frequency and the O 2− Eg mode in (a).(b)Angle dependence of the integrated Raman continuum from 9-80 cm −1 .The continua at 290 K follows cos 2 (θ) for the A1g channel, while others remain constant.(c) Angle dependence of the O 2− Eg phonon (487 cm −1 ) scattering intensity.Its Raman intensities are independent of θ.
6 FIG. S12 .
FIG.S12.SHG in Cu3Zn at 300 K with different laser powers.(a) and (b) represent the successive SHG measurements in the same point of sample taken every 5 seconds with excitation powers at 25 mW and 32 mW, respectively.There are no SHG signals at the excitation power of 25 mW, whereas strong SHG signals appear at the excitation power above 32 mW after a 10 second exposure.By comparison, damage or degradation in crystal structure under high power excitation induces a detectable SHG signal, implying that inversion symmetry presents in undamaged Cu3Zn at room temperature.The lines have been offset vertically for clarity.
TABLE S1 .
Mode assignment for Cu3Zn.Cu3Zn crystallizes the space group P 63/mmc (No. 194) and has Raman active A1g, E1g, and E2g modes according to the point group representation of D 6h (6/mmm).E1g is not visible when the light polarization lies in the kagome ab plane, and we have Raman active phonon modes ΓRaman = 4A1g + 9E2g.
Raman spectra in EuCu3 at different temperatures.
SHG in Cu3Zn at 26 K with different laser powers.(a) SHG measurements in the same spot of sample taken every 5 seconds (from #1 to #6).At 23 mW, SHG signals in Cu3Zn sample are absent, implying that inversion symmetry remains preserved.(b) A series of SHG measurements under the excitation power of 32 mW in the same point of the sample taken every 5 seconds (from #1 to #12).A remarkable SHG signal at 400 nm is detectable after a 10 second exposure, which dramatically enhances as the time increases.Due to the damage or degradation of Cu3Zn under high power excitation, the inversion symmetry breaking induces a strong SHG signals in sample.By comparison, we conclude that undamaged Cu3Zn single crystal presents spatial inversion symmetry at low temperature.The lines have been offset vertically clarity. | 7,980 | 2020-10-15T00:00:00.000 | [
"Physics"
] |
A Review of the Current Research Trends in the Application of Medicinal Plants as a Source for Novel Therapeutic Agents Against Acanthamoeba Infections
Acanthamoeba keratitis (AK) is a sight-threating infection of the cornea that mostly affects contact lens wearers. Until now, AK treatment remains very difficult due to the existence of a highly resistant cyst stage in the life cycle of Acanthamoeba which is extremely resistant to most of the available anti-amoebic compounds. Moreover, current treatment of AK is usually based in the combination of various therapeutic agents such as polyhexamethylene biguanide or chlorhexidine and propamidine isethionate. However, all the mentioned compounds have also showed toxic side effects on human keratocytes and presented poor cysticidal effect at the concentrations currently used in the established AK treatments. Nowadays, the elucidation of novel compounds with antimicrobial and anticancer properties from plant and herbs with medicinal properties have encouraged researchers to evaluate plants as a source of new molecules with anti-trophozoite and cysticidal effects. Thus, in recent years, many natural products have been reported to present potent anti-Acanthamoeba properties with good selectivity and minimal toxic effects. Therefore, the chemical drugs currently used for AK treatment, their drawbacks as well as the current research in medicinal plants as a source of potent anti-Acanthamoeba compounds are described in this review.
Introduction
Acanthamoeba spp. are free-living amoebae with the potential of being opportunistic pathogens for humans and animals. There are two stages in their life cycle: an active trophozoite form and the double-walled highly resistant cyst. Trophozoites inhabit a variety of bacteria-contained niches such as fresh water bodies, hot springs, soil, drinking water, bottled water, dental treatment units, dialysis units, fluids of contact lenses and infected tissue cultures among others (Table 1.) (1). As mentioned before, the Cyst form of Acanthamoeba is highly resistant to a vast range of temperature, pH, and anti-microbial agents (2). Furthermore, this amoebic genus is the causative agent of two severe diseases in humans: Acanthamoeba keratitis which is serious corneal infection that can develop into blindness and usually Table 1. Characteristics of Acanthamoeba spp. as an agents of amoebic encephalitis and amoebic keratitis (Visvesvara et al, 2007). reported in contact lens wearers, and the fatal Granulomatous Amoebic Encephalitis (GAE) which mostly affects immunocompromised individuals (3,4). Acanthamoeba also may cause other diseases such as cutaneous ulcers, abscesses, arthritis, and/or rhinosinusitis (5).
GAE is a relatively rare disease. Clinical characteristics include headache, fever, nausea, vomiting, behavioral changes, stiff neck, lethargy and increased intracranial pressure. In later stages of the infection also symptoms such as loss of consciousness, seizures, coma, and death have been reported. Approximately more than 150 cases have been reported worldwide (6, 7).
Acanthamoeba keratitis (AK) usually manifests in the early stages of infection with inflammation, eye redness, epithelial defects and photophobia, edema and intense pain. Moreover, if it not diagnosed and treated on time, it may even end in blindness (8). Previous studies in the early to mid-1980 reported an exponential increase in the number of individuals infected with this amoeba (9). This is mainly due to an increased number of soft contact lens wearers and improper use and maintenance of the lenses. Furthermore, it is worthy to mention that 85% of AK cases are detected in soft contact lens wearers (10, 11). In a more recent study in 2007, AK reported case were estimated to be higher than 3000 (6). Therefore, it is clear that the number of AK reported cases continues to rise worldwide.
Methodology based on search strategy
A systematic review based on database sources such as Medline, PubMed, Scopus and Google scholar was conducted in this study. No restrictions were placed on study date, design or language of publication including all valuable and relevant information containing the keywords Acanthamoeba and therapy. We also referred to the databases of Medline, PubMed, Scopus and Google scholar and the keywords Acanthamoeba and Amoebic Keratitis, and words including treatment, medicinal plants and herbal medicine. Furthermore, information in books associated to Acanthamoeba and treatment strategy was also included as well as abstracts and full articles that were written in English and showed to be relevant to the topic described above. Only reports and studies with minimal relevance were excluded from this study.
Current therapy of Acanthamoeba infection Chemical treatment and their drawback
Effective treatment of CNS-related infections due to Acanthamoeba has been recorded as a combined treatment, normally started at an early stage of the infection. However, in the later stages of the infection, the majority of therapeutic agents were reported not to be effective (12). Overall, combination chemotherapies were found more successful than single-drug therapies, Therefore, usual therapeutic agents reported so far include a combination of drugs such as ketoconazole, fluconazole, itraconazole, pentamidine isethionate, azithromycin, sulfadiazine, amphotericin B, rifampicin, voriconazole and miltefosine (12). Because of ineffective therapy, GAE is often deadly, thus less than 10 GAE patients have recovered with the application of a combination ofthe drugs mentioned above (13).
Regarding, Acanthamoeba keratitis (AK) treatments reported so far, the combination of chemotherapeutic agents such as polyhexamethylene biguanide, which destroys cell membranes, and propamidine isethionate, which inhibits DNA synthesis (14, 15) is often used. Moreover, chlorhexidine, alone or in combination with other drugs, has also been applied for AK treatment (16,17). Unfortunately, propamidine is poorly cysticidal and even resistance to this compound has been reported in Acanthamoeba strains (18,19).
In the case of a persistent infection with inflammation, corticosteroids may be used. However, their use is controversial because they cause suppression of the immunological response of the patient. Moreover, corticosteroids produce inhibition of the processes of encystation and excystation of Acanthamoeba, which could be a cause for the appearance of resistance problems (1). Recent studies have highlighted an association of topical corticosteroids and a diagnostic delay of AK (1,15,20).
It is also important to mention that the described combination treatment are normally only active against the trophozoite stage and therefore, Acanthamoeba cysts could remain viable and lead to serious and frequent recurrences of keratitis. Moreover, resistance of the double walled cysts is mainly due to cellulose molecules presented in the inner layer of the cysts. In addition, the majority of drugs mentioned above are highly toxic to human keratocytes. Furthermore, the required treatment duration for the listed drugs is very long and may last up to six months (21,22).
Overall, the reported and worrying lack of effective chemotherapeutic agents, have urged researchers in this field to search for novel compounds as a high priority for the treatment of Acanthamoeba infections. Thus, there is a raising trend to shift resources from chemical drugs to natural origin compounds (mainly isolated from plants and herbs) (23).
Animal-based natural products
Magainins, are defense peptides with antimicrobial activity that have been described to be secreted by the African clawed frog (Xenopuslaevi). These compounds cover the skin of the animal and have been reported to create an exclusive membrane-targeted mechanism of action against pathogenic agents. The reported mechanism of action involves a change in the ion conductance of membrane barriers. Magainins have been reported to be active against grampositive and gram-negative bacteria and present anti-viral, anti-fungal and anti-parasitic effects. In the case of Acanthamoeba, two of the known magainins so far, MSI-103 and MSI-94 have been reported to induce amoebistatic and amoebicidal effects at concentrations from 20 to 40 µg/mL (24). Further evaluation of these compounds as anti-Acanthamoeba agents should be carried out against the cyst stage and also by developing invivo studies.
Plant-based treatments
In recent years, many researchers working on novel therapeutic options against Acanthamoeba infections have focused their studies on the application of medicinal plants as a source of novel molecules with higher anti-amoebic activity and lower toxicity representing an alternative method to currently used synthetic molecules. Many plant extracts have been reported in the literature as powerful inhibitors of microbial agents including bacteria, parasites and fungi. In the case of Acanthamoeba, various medicinal plants and herbal extracts have been evaluated as sources of amoebicidal agents and even some of the evaluated plants have been proven to be useful for therapeutic options even in-vivo. Some of the test plants and herbs until now include: Thymus (25) (39), Propolis (40) and Buddleia cordata (41).
In Table 2. a list of several medicinal plants and herbs with reported amoebicidal and cysticidal effect is included and are described next:
The effective activity was observed at 32 mg/ mL. However, it is important to mention that this medicinal plant presented no toxicity to human keratocytes even at the highest concentration tested (32 mg/mL). A bio-guided fractionation analysis of Thymus sipyleus could help to find the active compounds within this plant against Acanthamoeba in the near future (25).
Allium sativum (garlic)
The anti-Acanthamoeba effects of the methanol extracts of Allium sativum (garlic) have been tested against Acanthamoeba trophozoites and cysts in-vitro. Interestingly, an amoebicidal and cysticidal activity was described for this plant species being dose and time dependent. Moreover, the tested extract was not toxic even at 3.9 mg/mL. Therefore, Allium sativum should be further studied in order to elucidate the novel anti-amoebic compounds presented in this plant (33).
Ziziphus vulgaris and Trigonella foenum graecum (Fenugreek)
Recent research carried out in our laboratories have shown that that the aqueous extracts of Ziziphus vulgaris and Trigonella foenum graecum are active against both the trophozoite and cyst stages of Acanthamoeba.In the case of Trigonella foenum graecuma concentration of 400 mg/mLwas able to eliminate trophozoites and cysts when incubated at a concentration of 750 mg/mL, after 24 h in both cases. In comparison, Ziziphus vulgaris aqueous extracts were able to eliminate Acanthamoeba trophozoites at a concentration of 200 mg/mL and cysts at 500 mg/mL, after 24 h of incubation (unpublished data). It should be mentioned that both plants did not shown toxicity when tested on cell culture at the highest evaluated concentrations.
Arachis hypogaea L., Curcuma longa L. and Pancratium maritimum L
The cysticidal activity of Arachis hypogaea L., Curcuma longa L. and Pancratium maritimum L. was evaluated against Acanthamoeba castellani cysts in vitro. The obtained results revealed that the ethanol extract of A. hypogaea L had a cysticidal effect with a minimal inhibitory concentration (MIC) of 100 mg/mL in all the tested hours (24, 48, 72 h). Curcuma longa extracts showed MIC of 1 g/mL at 48 h and 100 mg/mL after 72 h. Pancratium maritimum L. also showed a MIC of 200 mg/mL after 72 h (34).
Origanum syriacum and Origanum laevigatum
In vitro evaluation of the amoebicidal activity of methanolic extracts of Origanum syriacum and Origanum laevigatum against Acanthamoeba castellanii, have shown that concentrations of 32 mg/mL of Origanum syriacum extracts, were able to eliminate trophozoites after 3 h. Moreover, incubation of cysts with extracts at the same concentration, revealed a cysticidal activity after 24 h. In the case of O. laevigatum, anti-trophozoite activity was observed after 72 h of incubation with extracts at a concentration of 16 mg/mL (29).
Peucedanum caucasicum, P. palimbioides, P. chryseum and P. longibracteolatum
Amoebicidal activity of the methanolic extracts of Peucedanum caucasicum, P. palimbioides, P. chryseum and P. longibracteolatum has been examined in-vitro. The obtained results in this study determined that P. longibracteolatum extracts presented the strongest amoebicidal effect against Acanthamoeba. Thus, elimination of Acanthamoeba trophozoites and cysts was observed between 24 and 72 h of incubation with extracts at a concentration of 32 mg/mL (35).
Salvia staminea and Salvia caespitosa
Amoebicidal activity of Salvia species has been evaluated against Acanthamoeba castellani in-vitro. The reported results revealed that S. staminea presented anti-Acanthamoeba effect. Moreover, the methanolic extracts of S. staminea were shown not be toxic to human cells even at concentrations of 16mg/mL (36). M. officinallis has been reported to present moderate amoebicidal and cysticidal effects but S. cuneifolia presented the highest effect against trophozoites and cysts of Acanthamoeba (26). Moreover, in another study the effect of the polar and nonpolar extracts of various plants from Southeast Asia was evaluated for their in-vitro amoebicidal activity against different species of Acanthamoeba including A. culbertsoni, A. castellanii, and A. polyphaga. The obtained results revealed that of the 200 tested plants, three species/genera (Ipomoea sp., Kaempferia galanga, and Cananga odorata) were active against Acanthamoeba. Furthermore, Gastrochilus panduratum extract had a lytic effect when evaluated against A. polyphaga and amoebistatic effects against A. castellanii and A. culbertsoni species (27).
Satureja cuneifolia and
An in-vitro assay developed to evaluate the amoebicidal activity of the chloroformic fraction of Trigonella foenum graecum has also reported this fraction to present anti-Acanthamoebae effects (28). In another study, four fractions of the methanolic extract of Pouzolziaindica were reported to present cysticidal effects (30). The amoebicidal activity of different parts of plants such as flowers, roots and leaves of Rubus chamaemorus, Pueraria lobata, Solidago virgaurea and Solidago graminifolia extracts were examined in-vitro. The tested extracts presented in-vitro and in-vivo against Acanthamoeba. Moreover, these tested extracts presented not toxic effect for the animals used in the in-vivo assay (31).
The ethyl acetate and methanol extracts of Helianthemum lippii (L.) have been reported to present activity against Acanthamoeba castellanii cysts being the ethyl acetate extract, the most active extract against Acanthamoeba (32). Pterocaulon polystachyum (hexane fraction) extracts have been reported to eliminate 66%-70% of Acanthamoeba trophozoites after 48-72 h of incubation (37). In the same study, I. oculus showed the strongest amoebicidal effect when compared to Pastinaca armenea (38).
Olive trees have also been reported to be able to inhibit the trophozoite stage of Acanthamoeba castellanii Neff. In this study, the activity of Olive Leaf Extracts (OLE) showed Inhibitory Concentrations of the 50% of the population (IC 50 ) ranging from 8.234 μg/mL in the case of the alcoholic mixture of the Dhokkar variety, to 33.661 ± 1.398 μg/mL for the methanolic extract of the toffehi variety (39).
Propolis extracts have also been tested and reported to be cysticidal after incubation of Acanthamoeba cysts with concentrations higher than 15.62 mg/mL at 48 h or longer. Moreover, ethanolic extracts of Propolis have been reported to be active against Acanthamoeba trophozoites and cysts (40).
An in-vitro assay to evaluate the amoebicidal activity of the aqueous and methanolic extracts of Buddleia cordata against 29 strains of freeliving amoebae, reported that the aqueous extract was active against 14 amoebic strains whereas the methanolic one was active against 16 strains. Nevertheless, the observed effects induced only amoebistatic effects against the tested strains. Moreover, no cysticidal activity was observed in any extract after 24 h of incubation and at concentrations up to 32 mg/ml(41).
Conclusion
To date, the beneficial effect of herbal medicine in many conditions such as primary dysmenorrhea, patients with diabetes and many more are studied (42).
Acanthamoeba keratitis is a medical challenge for most ophthalmologists. This severe corneal disease is usually treated with combination drugs such as polyhexamethylene biguanide or chlorhexidine and propamidine isethionate (15). Current therapeutic options present toxic side effects to human keratocytes and present null or low cysticidal effect (18).
In summary, many natural products have been reported to present high anti-Acanthamoeba activities in the recent years. Therefore, plants extracts should be considered as a highly important and powerful source for the search of novel anti-Acanthamoeba compounds in the near future.
Acknowledgments
Dr. Maryam Niyyati was supported by the Iran National elite foundation for young associated professors.
Jacob Lorenzo-Morales was supported by the Ramón y CajalSubprogramme from the Spanish Ministry of Economy and Competitivity RYC-2011-08863 and also by the grants RICET (project no. RD12/0018/0012 of the programme of RedesTemáticas | 3,588.2 | 2016-11-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Genome‐wide association studies of multiple sclerosis
Abstract Large‐scale genetic studies of multiple sclerosis have identified over 230 risk effects across the human genome, making it a prototypical common disease with complex genetic architecture. Here, after a brief historical background on the discovery and definition of the disease, we summarise the last fifteen years of genetic discoveries and map out the challenges that remain to translate these findings into an aetiological framework and actionable clinical understanding.
INTRODUCTION
Multiple sclerosis [MS, (MIM 126200)] is a neurological disorder of the central nervous system (CNS), resulting from an autoimmune attack on CNS white matter. The disease course often results in progressively decreasing motor function and is the most frequent cause of neurological disability in young adults. Over two million people worldwide suffer from MS, with over 75% of these being women. After the description of the disease by Charcot in 1868, MS was gradually recognised as a distinct, multifaceted clinical entity. 1 The discovery of contrast agents for microscopy in the early 20th century catalysed the description of MS lesion pathology as a result of inflammation and myelin damage around blood vessels in the brain. 2 In this golden age of bacteriology, it was assumed that the causes of MS were extrinsic, and the field searched for infectious causes to no avail.
The eventual discovery that immune cells caused myelin destruction in a primate model resembling MS 3 finally put the field on the right track in the 1930s, and the discovery of an immunoglobulin signature in MS patient cerebrospinal fluid 4still in use as a diagnostic tool todayfirmly cemented the idea that MS is an autoimmune disease. Meanwhile, as medical practice became more advanced after the Second World War, patients were increasingly seen by neurologists specialising in MS, who began to compile longterm cohorts of patients. 5 It rapidly became obvious that the disease is geographically segregated 6 and aggregates in certain families and that siblings and offspring of people with MS are far more likely to develop the disease themselves. 7
EARLY GENETIC STUDIES
This realisation that the disease was genetic prompted the search for pathogenic genes, but it took diligent work for two decades to finally discover the first genetic risk factors for MS: three serological alleles of the human leucocyte antigens (HLA), encoded in the major histocompatibility complex [8][9][10][11] (MHC, chromosome 6p21). As the molecular biology of the immune system was unveiled, it was natural to ask whether these were also involved in MS pathogenesis. Candidate gene studies in cohorts of tens or hundreds of individuals at the loci encoding T-cell receptor alpha 12,13 and beta 14,15 loci, the immunoglobulin heavy-chain genes 16,17 and the gene for myelin basic protein, 18,19 among others, produced inconsistent findings. 20,21 As became obvious in retrospect, such studies are underpowered to detect risk alleles for common complex disease, suffer from population stratification and other artefacts and often assess genes that have broad relevance to the immune system but do not drive disease risk per se. 22 The development of genetic maps covering much of the genome led to linkage analyses in extended MS affected families from a number of countries, primarily of European ancestry. [23][24][25][26][27][28][29][30][31][32] These validated the HLA association but showed no significant linkage to loci outside the MHC.
Recognising that the small sample size of these studies limited power to detect non-MHC linkages, the genetic analysis of multiple sclerosis in Europeans (GAMES) consortium was created to perform a genome-wide association screen across multiple populations using microsatellites and pooled DNA. 23 Although extraordinary as a collaboration for the time, this effort also failed to find non-MHC loci. The linkage era culminated in a further collaborative effort by The International Multiple Sclerosis Genetics Consortium (IMSGC) also formed to pool resources and samples to conduct well-powered studies. The IMSGC typed 4506 single nucleotide polymorphisms (SNPs) in 730 multiplex families and again found no significant linkage peaks outside the MHC, although a handful of suggestive signals were present. 24 Although largely negative, these studies strongly supported the notion that MS is not caused by a small amount of mutations of large effect, but is likely to be due to many small risk effects spread across the genome.
GENOME-WIDE ASSOCIATION STUDIES
The completion of the human genome sequencing project led to the development of complete catalogues of common genetic variation across the genome, and concomitant technologies to assay these variants in a cost-effective and high-throughput manner. 25,26 This technological development enabled the profiling of thousands of samples in a single study and prompted a shift away from family studies, where samples are necessarily limited and ascertainment challenging, to population-based association studies comparing unrelated cases and controls. 27 These genomewide association studies (GWAS) compare allele frequency at each variant position of the genome between cases and controls, with significant differences implying an association to disease. The often-inconsistent results of candidate gene studies and the biases that drove them led to the adoption of robust statistical thresholds for significance in GWAS and a standard of requiring replication in independent samples. 22 The currently acceptable standard is a significance level of P < 5 9 10 À8 , which is equivalent to P < 0.05 after Bonferroni correction for the number of independent tests in the genome given linkage disequilibrium between common variants. 28 These studies have demonstrated that the common disease-common variant hypothesis of human diseases 29 is broadly true, where disease risk is driven by many common variants, each of which explains a small fraction of the risk in a population.
In 2007, the first GWAS in MS looked at 1540 parent-affected offspring trios and identified two loci outside the MHC, encoding the interleukin-2 receptor (IL-2RA) and the interleukin-7 receptor (IL-7RA), respectively. 30 Several other loci showed some evidence of association, but fell short of strict genome-wide significance thresholds; these have been subsequently validated in larger studies. The three significant findings were simultaneously replicated in independent studies from the United Kingdom, the United States and the Nordic countries. 31,32 This opened the floodgates, with several successive studies GWAS and meta-analysis followed in rapid succession, so that by 2011, common variants in 26 genomic loci had been associated with MS risk and independently replicated, but clearly only explained a fraction of MS risk attributable to genetic factors. [33][34][35][36][37][38][39][40][41][42] These studies collectively showed that non-MHC MS risk alleles have modest effects on disease (odds ratios < 1.2) and that even larger sample sizes (over 10 000 cases and controls) would be needed to identify more loci. 22 A further expansion of the IMSGC resulted in a collaborative GWAS of 9772 cases and 17 376 controls, again of European descent, in 2011. 43 This study replicated 23 of 26 previously identified associations and identified 29 novel risk loci. The number of significant associations made robust post hoc pathway analyses possible, and it became evident that these loci are strongly enriched for genes acting in T-cell activation and proliferation pathways. In addition, refinement of the associations in the HLA region showed that just four variants are sufficient to account for the risk previously attributed to extended haplotype alleles spanning hundreds of kilobases (kb) and many tens of genes. A further study, this time on a targeted array (the ImmunoChip 44 ) in 29 300 MS cases and 50 794 unrelated healthy individuals, identified 48 new susceptibility variants, bringing the total number of MS risk variants to 110 at 103 discrete loci outside the MHC. 45 Most recently, the IMSGC has completed an even larger GWAS including over 115 000 cases and controls. This latest report brings the total number of MS risk associations to 233, including 200 autosomal variants outside the MHC, one on the X chromosome and 32 independent effects in the broader MHC locus, covering both classical and nonclassical gene regions. 46 Again, careful pathway, transcriptomic and epigenetic enrichment analyses suggest T-cell biology is a major feature of the disease, but also highlight the involvement of many other components of both adaptive and innate immunity in pathogenesis. All these effects combined explain 19.2% of the total heritability for MS. The 32 MHC effects accounted for 4% of the overall heritability, with the bulk of the remaining signal resident in the other regions of the genome associated with MS risk. However, a small portionapproximately 2% of the overall heritabilityresides in regions that either did not show suggestive association in the initial GWAS or that failed to replicate in independent samples, suggesting that there remain additional loci to be found (Table 1 summarises all these findings).
THE ROLE OF THE MHC
The first MS risk associations discovered were three serological alleles of the HLA. [8][9][10][11] Since then, a great deal of effort has been expended to better characterise these associations both genetically and functionally, although we still do not understand how changes to antigen display increase risk for MS or any other autoimmune disease. 47 One of the main challenges to interrogating the MHC is the complexity of the region: multiple alleles in the region are under both positive and balancing natural selection in different populations, leading to complex longrange haplotype structures, and many of the genes in the classical regions also show highsequence homology. This makes genotyping and sequencing assays technically challenging, so genotyping has remained a low-throughput activity, in contrast to the rest of the genome, which is amenable to more scalable technologies. Over the last several years, the compilation of large reference populations with both serological and genotyping data on MHC variation has made imputation of classical alleles possible from standard SNP array data to single amino acid resolution. 48,49 We thus now have tools to interrogate this region at scale and identify the specific functional HLA alleles driving risk. Beyond single-marker analyses, in 2015 the IMSGC described a comprehensive dissection of allelic association in the broader MHC, based on over 48 000 samples with dense genotyping information through the region. 50 This resulted not only in the description of multiple class II HLA-DRB1 and HLA-DRQ1 classical alleles imputed from SNP genotypes, but also epistatic interactions between HLA-DQA1*01:01 and HLA-DRB1*15:01 and between HLA-DQB1*03:01 and HLA-DQB1*03:02. These results raise certain functional questions, for example why the protective effect of HLA-DQA1*01:01 only manifests in the presence of the HLA-DRB1*15:01 risk allele. 50,51 Several variants outside the classical regions of the MHC (both class I and class II) were also shown to be independently associated, suggesting biological functions beyond antigen display underlie the MHC risk effects. However, the nature of the interaction may be more complicated than expected, with multiple different amino acid-level alleles demonstrating consistent interaction with HLA-DRB1*15:01, in addition to the HLA-DQA1*01:01 allele. This suggests that the landscape of antigen presentation is very dynamic in the population and risk-relevant phenotypes may be more complex than changes to diversity or binding strength of individual epitopes. These questions, however, will require the development of more stringent, high-throughput experimental tools to interrogate specific HLA alleles, which we still lack.
IDENTIFYING CAUSAL VARIANTS AND PATHOGENIC GENES
As in other common, complex diseases, identifying MS risk genes has been complicated by two challenges. Firstly, identifying the causal variants in GWAS loci through fine mapping remains difficult: linkage disequilibrium means that many variants will show evidence of association to disease, but only one is likely to be the causal one. As a result of differences in minor allele frequency and population sampling, this is not necessarily the most associated variant. 22 Finemapping approaches, therefore, aim to assign posterior probabilities of causality for each variant based on some criteria. One such approach is to assess the posterior probability of causality using genotype and minor allele frequency information and then select the smallest group of variants in each locus that are likely to include the causal one at some threshold. 52 When applied to 6356 MS cases and 9617 controls from the United Kingdom, this approach only meaningfully resolved a small subset of associations, with 8/68 loci we analysed resolving to fewer than five candidate variants, and from these, we have been able to identify a relevant candidate gene in three. 45 This is likely a limitation in power, and larger sample sizes may help the resolution of these approaches. Alternative fine-mapping strategies 53, 54 have not yet been applied to MS data, but in other instances have performed well and are likely to prove useful in MS locus dissection.
The second challenge is that the majority of MS risk variants appear to localise to gene regulatory regions, rather than coding sequence, 55 and specifically to enhancer elements active in stimulated immune cell subsets. 56 MS GWAS loci are also enriched for expression quantitative trait loci (eQTL) in multiple tissues, 46,57 supporting the idea that much risk is due to changes to gene regulation. These analyses, however, aggregate information across the entire genome and do not identify individual regulatory elements relevant to disease, which remains an open question in the field. The observations of enrichment in regulatory regions engender a further conceptual challenge, as we have lacked tools to effectively predict gene targets of such regulatory elements; this is further complicated by the fact that these elements often exert their effects over considerable distances, so simple proximity-based assignment is usually incorrect. 58 Thus, even if fine mapping is successful in a locus, there is every chance that the relevant gene cannot be readily identified.
These discoveries have spurred efforts to integrate GWAS information with other functional genomics data to identify relevant genes. Two distinct approaches have emerged, with overlapping goals: the first is to identify genes with an eQTL driven by an MS risk variant in a locus and the second is to identify specific regulatory elements driving disease risk, and through these, the genes were affected, which must by definition be pathogenic. In attempts to overlap GWAS and eQTL data, the key issue is not just to identify eQTLs in a GWAS locus, but to identify those that appear to be driven by the same underlying genetic variant driving disease risk. 59 This has proven a difficult challenge, as a result of linkage disequilibrium 60 . Practically, because eQTL are very common, and many variants show association to disease in a locus, it is likely that at least some variants associated with disease will also have eQTL evidence for a nearby gene. 61 Several methods have been proposed to address this colocalisation issue, 59,62 each of which aims to compare GWAS and eQTL data to identify pleiotropic effects between them. Recently, we developed a joint likelihood approach to this problem and used it to compare MS risk associations from the IMSGC ImmunoChip study to eQTL in CD4 + T cells, CD14 + monocytes and lymphoblastoid cell lines. 63 We found that, of 59 densely genotyped loci showing genome-wide significance to MS risk, 56 also had an eQTL to at least one gene within 100 kb of the most associated MS variant, with most of these harbouring eQTLs to multiple genes. However, in only 14/56 loci could we find evidence that an eQTL and the MS risk effect were driven by the same underlying signal, with the remainder showing strong evidence that the genetic effects are distinct for eQTL and disease risk. This suggests that many spurious inferences will be made by simply searching for eQTL in a GWAS locus and assuming that these are causally related. For 11/14 loci, we found matches in CD4 + T cells, confirming the central role played by these cells in MS pathogenesis. These genes are now strong candidates for disease causality, and further work will elucidate their role in pathogenesis.
The second conceptual approach is to identify the regulatory regions driving MS risk and through these identify the relevant genes. 56 The promise of this approach is that not only can we identify pathogenic genes, but the specific mechanisms of risk. We recently described a statistical framework to identify regions of accessible chromatin driving MS risk, 64 using the publicly available data generated by the NIH Roadmap Epigenome Mapping Consortium 65 (REMC). These are genomic regions of 150-400 base pairs where chromatin has been relaxed in some cell types in order to allow DNA-binding protein interaction and is thus sensitive to cleavage by DNase I. These DNase I hypersensitive sites (DHS) usually contain transcription factor binding sites and overlap either promoter or enhancer elements. 55 We were able to detect significant enrichment of risk alleles on open chromatin elements in 25/48 MS risk loci and that these were due to 177 DHS, of a total of > 500 000 DHS sites present in all 48 loci. We then correlated the pattern of accessibility of each of these 177 DHS sites to gene expression across REMC 56 tissues and identified 49 genes in 17/25 loci that show clear evidence of regulation by risk-burdened DHS sites. As expected, the DHS are preferentially accessible in immune cell subsets, particularly T cells and their precursors, and the 49 genes are strongly expressed in these tissues. These genes thus form strong hypotheses about specific MS risk mechanisms in particular cells and physiological contexts.
PATHOGENIC CELL TYPES AND TISSUES
There has been long-standing uncertainty about which specific immune cell subsets drive pathology, and what, if any, the role of the CNS is in generating risk. The vast majority of GWAS loci encode genes obviously active in the immune system, 45,63 and particularly in the lymphocyte lineage, 64 placing beyond a doubt the nature of the disease as autoimmune. However, although the hallmark of MS pathology is the presence of oligoclonal bands in CSF, making antibodysecreting B cells the obvious suspect, the GWAS enrichment studies all point to risk being mediated by gene regulation in CD4 + T cells 43 a view reinforced by the success of a4 integrin blockade by natalizumab, which prevents T cells from crossing the blood-brain barrier and forming new lesions. However, the off-label use of rituximab and recent approval of ocrelizumab in MS, both of which target CD20, indicate the B-cell blockade is also effective. 66,67 Whether this is a symptom control measure rather than an attack on the root cause of disease remains to be determined.
In contrast, there has been little evidence for causal roles for CNS-resident cells from GWAS analyses. This is in common with most other common complex autoimmune and inflammatory diseases, where gene regulation in target tissues appears to not be a major feature of GWAS loci. However, as circulating immune cells are overrepresented in most available transcriptional, epigenetic and pathway data sets, and CNS is either totally absent or represented only as gross anatomical regions, this may be due to ascertainment bias rather than underlying biology. This picture is starting to change as CNS data become more widely available. In the most recent IMSGC GWAS, 104/200 non-MHC risk loci overlapped eQTLs active in prefrontal cortex or immune cells. 46 These sometimes involve more than one eQTL per locus, for a total of 212 eQTLs potentially being relevant to pathogenesis. Of these, 45 are present only in prefrontal cortex and do not appear to affect gene regulation in immune cell subsets, suggesting that some effects may be restricted to CNS-resident cells (including microglia, which are part of the hematopoietic, rather than the neural, lineage).
OVERLAPS WITH OTHER AUTOIMMUNE DISEASES
As a group, the autoimmune and inflammatory diseases have proven remarkably tractable to genetic dissection in large cohorts, with several hundred risk loci now known in each disease. 68 As these results emerged, it became obvious that many loci were associated with multiple diseases and that the genes encoded in those loci fall into distinct immune pathways. 69,70 These results suggest that perturbations to key immune processes mediate risk to multiple diseases. For example, loci encoding the core components of the IL-23-mediated signalling pathway mediate risk for MS, psoriasis and Crohn's disease and those involved in IL-2-mediated signalling with rheumatoid arthritis and type I diabetes. Notably, in some cases the allele associated with increased MS risk is associated with decreased risk to another autoimmune disease. One example is rs744166 (located in an intron of the STAT3 gene on chromosome 17): the G allele is associated with increased risk in MS 38 and decreased risk in Crohn's disease. 71 However, as a result of the difficulties posed by linkage disequilibrium to fine mapping and comparing across traits discussed above, claims that GWAS associations in the same region represent shared effects must be treated with caution. Nevertheless, there are clear examples of biologically plausible mechanisms: an eQTL for ankyrin D55 in CD4 + T cells colocalising with GWAS signals for MS, rheumatoid arthritis and Crohn's disease 63 ; a specific DHS site driving risk to MS, type 1 diabetes and autoimmune thyroiditis in the MND1 locus 64 ; and that T-cell surface expression of IL-12 receptor alpha (CD25) is associated with risk variants for both MS and type 1 diabetes. 72 Such shared effects are interesting because they highlight more general processes of autoimmunity and therapies targeting them may show efficacy in multiple indications. However, those associations unique to MS may identify disease-specific biology, including CNS-relevant mechanisms. 69
FUTURE DIRECTIONS
Genome-wide association studies have proven remarkably successful in MS, with > 200 risk loci now identified. However, as discussed in this review, functional interpretation of these results remains a challenge, and translation to an understanding of pathobiology will remain a major target for the immediate future. The scale of the challenge is immense: from only one analysis, we have garnered 212 eQTLs that are likely to drive risk in 104 GWAS loci, 46 and each of these will have to be followed up experimentally, possibly in multiple cell types under multiple conditions. The current model of low-throughput, human-operated laboratory methods simply cannot accommodate this volume of hypotheses, so the coming decade will likely see the emergence of large-scale, automated assays in build-generate-test cycles to investigate each of these loci.
Looking beyond case-control association, several other aspects of the disease can also be dissected by genetic approaches. The largest single risk factor for MS is biological sex, with > 75% of patients being femalebut the causes for this discrepancy in incidence are unknown. 73 Approximately 95% of MS cases follow a relapsing-remitting pattern (RRMS), with approximately 50% of these converting to a secondary progressive form (SPMS) over time. The remaining 5% of all MS cases are of a more aggressive, primary progressive form (PPMS). We still do not understand the determinants of either PPMS or the risk factors for conversion from RRMS to SPMS. MS is also a remarkably heterogeneous disease, with some patients declining rapidly and others showing few or no symptoms for decades. 73 This clinical course is unpredictable, and no tools for prognosis currently exist. Several studies have explored the genetic basis of clinical course, age of onset and severity, although no genome-wide significant associations have been discovered. 36,37,43,[74][75][76][77] Whether more detailed disease parameters are more prone to error measurement, systematic differences across centres or simply not heritable remains to be determined, but a recent study showing that clinical scores can be predictive across centres suggests that lack of heritability is not the issue. 78 Similarly, patient response to therapy is largely unpredictable; there is no evidence to date, for example, that different patients have slightly different pathologies and would thus respond to distinct modes of therapy targeting those specific pathways, although efforts to dissect this issue are underway. One of the critical barriers to largescale genetic mapping for these secondary characteristics is an absence of data: amassing tens of thousands of cases and controls has been daunting, but retrieving detailed disease data from medical charts written in many languages and scattered across hundreds of medical centres on several continents is the herculean task that now confronts our field. Aggregating endophenotypes such as imaging metrics, electrophysiological parameters, visual disability, biomarkers and high-definition, computerised gait analysis, among others, may further assist in making heterogeneous clinical measurements more robust, although without integration may be difficult to interpret and will present further multiple testing challenges. | 5,604.2 | 2018-05-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Discourse of Online Content Moderation: Investigating Polarized User Responses to Changes in Reddit’s Quarantine Policy
Recent concerns over abusive behavior on their platforms have pressured social media companies to strengthen their content moderation policies. However, user opinions on these policies have been relatively understudied. In this paper, we present an analysis of user responses to a September 27, 2018 announcement about the quarantine policy on Reddit as a case study of to what extent the discourse on content moderation is polarized by users’ ideological viewpoint. We introduce a novel partitioning approach for characterizing user polarization based on their distribution of participation across interest subreddits. We then use automated techniques for capturing framing to examine how users with different viewpoints discuss moderation issues, finding that right-leaning users invoked censorship while left-leaning users highlighted inconsistencies on how content policies are applied. Overall, we argue for a more nuanced approach to moderation by highlighting the intersection of behavior and ideology in considering how abusive language is defined and regulated.
Introduction
In response to the rising surge of abusive behavior online, large social media platforms, such as Facebook, Twitter, and Youtube have been pressured to strengthen their stances against offensive content and increase their transparency in how content policies are enforced. Facebook, for example, first released its community standards publicly in April 2018 and has made efforts to ban white nationalist and separatist content (Stack, 2018), while Twitter announced a new policy against "dehumanizing speech" in September 2018 (Matsakis, 2018).
Nevertheless, the problem of how to define what behaviors are abusive and how these behaviors should be handled remains a challenge. One major issue in terms of defining a content policy for a major platform is that defining what abusive behavior is requires consideration of both behavior and ideology -political ideology is inextricably tied with abusive language on major platforms where sensitive discussion can occur. For example, Reddit (Statt, 2018) and Twitter (Newton, 2019) have faced recent backlash for allowing racist content to remain on their platforms over concerns of bias against right-leaning viewpoints. Prior research (Shen et al., 2018;Jiang et al., 2019) has also demonstrated that ideology can be used as a tool to challenge moderation decisions.
In this paper, we argue that ideology is inextricably tied to how abusive language is defined and regulated in real-world applications in social media. To demonstrate the role of political ideology in the problem of defining abusive language, we present the first NLP study of polarized user responses towards policy. We examine how users frame their arguments in supporting or opposing stronger moderation policies to draw insight into ideologically-related user concerns over their impact. As a case study, we focus on users' responses towards changes to the quarantine policy on Reddit. 1 Reddit provides an interesting site of study into content moderation issues due to a culture of debate over whether free speech is a principal tenet of the platform (Robertson, 2015). Here, we focus on a specific policy change to provide an in-depth analysis of the polarized stances users take.
The rest of the paper is organized as follows. First, we give an overview of related work and describe the recent Reddit quarantine policy update. Next, we present a general topic analysis of discussion surrounding the quarantine policy. We then describe how we operationalized polarization by characterizing users based on their participation across subreddits, then examine how different users frame issues within topics. Finally, we discuss the implications and limitations of our work.
Related Work
One of the primary roles of moderation in online spacesis the regulation of anti-social behaviors (Kiesler et al., 2012), such as spamming, cyberbullying, and hate speech. The design and best practices for moderating abusive content on large social media platforms, however, is a fundamentally challenging issue (Gillespie, 2018), due to the tension between providing a space for open and meaningful interaction and determining what behaviors are acceptable and how unacceptable behaviors should be handled. While social media companies, as private organizations, can legally curate content on their platforms (Robertson, 2015), cracking down on content can lead to tension with users, who may view it as setting a precedent for banning behaviors or even political ideologies in the future. Previous research Shen et al. (2018); Jiang et al. (2019), has demonstrated that tensions and backlash can arise in communities if participants perceive moderation decisions as biased against minority viewpoints, even if decisions seem "fair" after accounting for behavior.
Previous research on the effect of moderation policies has focused primarily on the effect of moderation on directly affected users. For example, Chandrasekharan et al. (2017) investigated the impact of the 2015 Reddit hateful content ban on users who participated on the banned subreddits, while Chang and Danescu-Niculescu-Mizil (2019) examined the participation trajectories of users blocked by community moderators on Wikipedia. User opinions on moderation policies, however, remains relatively understudied from a large-scale quantitative perspective, though previous work has drawn insights from structured interviews and surveys with users. Jhaver et al. (2018) interviewed both users who used blocklists on Twitter and users who have been blocked on their insights about harassment and blocking. Myers West (2018) surveyed participants on On-lineCensorship.org about their experiences with content moderation to gather insights into folk theories about how moderation policies work.
Most closely related to our work, which focuses on ideologically motivated user viewpoints, Jhaver et al. (2017) used a mixed-methods approach to investigate how users on r/KotakuInAction, a sub-reddit associated with the Gamergate movement, view free expression, harassment, and censorship within their own community. Rather than focusing on users who share certain views within a particular subreddit, however, we focus on users who responded to a Reddit-wide moderation policy change. This allows us to examine how users who have participated across a wide range of subreddits present their opinions, with the goal of understanding what elements of the debate between moderation and censorship are polarized.
Reddit Quarantine Policy Announcement
On September 27, 2018, Reddit announced changes to their quarantine policy in response to growing concerns over the visibility of offensive content on their platform. The quarantine feature allows site administrators to hide "communities that, while not prohibited, average redditors may nevertheless find highly offensive or upsetting" 2 from being searched, recommended, or monetized. While the quarantine function was initially announced in August 2015 as part of a broader initiative to address offensive content, the September announcement specifically focused on expanding use of the quarantine function. The two major aspects of the announcement were 1) a quarantine wave of 20+ communities of interest or subreddits and 2) the introduction of an appeals process for moderators of quarantined subreddits. The announcement was posted in the r/announcements subreddit, which allows users to respond to major Reddit-internal policy changes. To investigate the discourse surrounding the announcement, we collected comments that were posted in response to the r/announcements over the course of one month using the Pushshift API. 3 After filtering out 6 comments that were deleted by users or removed by moderators, as we no longer had access to the original comment texts, we then identified 13 well-known meta-bots 4 among the remaining users. Both comments by and responses to these meta-bots were removed, as they are usually formulaic and unrelated to the content of our analyses (e.g. "Good bot", complaints about bot responses), leaving us with a final announcement dataset containing 9,836 posts from 3,640 users.
Topical Analysis
Topic choice has been commonly used in NLP (Tsur et al., 2015;Field et al., 2018;Demszky et al., 2019) as a proxy for agenda-setting, the strategic highlighting of what aspects of a subject are worth discussing (McCombs, 2002). Here, we first describe our preliminary topic analysis for discovering the range of topics discussed.
Models
We used Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to construct our topics. While Structural Topic Models (STM) (Roberts et al., 2013) are popular for social science analyses for enabling document metadata to act as topic covariates, STM consistently performed worse than LDA on our data, both in topical coherence measures and human interpretability. 5 5 A potential challenge for STM for our data is the lack of global consistency in our metadata. Comments in Reddit For the LDA models, we considered each comment to be a document. Comments were tokenized using SpaCy (Honnibal and Montani, 2017) and stopwords and punctuation-only tokens were removed. We trained models with 5, 10, 15, 20, 25, 30, 40, and 50 topics. We selected the model with 10 topics for further analysis for having the highest CV coherence, which has been shown to more closely correlate with human ratings of interpretability (Röder et al., 2015) than semantic coherence (Mimno et al., 2011). When analyzing and interpreting the topics discovered, we examined both the highest weighted words and example comments associated with each topic.
Results
Table 1 presents the topics discovered by the model. The most prevalent topic (T0) in the discussion thread focuses on accessibility to quarantined subreddits. This is unsurprising, as this threads are organized in broad semi-topical hierarchical trees and threads can contain thousands of comments (Weninger, 2014). As a result, user participation on a single thread can be scattered and upvoted comments in one subthread may substantially overlap in content with downvoted comments in another. Thus, the simpler LDA model, with fewer global priors on the structure and content of the data, may have better generalization. topic directly addresses the short-term impact of the quarantine wave, such as the ability to search for and list quarantined subreddits, access to quarantined content on the mobile app, and whether quarantined content will generate ad revenue. The proportion of T0 across comments, however, is relatively low (13.6%), compared to discussion centered around the broader implications of quarantining. For example, T3: Conservative vs. Liberal Politics and T6: Far-Right/Far-Left Ideologies center around broader ideologies associated with controversial content, while T4: Censorship of Political Views/Debate, T5: Moderation/Free Speech on Social Media Platforms, and T8: Laws/Government-Level Policies discuss the legal implications of online content moderation.
One notable topic in our model was T2: Content in r/The Donald. Despite not being one of the subreddits quarantined during the quarantine wave, much of the discussion surrounding the announcement centered around The Donald, due to its prominent reputation for controversial behavior. We can see evidence of discussion about controversial behavior on The Donald, as many of the highly weighted words in the discussion of The Donald are words describing negative behaviors that have been associated with the subreddit in past research, such as propaganda/fake news (Kang, 2016), promotion of violence and racism (Squirrell, 2017), and visibility manipulation and mobilization through bots (Carman et al., 2018;Flores-Saviaga et al., 2018). The Donald is often considered an "elephant in the room" with regards to content moderation on Reddit, as the subreddit remains one of the most visible and active subreddits on the site despite its controversial reputation.
A somewhat surprising omission from the topics discovered was discussion around the new appeals process for quarantined subreddits. While the bulk of the text in the original post of the thread centered around the introduction of the appeals process, only 0.13% of the posts explicitly used the words "appeal" and "appeals" in reference to the appeals policy. The addition of an appeals process is relatively uncontroversial for increasing the transparency of quarantines and primarily affects moderators of quarantined subreddits. This suggests that what is driving discussion within the thread are the more controversial issues that may have a personal, ideological impact on users. As a result, we expect that users with differing view-points may highlight different aspects within the general topics discussed here.
Characterizing User Participation on Reddit
In order to better understand how different users highlight or frame particular aspects within each topic (Entman, 2007;Nguyen et al., 2013;Card et al., 2016), we first want to characterize the types of users who participated in the r/announcements discussion. Because subreddits on Reddit represent interest-based subcommunities, previous work has used participation across subreddits as a signal of user interests or viewpoint (Olson and Neal, 2015;Chandrasekharan et al., 2017). We follow in the lines of this work by characterizing users using their participation in subreddits prior to the announcement. In this section, we describe a graph-partitioning approach for characterizing common interests across subreddits.
Constructing the Interest Graph
For each user who participated in the r/announcements quarantine thread, we collect all submissions and comments posted by the user in the month preceding the quarantine policy update (August 27 -September 26). We then counted how many times each user posted in each subreddit. In order to ensure that users both showed sustained interest in a subreddit and to limit the number of users who participate in subreddits to challenge the widely held view of a subreddit, we consider a user to be interested in a subreddit if they have posted at least 3 times 6 in the preceding month with a positive score.
To capture similarities between the subreddits users participate in, we then cluster them by performing graph partitioning over a subreddit interest graph (Olson and Neal, 2015). We construct a subreddit interest graph by drawing an undirected edge e ij between two subreddit nodes i and j if the same user participates in both subreddits. A ij , the weight of e ij , is set equal to the number of users in common between i and j. We reduce the number of edges in the graph by setting a global edge threshold A ij >= 5. 7
Category
Central
Louvain Community Detection
We use the Louvain community detection algorithm (Blondel et al., 2008) to define a partition over the constructed subreddit interest graph. The objective of the Louvain algorithm is to maximize the modularity of a partition, which measures the density of links within vs. between communities. The Louvain modularity Q is defined as where k i = j A ij is the sum of the weights of edges attached to node i, δ(c i , c j ) = 1 if nodes i and j belong to the same community, 0 otherwise, and m = 1 2 i,j A ij . Because ∆Q from moving node i from one community to another is easy to compute, the algorithm finds the best partition through a simple two-stage process: 1. Assign each node to its own community 2. Repeat until convergence (a) Iterate through nodes i, moving i into the community that gives the highest increase in modularity, until convergence. (b) Construct new graph where nodes are communities and edge weights between communities are equal to sum of edge weights between lower-level nodes.
We use a resolution factor of 1.0 and select the highest modularity partition of the dendrogram for our subreddit categories. The resulting 5 categories are shown in Table 2. r/announcements thread. As a result, a significance-based method of thresholding edges can give uneven results based on how many users were sampled from each subreddit.
Evaluation
To ensure that the 5 discovered subreddit categories gave us high-quality and coherent notions of user interests, we run a human evaluation of the discovered categories using a subreddit intrusion task, analogous to word intrusion tasks used for evaluating topic model interpretability (Chang et al., 2009). The subreddit intrusion task was presented to two native English speaker annotators who used Reddit on a daily basis to ensure familiarity with the types of user interests on Reddit. Given a set of four subreddits belonging to one of the categories, and an "intruder" subreddit from another category, annotators were asked to identify the intruder. Annotators were provided with the description and 5 highly-ranked thread titles for each subreddit for additional context in determining the intruder. For each category, all the other categories were selected as an intruder instance 4 times, giving us 16 sets per category. After completing the intrusion task, the annotators discussed their decision-making process during the intrusion task and assigned labels to the five discovered subreddit categories. Results for the intrusion task for each category are included in Table 2. For all the subreddit categories except C3: Memes, the annotators achieved moderate-to-high agreement and performed significantly better than a random baseline. The category of C3: Memes is more abstract compared to the other categories and contains many subreddits that are not easily identifiable by name and description alone. Nevertheless, the annotators were able to reach an agreement on the interests covered by C3 in discussion after the intrusion task.
From these discovered subreddit categories, for each user, we calculate their distribution of par-ticipation across the five categories and an additional category for unidentified subreddits. One limitation of considering user viewpoints based on these categories, however, is that only C2: Right-Leaning and C4: Left-Leaning are directly related to political viewpoint. Rather, these five categories more closely represent shared sets of interests or personas users can engage in. While this limits what we can say in terms of polarization across the traditional definitions of left-leaning vs. rightleaning political ideologies, we argue that considering user participation in these interest categories is more representative of how users on Reddit engage in politics across the site.
Analyzing Polarized Viewpoints Towards the Quarantine Policy
In the previous sections, we first identified the general topics discussed within the r/announcements thread about the quarantine policy. We then characterized users who participated in the r/announcements thread based on their distribution of participation across different subreddits in the month preceding the announcement. In this section, we examine the relationship between a user's ideological views and how they strategically highlight particular aspects of each topic. Rather than using a static left vs. right framework for operationalizing user viewpoint, we examine how users highlight different aspects as they move along the left-right spectrum. We then analyze the relationship between users' polarization and their framing within the topics identified in Section 3 in an unsupervised manner.
User Polarization
While we can label users strictly as left vs. right based on whether they spend more of their time on left-leaning and right-leaning subreddits in their participation distribution, we can get a more nuanced view of the differences between left-leaning and right-leaning users by additionally considering how polarized users are along the left-right spectrum. Rather than using a simple majoritybased assignment, we introduce a polarization margin hyperparameter β that controls for how skewed a user must be towards one side to be considered a left-leaning or right-leaning user. For a given β, we can assign the class of each user u i based on their participation distribution p: (2) β = 0 is equal to the majority case. For our remaining analyses on agenda-setting and framing, we compare results for β = {0, 0.1, 0.25}. Figure 1 shows the prevalence of each topic across left-leaning and right-leaning users at differing values of β. We found that right-leaning users were significantly more likely to invoke T0: Accessibility of Quarantined Content, T4: Censor-ship of Political Views/Debate, and T5: Moderation/Free Speech on Social Media for all values of β. The high prevalence T0 is unsurprising, as the majority of the newly quarantined subreddits (listed in the Supplementary Material) were associated with conservative views and users. Thus, accessibility to the newly quarantined subreddits would be a concern for many right-leaning users. The increased prevalence of topics T4 and T5, which are focused on the relationship between content moderation online spaces and censorship, suggests that right-leaning users may be challenging the ability or approach of Reddit administrators to expand the quarantine policy as a form of censorship. Finally, the higher prevalence of T7: Personal Experience topic, which is focused on users' personal participation on the quarantined or other controversial subreddits, suggests that to some extent, right-leaning users are leaning into their participation on controversial subreddits in their responses towards the announcement.
Polarized Agenda-Setting
Across all values of β, left-leaning users use T6: Far-Right/Far-Left Ideologies significantly more than right-leaning users. This difference increases as the polarization margin β increases. This suggests that left-leaning users were likely to invoke the controversial behaviors associated with the extremism, particularly the far-right. Interestingly, while extremist ideology is more likely to be invoked by left-leaning users, there was no significant difference in prevalence between left-leaning and right-leaning users for discussion of US politics (T3: Conservative vs. Liberal Politics).
Overall, we note that while the relative prevalence of topics for left-leaning and right-leaning users generally remained the same at different values of β, the major differences between leftleaning and right-leaning users became larger as we increase the polarity margin.
Within-Topic Framing
We expect users who have different positions to highlight different aspects of each topic. To separate out the salient words within each topic t for left-leaning and right-leaning users, for each word w, we use the z-score of the log-odds ratio with a Dirichlet prior (Monroe et al., 2008) as a salience score, δ where n c(t) is the number of words in corpus c, y w is the count of word w in corpus c(t), l(t) and r(t) are the left-leaning and right-leaning corpora for topic t, and α t 0 and α t w are corpus and word priors from a background corpus. We set the Dirichlet prior by using the posts from "neutral" users as a background corpus, with the size and count of words in the background corpus as the corpus and word priors respectively.. We extend the salience score to bigrams and trigrams and sampled posts containing the top 50 salient terms for each topic and faction to analyze framing strategies at different levels of polarization.
First, we found that, across topics, right-leaning users framed the issues surrounding content moderation in terms of censorship and suppression, while left-leaning users tended to frame issues in terms of consistency. For example, in T4: Censorship of Political Views/Debate, right-leaning users consistently used terms such as "silencing", "echo chamber", and "censorship" in reference to impact of the announcement, directly accusing the quarantine policy of being used to silence political viewpoints. This supports our hypothesis from Section 5.2 that right-leaning users invoked T4 to criticize the quarantine policy as a form of censorship. On the other hand, when left-leaning users invoked T4, they used terms such as "picking and choosing", "bad faith" in reference to uneven and insufficient application of the policy. Left-leaning users also often compared the quarantine feature to "bans" in T4, arguing that many subreddits quarantined under the announcement shared similarities with subreddits that were banned in the past.
We see similar patterns in T5: Moderation/Free Speech on Social Media, though many of the salient terms used are specific to internet platforms. Right-leaning users emphasize the ideal of a free and open internet, using terms such as "open platforms" and invoking the name of "Aaron Swartz", the late Reddit co-founder known for his anti-censorship views. Left-leaning users, on the other hand, consistently highlighted that private organizations like Reddit ("private company", "privately owned") had the right to remove or hide content in violation of their policies.
One of the more salient framing strategies related to consistency by left-leaning users is the comparison of quarantines with Reddit's handling of pornographic content, primarily in T0: Accessibility of Quarantined Content and T8: Laws/Government-level Policies. While opinions about how to handle porn on Reddit are mixed, porn is commonly used as an analogue for many of the consistency issues involved with quarantining subreddits with abusive language. For example, some users argue that the intent and functionality of quarantining should be similar to the notsafe-for-work (NSFW) filtering system already in place for pornographic subreddits, which does not explicitly block a subreddit from being searched or shown in r/all. Others compare the liability of hosting pornography vs. other forms of offensive content, such as violence or hate speech.
We also found that across factions, users tried to highlight controversial, even violent, behavior by users on the opposite side. In Section 5.2, while we suggested that left-leaning users invoked T6: Far-Right/Far-Left Ideologies to highlight controversial behaviors in far-right subreddits, T6 is also associated with talk surrounding the quarantine of r/FULLCOMMUNISM, described as a "selfaware socialist satire sub". Thus, invocation of T6 may also be reflective of their personal investment in participating in a quarantined subreddit. We see, however, that discussions about "socialism" and "communism" are highly salient for rightleaning users, who commonly accused subreddits associated with these ideologies of supporting dictatorships and inciting violence. Similarly, for left-leaning users,"nazi", "ethnic", "fascist", and "genocide" are highly salient in T6, which were used to argue that many right-leaning subreddits, quarantined or not, expressed racist views, supported fascism, and denied genocides.
The framing strategy of highlighting controversial behavior from the opposing viewpoint was also apparent in T2: Content in r/The Donald. While the most salient terms for right-leaning users focused on the how The Donald governs itself ("admins", "moderators", "users", "rules"), left-leaning users explicitly emphasized that the Donald has content encouraging violence ("kill", "doxxing", "encouraged", "attacking", "spread"). One of the most common associations between The Donald and incitement of violence cited by left-leaning users was the case of u/Seattle4Truth, a The Donald user, who murdered his own father (Neiwert, 2017).
Like with our analysis of topic choice, the specific strategies on each side remained generally consistent at the different levels of polarity.
Discussion
From our analysis, we find that right-leaning users tend to frame the issues surrounding content moderation in terms of censorship of political viewpoints, while left-leaning users highlight the issues surrounding consistency in how moderation is applied, especially in regards to unmoderated offensive content. On the surface, these findings seem to reflect stereotypes about how freedom of expression is viewed by liberals and conservatives offline in the debate over campus free speech (Friedman, 2019). However, we argue that the emphasis on censorship vs. consistency is not entirely reflective of stereotypical, surface-level differences between conservative and liberal viewpoints on the tension between moderation and free speech. Both left-leaning and right-leaning users, for example, used statements decrying both hate speech and censorship and highlighted concerns with how the Reddit quarantine policy was implemented. Instead, we argue that these strategies are employed as a defense of a user's legitimate participation on Reddit. While previous work has examined the use of free speech discourse as a defense against ego or expressive threat (White et al., 2017), further exploration is needed into why the specific strategies of censorship vs. consistency are applied in the context of online discussion.
As an example for needing more nuance in understanding how opinions on policy are used strategically in argumentation, one common framing strategy we see across both sides is the association of opposing viewpoints with the incitement or encouragement of violence. The question of whether something incites or encourages violence is important, as the encouragement and incitement of violence is explicitly prohibited by Reddit's content policy. 8 While "encouraging and inciting violence" provides a more concrete frame of judgment than broader definitions of offensive language, there still is ambiguity in terms of how administrators should respond to content that violates Reddit policy, especially on the level of broader communities. At the level of subreddits, it is unclear to what extent a community has to demonstrate violent behavior before the administrators take action to quarantine or ban a subreddit. Many users 9 argue that this ambiguity allows for the Reddit administration to protect popular but controversial subreddits like The Donald.
Limitations and Future Work
Our work in this paper is focused on polarized responses to a specific content moderation policy change on Reddit. While we perform an in-depth analysis of the issues raised by the quarantine policy change, our findings may be specific to the context surrounding this particular event, such as the majority of subreddits quarantined in conjunction with the announcement being right-leaning. A longitudinal analysis, where we examine responses to announcements affecting content moderation on Reddit over time may give us a more general view of how users on Reddit talk about free speech and how the discourse of free speech on Reddit has evolved in response to major events. As of June 2019, there have not been other major notifications regarding moderation policy changes in the r/announcements subreddit since the quarantine policy changes. Nevertheless, finding textual signals of user opinions for other moderationrelated events, like the progression and eventual banning of quarantined subreddits (e.g. CringeAnarchy, watchpeopledie), remains an interesting area of study.
While we introduced the polarization margin as a method for capturing differences beyond a static left vs. right ideological assignment over users, we found very few differences between users in the same class at different levels of polarization. One limitation of our approach, however, is that we still rely on a hard left-right distinction at the different values of polarization margin β. Relaxing the assumption that users must be assigned to a class for our topic choice and salience analyses and instead 8 https://www.redditinc.com/policies/ content-policy 9 See r/AgainstHateSubreddits, which tracks behaviors across subreddits that violate Reddit's content policy.
using the raw distribution of participation across all subreddit categories may give us better insight into the range of users' framing strategies across a wider, more nuanced range viewpoints.
Ethical Considerations
The investigation of the discourse surrounding the Reddit quarantine policy requires us to handle sensitive information related to users' political leanings. To limit the impact of this study on users' privacy and participation on Reddit (Fiesler and Proferes, 2018), usernames were only used to collect user activity outside of the r/announcements thread. After data collection, all usernames were anonymized by replacement with a random numeric id. Additionally, this study focuses on the relationship between discussion about moderation and polarization in aggregate. Though individual researchers viewed example posts, these posts were not matched with individual users by either username or id. Finally, while the full anonymized data from the r/announcements thread is publicly available 10 , we only release the user distribution across subreddit categories to prevent the user tracking across subreddits.
Conclusion
In this paper, we used techniques for examining agenda-setting and framing to investigate how users discuss their opinions on an update to Reddit's quarantine policy. We presented a novel approach for operationalizing user polarization for our framing analyses, finding that as a whole, right-leaning users tended to invoke censorship while left-leaning users tended to invoke consistency in how policies are applied. While this seems to reflect stereotypes about how freedom of expression is viewed by conservatives and liberals, we argue for a more nuanced view of formalizing differences in how users frame their opinions about policy. Overall, this work builds towards understanding the relationship between ideology and policy with regards to offensive language. | 7,112 | 2019-01-01T00:00:00.000 | [
"Computer Science",
"Political Science"
] |
A Novel S100 Family-Based Signature Associated with Prognosis and Immune Microenvironment in Glioma
Background Glioma is the most common central nervous system (CNS) cancer with a short survival period and a poor prognosis. The S100 family gene, comprising 25 members, relates to diverse biological processes of human malignancies. Nonetheless, the significance of S100 genes in predicting the prognosis of glioma remains largely unclear. We aimed to build an S100 family-based signature for glioma prognosis. Methods We downloaded 665 and 313 glioma patients, respectively, from The Cancer Genome Atlas (TCGA) and Chinese Glioma Genome Atlas (CGGA) database with RNAseq data and clinical information. This study established a prognostic signature based on the S100 family genes through multivariate COX and LASSO regression. The Kaplan–Meier curve was plotted to compare overall survival (OS) among groups, whereas Receiver Operating Characteristic (ROC) analysis was performed to evaluate model accuracy. A representative gene S100B was further verified by in vitro experiments. Results An S100 family-based signature comprising 5 genes was constructed to predict the glioma that stratified TCGA-derived cases as a low- or high-risk group, whereas the significance of prognosis was verified based on CGGA-derived cases. Kaplan–Meier analysis revealed that the high-risk group was associated with the dismal prognosis. Furthermore, the S100 family-based signature was proved to be closely related to immune microenvironment. In vitro analysis showed S100B gene in the signature promoted glioblastoma (GBM) cell proliferation and migration. Conclusions We constructed and verified a novel S100 family-based signature associated with tumor immune microenvironment (TIME), which may shed novel light on the glioma diagnosis and treatment.
Introduction
Glioma is the most common type of human primary brain cancer, which accounts for approximately 30% of all brain cancer occurrences [1]. Glioma can be divided into low (I-II) or high (III-IV) grades based on the World Health Organization (WHO) criteria. It is difficult to entirely remove tumor tissue during surgery due to the high invasion, infinite proliferation, diffuse infiltration and lack of a clearly boundary of high-grade glioma [2]. Despite advances in surgery, chemotherapy and radiotherapy, glioma is still associated with a poor prognosis, and its median survival is as short as <15 months [3]. However, these therapeutic strategies are limited by drug resistance and tumor recurrence, which are influenced by a complicated gene regulatory network. erefore, identification of reliable targets and prognostic biomarkers for glioma is urgently required. e S100 family is a category of low-molecular-weight (10-14 kda), acidic, calcium-binding protein with an EFhand motif that was first identified from the bovine brain in 1965 by Blake W. Moore [4]. Currently, 25 family members have been described, 16 are clustered together on chromosome 1q21, a locus susceptible to genomic rearrangements in malignant tumors [5]. S100 proteins participate in regulating some cell processes, like proliferation, differentiation, apoptosis, and immune responses. It has also become evident that many specific S100 genes are abnormally expressed in some human tumors, facilitating cancer genesis and development [6]. S100A4, S100B, and S100P, for example, inhibit the phosphorylation of p53 and subsequently attenuate the tumor-suppressive ability of p53 [7,8]. S100A8/S100A9 activates the MAPK pathway to promote the proliferation of breast cancer (BC) [9]. Increased S100A11 expression has been observed in lung cancer, which activates Wnt/β-catenin pathways to facilitate the development of drug resistance and cancer metastasis [10]. Additionally, a number of S100 proteins could be identified as molecular biomarkers to diagnose or predict a specific cancer [11].
is study focused on developing a prognostic nomogram based on the S100 family members to explore the clinical significance of this family for glioma prognosis. e prognostic values and expression profiles of the S100 family in glioma samples were comprehensively evaluated using public resources and bioinformatics analysis. We identified five signature-related genes that are associated with the survival of glioma patients; besides, multiple tumor-related pathways were enriched into the high-risk group. Our results indicate that S100 family-based signature may play a critical role in glioma progression and could be considered as prognostic markers and therapeutic targets for glioma in the future.
Acquisition of Glioma Datasets. Clinical and FPKM
RNAseq data of 703 glioma cases were obtained from TCGA database (https://portal.gdc.cancer.gov/) into the training set. Similarly, we also obtained 325 cases from the CGGA database (http://www.cgga.org.cn) into the validation set. According to patient ID, this study compared clinical features of patients with corresponding transcriptome data. Samples were removed if the data did not match. A total of 665 and 313 patients with complete clinical data were finally selected from TCGA and CGGA database for the next analysis, respectively.
Construction and Verification of a Risk Score Prognostic
Model Based on S100 Gene Family Members. A Cox proportional hazard regression model was constructed to estimate the prognosis of glioma cases in the TCGA training set. Furthermore, the model's prognostic performance was validated in the CGGA validation set. Firstly, the candidate S100 family genes related to prognosis were identified by univariate Cox regression through "survival" package upon the threshold of P < 0.05 [12]. Secondly, overfitting genes were removed through LASSO regression via R package "glmnet" function [13]. irdly, R package "glmnet" function was also utilized to build a prognosis prediction nomogram by multivariate Cox proportional hazard regression [14]. e final risk score prognostic model was established with the following formula: (1) Here, Coef j stands for coefficient of multivariate Cox regression for gene j; n represents the overall hub gene number; X j indicates relative gene j expression within the model.
For exploring the significance of the risk score model in predicting prognosis, glioma cases were classified as a lowor high-risk group according to median risk score value, with high-risk group having a poor prognostic outcome.
en Kaplan-Meier curves were plotted to analyze the OS of two groups of glioma patients through the log-rank test. e sensitivity and specificity of the constructed nomogram were assessed through determining 1-, 3-, and 5-year area under the time-dependent ROC curve (AUC) values using the survival ROC R package, with an AUC > 0.70 denoting a good predictive value.
Integration of Protein-Protein Interaction (PPI) Network and Identification of Hub Genes.
is study constructed the protein-protein interaction (PPI) network using the STRING database (http://www.string-db.org/). Cytoscape (https://cytoscape.org/) is often used for visualizing the complicated network and integrating them with attribute data. In the present work, Cytoscape was utilized for building the PPI network and for analyzing the relationships among S100 family members. Following that, the Maximal Clique Centrality (MCC) algorithm in the Cytoscape software (v 3.7.0) was employed to identify hub genes.
Construction and Evaluation of the Nomogram.
To provide an approach for quantitatively analyzing the OS of glioma, we used the "rms" R packages to construct a nomogram based on clinical variables and the prognosis signature. A calibration curve [15] was plotted to evaluate the nomogram prediction performance by analyzing the consistency of predicted values with actual measurements.
e present work carried out GSEA for comparing the biological functions and pathways related to signature-related genes between low-and high-risk groups from both TCGA and CGGA data sets by the use of GSEA v4.0.3 (https://www.gsea-msigdb. org/). In line with GSEA User Guide, the significant gene sets were selected at the thresholds of FDR q < 0.25, NOM p < 0.05, and |NES| > 1.
Assessment of Immune Cell Type Fractions.
e abundance of immune cell type fraction between low-and highrisk score groups was estimated by CIBERSORT (https:// cibersort.stanford.edu/) [16]. CIBERSORT is a new approach that extensively adopted to characterize cellular components in composite tissues based on gene expression profiling data within cancers, and it can obtain ground truth estimates with high consistency in diverse cancer types. LM22, a white blood cell (WBC) gene signature matrix involving 547 genes, was adopted for distinguishing 22 kinds of tumor-infiltrating immune cells (TIICs), such as regulatory T cells (Tregs), T cells, B cells, natural killer (NK) cells, mast cells, dendritic cells (DCs), monocytes, neutrophils, eosinophils, and macrophages.
RNA Extraction and qRT-PCR.
Total RNA was extracted using Trizol reagent (Invitrogen, America), and cDNAs were synthesized using HiScript Synthesis kit (Vazyme, China). Quantitative real-time PCR (qRT-PCR) was then conducted on a StepOnePlus Real-Time PCR system (Applied Biosystems, CA, US) using the Fast SYBR Green Master Mix (Roche, America). Primer sequences in this study are detailed in Table S1. 2.9. Western Blotting. Cell protein was extracted using RIPA lysis buffer (P0013D, Beyotime, China). Equal amounts of protein samples were separated on 12.5% SDS-PAGE gels, which were then electrotransferred onto nitrocellulose (NC) membranes (Pall Corporation, USA). e membranes were blocked for 2 h at room temperature with 5% nonfat milk, then incubated overnight at 4°C with a primary antibody (against S100B, Abcam), followed by the corresponding secondary antibody for 2 h.
CCK-8 and EdU Proliferation
Assay. Cell counting kit-8 (CCK-8) and 5-ethynyl-20-deoxyuridine (EdU) assays were conducted to evaluate the cells' proliferative ability. Cells were seeded in 96-well plates at a density of 2000 cells per well overnight, and cell growth was assayed at different periods utilizing CCK-8 kit (C0038, Beyotime, China) based on the instruction manual. Absorbance at 450 nm was determined by enzyme labeling ( ermo Scientific Multiskan FC, USA). e EdU test was carried out using a Cell-Light EdU Apollo 567 In Vitro Imaging Kit (Ribobio, China) following the manufacturer's protocol. Cells were first stained with 50 μM EdU for 2 hours, then fixed with 4% paraformaldehyde and permeabilizated with 0.5% Triton X-100. After three washes with PBS, cells were incubated with 1 × Apollo ® for 30 min, followed by DAPI staining. e EdU-positive cells were eventually viewed by fluorescence microscopy (Olympus, Japan).
Transwell Invasion Assay.
e Transwell invasion test was carried out in the 24-well Transwell chambers (Corning, USA) precoated with Matrigel (BD Biosciences, USA). e top chambers were seeded with about 5 × 10 4 cells in serumfree DMEM media, whereas the bottom chambers were filled with DMEM containing 10% FBS. After 24 h of incubation, the penetrated cells were fixed with 4% methanol, then stained with 0.1% crystal violet. Treated cells in each well were finally photographed at random and counted under an inverted microscope (Nikon, Japan).
Data
Analysis. GraphPad Prism 5.0 (GraphPad Software, Inc., San Diego, CA, USA) or R software (version 4.0.3) was utilized for statistical analysis. Differences between two groups were compared by Mann-Whitney U tests or Students T-tests, whereas those across numerous groups were compared by Kruskal-Wallis H tests or oneway ANOVA. Associations of clinical features with risk scores were analyzed by Fisher's exact test and chi-square test. Survival data were analyzed by Kaplan-Meier analysis.
e influence of risk score on OS was evaluated through univariate as well as multivariate Cox regression. e prognosis prediction performance of the risk model was assessed through ROC curve analysis. Each experiment was repeated thrice, and results were expressed as mean ± SD. * P < 0.05, * * P < 0.01, and * * * P < 0.001 were considered significant.
3.1.
Construction of the S100 Family-Based Signature of Glioma in the TCGA Cohort. e flow chart of this study is shown in Figure 1. A total of 25 genes of S100 family were retrieved from the previous literature and TCGA/CGGA databases [6,17]. To better explore the interaction among these genes, we established the PPI network comprising 22 nodes and 84 edges (Figure 2(a)). e 10 genes with the highest MCC score were identified by the STRING database and Cytoscape software (Figure 2(b)), suggesting their important role in human cancers.
Patients were classified as a low-or high-risk group according to the median risk score value. Figure
Independent Prognostic
Value of the Five-S100 Family Gene Signature. We performed univariate and multivariate Cox regression to determine whether the five-S100 family gene signature could be independent of other clinical parameters (age, gender, WHO grade, and risk score) as a predictor for patients with glioma. As revealed by univariate analysis, age, risk score value and grade were significantly related to patients' OS in both data sets (P < 0.001), only gender was not (Figures 4(a) and 4(c)). Upon multivariate Cox regression, age, risk score value and grade were the independent factors to predict OS for TCGA-derived patients (P < 0.001), whereas only grade and risk score remained statistically significant in the CGGA cohort (P < 0.001; Figures 4(b) and 4(d)). e above findings suggested that our constructed model served as an independent predictor for the prognosis of glioma patients.
Nomogram Analysis.
is study constructed the nomogram based on risk score values and clinical features of patients for predicting their risk of survival using "rms" package in R software (Figure 4(e)). e nomogram integrated age, grade, and risk score, and each factor was employed for obtaining relevant score summary, as well as overall score for respective samples. e higher scores indicated a worse prognosis. In the calibration curve, the predictive and actual survival showed a good consistency in 1-, 3-, and 5-year OS (Figure 4(f )). e nomogram model passed the PH assumption and had no statistically significant deviation (P < 0.05) ( Figure S4). In brief, this nomogram model performed well in predicting glioma survival.
GSEA Identifies S100 Family-Based Signature-Related Signaling Pathways.
e constructed S100 family-based signature had potent stratification ability in the prediction of glioma OS, promoting us to investigate the related signal transduction pathways. GSEA was conducted to compare the Gene expression profiling data was downloaded from TCGA and CGGA Identification of 25 S100 family genes Training set (TCGA) Test set (CGGA) A five S100 family genes-based risk signature Kaplan-Meier analysis ROC analysis Tumor immune microenvironment analysis Nomogram establishment
Immune Landscape between Low-and High-Risk Glioma
Patients. More and more studies indicate that tumor development is also affected by the tumor immune microenvironment (TIME). It is necessary to examine the impact of prediction nomogram on TIICs in glioma patients. CIBERSORT with the LM22 signature matrix was adopted for estimating the heterogeneities in 22 kinds of TIIC infiltrating levels in both groups. As shown in Figure 6, high-risk patients with glioma showed markedly increased M2 macrophage and Treg proportions, whereas apparently decreased activated Mast cell proportion in both CGGA and TCGA cohorts. Additionally, high-risk group had a markedly increased proportion of resting Mast cells compared with low-risk group in TCGA, but there was no significant difference in CGGA. Based on the Human Protein Atlas (HPA) database (http:// www.proteinatlas.org), we also found that M2-related molecular (CD163) and Treg-related makers (CD25, STAT5B, and IL-10) were highly expressed in GBM tissues ( Figure S1). e qRT-PCR results revealed that two important immune checkpoints, TGF-β and IL-10, were significantly higher in GBM cells than in NHA control ( Figure S2).
e above findings suggested that high-risk patients were more likely to develop the immunosuppressive microenvironment by upregulating immune checkpoints and immunosuppressive cytokines.
Verification of the Target Genes.
Furthermore, the databases CGGA and GEPIA were adopted to verify the relationship between the expression of those 5 signaturerelated genes and patient survival. For the GEPIA-derived cohort, the expression levels of S100A11, S100A16, and S100B in low-grade glioma (LGG) and GBM samples increased compared with those in noncarcinoma samples, while S10013 in LGG tissue was upregulated in normal tissues ( Figure 8). In TCGA and CGGA database, we performed a series of survival analyses to reveal the prognostic value of target genes of the signature in glioma patients. According to Figure 9, the group with high expression of S100A11, S100A13, S100A16, S100B, and S100PBP showed shorter OS relative to the group with low expression for all patients with glioma in the TCGA database (P < 0.001). In the CCGA database, the OS rate in patients with high levels of S10011, S100A16, and S100B was markedly worse than those with low expression (P < 0.001).
en, a subgroup survival analysis was also performed for patients with LGG and HGG ( Figure S3). In general, upregulated target gene mRNA expression of the S100 family-based signature predicted dismal prognostic outcomes.
S100B Mediates GBM Cell Growth and Migration.
To further validate this prognostic model, S100B gene was selected as a representative to carry out the functional experiments for the following reasons. Firstly, the S100A11, S100A16, and S100B expression levels were significantly upregulated in glioma tissues than in normal controls from GEPIA database. Secondly, the survival analysis further 6 S100PBP S100B S100A13 S100A11 S100A16 type type high low (i) S100PBP S100B S100A13 S100A11 S100A16 type type high low Journal of Oncology demonstrated the significant prognostic power of these 3 signature-related genes in TCGA and CGGA cohorts. en, the qRT-PCR analysis showed that S100B expression was increased more markedly in GBM cell lines (Figure 10(a)), indicating that S100B may serve as an important prognostic biomarker. S100B expression level was relatively higher in U251 cells than in T98G cells. erefore, we knocked down the S100B expression in U251 cells by si-S100B transfection, and upregulated S100B in T98G cells (Figure 10(b)). is study conducted CCK-8 and EdU assays for detecting S100B's effect on the proliferation of GBM cells. As revealed by CCK-8 assay, downregulated S100B expression markedly inhibited U251 cell viability, with its overexpression in T98G cells and yielded the opposite effect (Figure 10(c)). According to EdU assay, inhibiting S100B dramatically decreased the percentage of EdU-positive U251 cells, and overexpressing S100B increased EdU-positive T98G cells (Figure 10(d)). Also, this study conducted a transwell assay for investigating S100B's function in GBM migration and invasion. Silencing S100B expression by siRNA obviously decrease the number of invaded cells, whereas upregulating S100B resulted in more invaded cells (Figure 10(e)). e findings suggested that S100B promoted GBM cell growth and migration.
Discussion
Glioma is the most common type of brain tumor originating from neuroglial progenitor cells. Typically, the blood-brain barrier (BBB), comprising capillaries, basilar membranes, and endothelial cells, is a major reason for limiting the progress of antitumor drugs. With the recent advances in high-throughput technology, identifying novel prognostic markers and therapeutic targets may help improve glioma survival. Many S100 family proteins showed high expression levels within the nervous system [18]. We speculated that S100 family genes might exhibit a potent prognostic value for glioma patients.
A growing amount of open-sourced online platforms and genomic data have made it possible to explore family gene expression levels in glioma as well as the corresponding clinical significance. e present work analyzed S100 family genes and constructed a robust nomogram on this basis to predict glioma OS. Using Cox hazards and LASSO regression, five S100 family genes were identified for the prognostic model. e nomogram reliability, prediction performance, and stability were next analyzed and validated. As a result, the constructed signature could discriminate glioma prognosis with high accuracy. In addition, we created a nomogram consisting of clinical features and risk scores to present a personalized survival prediction for each patient with glioma. e calibration curves showed that the predicted patient survival was close to the actual measurement, indicating good predictive effects of the nomogram for survival time. Our signature, therefore, has great potential to be a clinical prognostic and predictive biomarker of glioma.
By focusing on the five signature-related genes of S100 family in this study, most of which have important functions in cancer genesis and development. S100A11, is also called calgizzarin or S100C, is localized both in the nucleus and in the cytoplasm. S100A11 binds to RAGE receptor thereby increasing epidermal growth factor (EGF) protein expression and stimulating cell growth [19]. It has been reported that S100A11 shows overexpression in many cancer types, including glioblastoma (GBM) [20][21][22][23]. S100A13 is identified to be involved in the nonclassical protein export, containing fibroblast growth factor (FGF), interleukin-1α (IL-1α), and synaptotagmins [24]. Growing evidence showed that S100A13 has a strong relationship with tumorigenesis [25][26][27], and it has been proved as a novel biomarker for papillary thyroid carcinomas (PTC) [28,29]. Interestingly, S100A13 shows differential expression in brain development process, which suggests that it is of great importance to maintain the function of nervous system [30]. S100A16 is a recently discovered member of the S100 family obtained from astrocytoma, which is structurally more stable than other S100 genes [31]. S100A16 overexpression is detected in different cancers, including pancreas cancer, lung cancer, ovarian cancer, bladder cancer, and thyroid gland cancers [32]. S100B is a nervous system-specific protein that is mainly secreted from astrocytes. S100B is widely involved in the regulation of phosphorylation, protein degradation, cellular proliferation, and differentiation. Additionally, serum of S100B is used as the diagnostic biomarker for melanoma for a long time, which has also been adopted to be the candidate predicting factor for lung cancer brain metastasis recently [33,34]. S100PBP is differentially expressed in various organs and disease states, which is dependent on tissue and cancer type. In breast cancer, S100PBP expression was markedly related to patient prognosis and different metastatic sites [35]. S100PBP level is suggested to be related to pancreatic ductal adenocarcinoma [36]. e biological roles of these five genes in cancer have partially provided clues for understanding the diagnostic and prognostic significance of the risk model in glioma. In our study, we demonstrated that most of these genes show high expression within GBM, and glioma patients with high expression levels have a shorter survival time than those with low expression. Moreover, we chose S100B as representative in subsequent functional analyses. e results showed that S100B expression markedly increased within GBM cells, and S100B promoted GBM cells growth, invasion, and migration. More investigations are needed to explore the molecular mechanisms underlying S100B and roles of other markers in the model in GBM.
Functional annotation of the S100 family-based signature via GSEA showed that there are a series of biological functions, such as PI3K-AKT-MTOR signaling, angiogenesis, apoptosis, epithelial-mesenchymal transition, and glioma stem cell pathways. It is worth mentioning here that apart from cancer-associated pathway, cancer stem cells (CSCs) are highly enriched in the high-risk group. CSCs are known as a rare population of self-renewing tumor cells, which contribute mainly to tumor recurrence and resistant to therapy [37,38]. ese findings indicated that high-risk patients based on the prognostic signature are more predisposed to tumorigenesis, recurrence, and resistance. In the future, more functional experiments are expected to explore the role of these five signature-related genes in glioma stem cells.
In addition, accumulating evidence suggests that TIME exerts an important part in glioma progress and development [39]. As a result, this study also examined the association of TIIC infiltration levels with risk score value in the prognosis mode. e high-risk group showed increased fractions of Treg cells and M2 macrophages phenotype. Macrophages can be divided into classically activated M1 macrophages and alternatively activated M2 macrophages. It is clear that M1 macrophages are involved in the antitumor immune response, and M2 macrophages are mainly responsible for tumor initiation, growth, and metastasis. It is also noted that Treg could promote tumor progression by specifically inhibiting tumor-reactive T cells [40]. Consistent Figure 8: Expression of five signature-related genes based GEPIA database. (a) S100A11, S100A13, S100A16, S100B, and S100PBP levels between low-grade glioma (LGG) and normal tissues. (b) Gene levels within glioblastoma (GBM) and noncarcinoma samples. Cytokine levels in tumor immunosuppressive microenvironment between low-and high-risk patients. * P < 0.05, * * P < 0.01, * * * P < 0.001, and * * * * P < 0.0001. Figure 9: Prognosis of five signature-related genes based on TCGA and CGGA database. (a) Prognostic value of S100A11, S100A13, S100A16, S100B, and S100PBP in glioma from TCGA database. (b) Prognostic value of these genes in glioma from CGGA database.
Tumor immune cytokines and checkpoints are considered important factors to determine glioma prognosis and efficacy [41]. Interleukin-10 (IL-10) and transforming growth factor-β (TGF-β) represent two typical immunosuppressive cytokines within TIME. TGF-β is known to inhibit immune responses through suppressing the activity of NK-cells, regulating the generation of proinflammatory cytokines, and changing the differentiation of Tcells [42]. IL-10 is an anti-inflammatory cytokine that is broadly expressed by various immune cells, including M2 macrophages, myeloid dendritic cells (DCs), 1, 2, and Treg cells. Especially, Treg-derived IL-10 can enhance Treg function and involve in Treg-induced immune regulation [43]. In our study, immunosuppressive cytokines, TGF-β and IL-10, were upregulated in the high-risk group. In addition, as the immune checkpoints are often used to escape immune surveillance by cancer cells, we also explore the response of checkpoint inhibitors (e.g., PD-1, PD-L1, PD-L2, CTLA-4, LAG3, and TIM-3) and discovered that many of these genes significantly increased in the high-risk group. Based on our results, high-risk glioma patients may have a better response for immunotherapy.
However, there are some limitations in the present study. Firstly, the data downloaded from public sources was restricted and incomplete, as well as no clinical samples were used for validation. Secondly, the five genes in the signature required more in vitro and in vivo experiments to verify their function in glioma. Finally, further researches in multicenter, large-scale, and prospective clinical trials are needed to confirm the risk model's predictive efficacy.
Conclusion
is work first constructed and validated an S100 familybased signature for the prognosis of glioma.
is risk signature can be used as a factor to independently predict the glioma patients' OS. We also proved an important value of this model in glioma immune microenvironment. Moreover, we identified that S100B, as an important biomarker, could promote GBM cell growth and invasion in vitro. Our study provided a prognostic model and promising biomarkers for glioma diagnosis and treatment.
Data Availability
All data sets used in the present work are included within this manuscript. ese data are available in TCGA (https:// portal.gdc.cancer.gov/) and CGGA (http://www.cgga.org. cn) databases.
Conflicts of Interest
e authors declare that they have no conflicts of interest.
Authors' Contributions
YH, JS and ZW contributed equally to this work. YH and JS conceived, designed, and wrote the manuscript. ZW analyzed the data. JK, YG, and RZ performed the experiments. WZ and YL revised the manuscript. All authors have contributed to the article and approved the final manuscript. | 5,990.6 | 2021-09-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Mechanisms of Texture Development in Lead-Free Piezoelectric Ceramics with Perovskite Structure Made by the Templated Grain Growth Process
The mechanisms of texture development were examined for BaTiO3 and a (K,Na,Li)(Nb,Ta)O3 solid solution made by the templated grain growth method, and compared with the mechanism in Bi0.5(Na,K)0.5TiO3. The dominant mechanism was different in each material; grain boundary migration in BaTiO3, solid state spreading in Bi0.5(Na,K)0.5TiO3, and abnormal grain growth in the (K,Na,Li)(Nb,Ta)O3 solid solution. The factor determining the dominant mechanism is the degree of smoothness of surface structure at an atomic level.
Introduction
One of the recent research interests in piezoelectric ceramics is lead-free materials [1]. The performance of lead-containing piezoelectrics, such as Pb(Zr x Ti 1−x )O 3 (PZT) and Pb(Mg 1/3 Nb 2/3 )O 3 -PbTiO 3 , is so superior that various approaches have been examined to develop lead-free materials with excellent properties. These approaches are mainly divided into two groups, compositional design and microstructural control. Although laborious efforts have been made to develop new compositions, materials which can substitute for PZT have been hardly discovered. In OPEN ACCESS some cases, an increase in the properties has been accomplished by microstructural control such as grain size control and texture development. An increase in various properties has been reported in fine-grained BaTiO 3 [2] and textured piezoelectric ceramics [3,4]. The combination of the compositional design and microstructure control is exemplified in the (K,Na)NbO 3 -based materials [5].
In textured ceramics, one of the crystallographic axes of each grain is intentionally aligned. These textured ceramics have a single-crystal-like nature and also have higher physical properties than those of ordinary ceramics composed of randomly orientated grains. One of the most convenient methods of preparing the textured ceramics is the templated grain growth (TGG) process [3,4]. In this process, two kinds of powders composed of anisometric and equiaxed grains are employed. The compositions of these two powders are the same (homo-template) in some cases and different (hetero-template) in other cases. A mixture of two powders is tape-cast to align the anisometric grains in the cast sheets. The sheets are cut and laminated to make a compact, and the compact is calcined to remove organic additives for tape-casting. Finally, the calcined compact is sintered to make a dense, highly textured ceramic.
In the TGG processes, the calcined compact is composed of aligned anisometric grains dispersed in the matrix of randomly oriented equiaxed grains. The anisometric grains act as a template for texture development. Therefore, the anisometric and equiaxed grains are called template and matrix grains, respectively. The most important step to achieve a large degree of orientation is the disappearance of matrix grains under the presence of template grains. In the case of Bi 0.5 (Na,K) 0.5 TiO 3 , the first textured material having the perovskite structure [6], the mechanism of texture development is the growth of template grains by solid state spreading of the matrix grains [7,8]. The examination of the mechanism of texture development is important not only to attain a high degree of orientation but also to control grain size and other factors determining microstructure. In this paper, we examine the mechanisms of texture development in <111>-textured BaTiO 3 and a <100>-textured (K,Na,Li)(Na,Ta)O 3 solid solution. In the former case, the dominant mechanism is found to be the growth of template grains by grain boundary migration, and in the latter case, the abnormal grain growth in the presence of template grains. Figure 1 shows the microstructure of a calcined compact for <111>-textured BaTiO 3 . The template grains were dispersed in the matrix of the matrix grains and aligned with their plate face parallel to the casting direction. Figure 2 shows the X-ray diffraction patterns of the compacts heated at various temperatures for 2 or 5 h. For the compacts heated at temperatures below 1,300 °C, various diffraction lines were recognized, and the most intense line was (110). The relative intensity of the (111) line increased as the heating temperature was increased and the (111) line became the most intense in the specimens heated at and above 1,350 °C for 5 h. Figure 3 shows the temperature dependence of relative density and the degree of orientation of the compacts heated for 5 h. The compacts with density more than 90% and the degree of orientation of about 0.8 were obtained by heating at and above 1,350 °C. Figure 4 shows the microstructural change in <111>-textured BaTiO 3 . At first, the matrix grains adhered to the surface of the template grain ( Figure 4(A)). The adhered matrix grains were integrated into the template grain ( Figure 4(B)). The size of template grains increased (Figure 4(C)) and impinged on each other, resulting in the microstructure consisting of large equiaxed grains ( Figure 4(D)). The X-ray diffraction profiles shown in Figure 2 indicate that each template grain is oriented with the <111> direction perpendicular to the compact surface.
<111>-Textured BaTiO 3
The microstructures shown in Figure 4 reveal that BaTiO 3 is textured by the growth of template grains at the expense of matrix grains. The microstructure change shown in Figures 4(A) and 4(B) suggests that the mechanism of the growth of template grains is the migration of the boundary between template and matrix grains. Figure 4(A) indicates the adherence of matrix grains to the template grain. The adhered matrix grains are integrated into the template grain ( Figure 4(B)). The surface of this template grain is rugged, indicating that the surface shape of the integrated matrix grains remained. This morphological change suggests the boundary migration shown in Figure 5. The grain boundary develops between the template and adhered matrix grains ( Figure 5(A)). The balance of surface and grain boundary tension bends the grain boundary. The curved boundary migrates toward the center of curvature, resulting in the integration of the matrix grain into the template grain. The surface of the template grain just after the integration of the matrix grain is rugged and the shape of the matrix grain remained on the surface of the template grain ( Figure 5(D)). The rugged surface of the template grain becomes smooth by surface diffusion as shown in Figure 4(C). Thus, Figures 4 and 5 conclude that the mechanism of texture development in the present BaTiO 3 system is the growth of template grains at the expense of matrix grains by the migration of the boundary between the template and matrix grains. shows the cross section of the template grain. The template grain is composed of two parts; the center is composed of many grains and the circumference has a smooth surface. The circumference might be a single crystal with its <111> direction perpendicular to the plate face. The structure of this template grain is not a main point of this paper but the origin of the formation of the central part is discussed here. The platelike BaTiO 3 grain is formed by the reaction of a Ba 6 Ti 17 O 40 (B6T17) grain with BaCO 3 by the unidirectional diffusion of BaO [9]. At first the surface of the B6T17 grain changes to a BaTiO 3 layer surrounding remnant B6T17 and the reaction continues by the diffusion of BaO through the BaTiO 3 layer. Because the volume of product BaTiO 3 is 23%, as large as that of reactant B6T17, the stresses develop in BaTiO 3 at the central part of the platelike grain. These stresses result in the formation of polycrystalline particles at the center of the platelike grain. Furthermore, stressed BaTiO 3 grains have high energy and migrate to the surface of the template grain at high temperatures, resulting in the formation of a rectangular void at the center of the template grain ( Figure 4(C)). Figure 6 shows the X-ray diffraction patterns of the (K,Na,Li)(Nb,Ta)O 3 compacts heated at various temperatures for 1 h. The most intense line was (110) in the compact heated at 950 °C. The intensity of (001), (100), (002), and (200) increased as the heating temperature was increased, and finally diffraction lines other than (001), (100), (002), and (200) were not recognized. This means that platelike NaNbO 3 template grains develop <100>-texture in the (K,Na,Li)(Nb,Ta)O 3 matrix. Figure 7 shows the temperature dependence of the degree of orientation. An abrupt change in the degree of orientation occurred between 1,030 °C and 1,050 °C. Figure 7 also shows the same dependence for Bi 0.5 (Na 0.5 K 0.5 )TiO 3 (BNKT) [7]. In the case of BNKT, the temperature dependence is rather gentle. In BNKT, the mechanism of texture development is the growth of template grains by solid state spreading, as will be mentioned in Section 2.3. The steep temperature dependence in (K,Na,Li)(Na,Ta)O 3 suggests another mechanism for texture development. Figure 8 shows typical microstructures of the compacts heated at a temperature between 1,030 °C and 1,050 °C. Figure 8(A) shows the microstructure of the specimen heated at 1,040 °C for 0 min (the specimen was quenched just after the furnace temperature reached 1,040 °C). The microstructure was almost the same as that of the calcined compact. Figure 8(B) shows the microstructure of the specimen heated at 1,040 °C for 15 min and then quenched. The major part of the compact was composed of large brick-like grains. The specimens heated at the temperature between 1,030 °C and 1,050 °C for 0 to 30 min had either microstructure and it was difficult to prepare a specimen having the brick-like and matrix grains with almost the same volume. The area, which was composed of small matrix grains and surrounded by several brick-like grains, was found by a close examination of the specimen heated at 1,040 °C for 15 min (Figure 8(C)). The coexistence of large grains with flat surfaces and small grains is a typical microstructure formed by abnormal grain growth. The presence of many intragrain pores is additional evidence of abnormal grain growth. These characteristics, i.e., an abrupt increase in the degree of orientation, the formation of large brick-like grains in the matrix of small grains and the presence of the intragrain pores in the brick-like grains, are quite similar to those observed in BaTiO 3 textured by platelike Ba 6 Ti 17 O 40 hetero-template grains, in which abnormal grain growth is the dominant mechanism of texture development [10]. When the compacts without template grains were heated at a temperature between 1,030 °C and 1,050 °C, the grains grew to about 10 m with mono-modal grain size distribution. The addition of the template grains changed the grain size distribution to bi-modal (Figure 8(C)). This indicates that the abnormal grain growth in the present system is nucleation-controlled [11] but not diffusion-controlled [12,13].
Mechanisms of Texture Development
BaTiO 3 , BNKT, and (K,Na,Li)(Na,Ta)O 3 have the same crystal structure (perovskite), but the mechanism of texture development is different. The mechanisms are (1) the growth of template grains by the migration of the boundary between template and matrix grains, (2) the growth of template grains by the solid state spreading of matrix grains, and (3) abnormal grain growth. Mechanisms (1) and (3) are explained in Sections 2.1 and 2.2, respectively. Here, mechanism (2) is briefly reviewed. Figure 9 shows the microstructure development in the BNKT system [7]. The specimen shown in Figure 9(A) is composed of aligned template grains and randomly oriented matrix grains. The size of matrix grains and the thickness of template grains increase up to 1,000 °C (Figure 9(B)). The growth of template grains continues, whereas the volume of matrix grains is decreased (Figure 9(C)), and finally, the specimen is composed of only platelike template grains (Figure 9(D)). This microstructure development indicates that the texture is developed by the growth of template grains. Figure 10 shows the morphological change of matrix grains at an early stage [7]. The matrix grains adhere to the template grain (Figure 10(A)) and spread over the surface of the template grain (Figure 10(B)). A close look at Figure 10(B) reveals the presence of a groove on the surface of the matrix grains. The groove suggests the formation of a third grain between the template and matrix grains. To confirm the formation of the third grain, a composite composed of a SrTiO 3 single crystal substrate and Bi 0.5 Na 0.5 TiO 3 particles on the substrate was heated as a model experiment [8]. The specimen was prepared by dropping a suspension containing Bi 0.5 Na 0.5 TiO 3 particles (average particle size of about 0.5 m) in 2-methoxyethanol and drying. Almost a single layer of Bi 0.5 Na 0.5 TiO 3 particles was formed on the (100) surface of SrTiO 3 . Figure 11 shows the microstructure of the composite heated at 900 °C for 2 h. The positions of SrTiO 3 and Bi 0.5 Na 0.5 TiO 3 are shown in the figure. The third grains were formed between the SrTiO 3 substrate and Bi 0.5 Na 0.5 TiO 3 particles, and the grooves were observed between Bi 0.5 Na 0.5 TiO 3 particles and third grains. This microstructure is quite similar to that shown in Figure 10(B). The mechanism of the formation of the third grain is not neck growth but solid stage spreading [14]. The texture development is closely related to the grain growth behavior. Figure 12 shows the relation between the grain growth rate and driving force [15]. The grain growth behavior is roughly divided into two groups depending on the surface structure. When the surface structure is atomically rough, the growth rate is proportional to the driving force, as shown by curve (a) in Figure 12. When the surface structure is atomically smooth, the growth rate is not proportional to the driving force, as shown by curves (b), (c), and (d) in Figure 12. The driving force at which the growth rate abruptly increases is called the critical driving force. The value of the critical driving force is dependent on the degree of smoothness of the surface; a smoother surface has a larger critical driving force. (d) Figure 13 shows the microstructures of BaTiO 3 , BNKT, and (K,Na,Li)(Na,Ta)O 3 obtained by sintering of the compacts of equiaxed powders. The grain shape of BaTiO 3 is irregular, whereas that of BNKT and (K,Na,Li)(Na,Ta)O 3 is cubic. In the BNKT and (K,Na,Li)(Na,Ta)O 3 cases, the presence of flat (100) faces indicates that the surface is atomically smooth. The degree of smoothness is judged from the shape of the edges. (K,Na,Li)(Na,Ta)O 3 has pointed edges, whereas BNKT has round edges, indicating that the degree of smoothness is higher for (K,Na,Li)(Na,Ta)O 3 than BNKT. It is reported 1m BNKT particles third grain SrTiO 3 substrate groov e that the surface structure of BaTiO 3 heated in air is atomically smooth [16], but Figure 13(A) shows that the grain boundaries are curved. The origin of the curved grain boundaries is either that the grains have {111} surfaces with high surface energy or that the boundaries are composed of facets with a hill-and-valley structure at an atomic level [15]. High energy surfaces and the hill-and-valley structure might provide growing steps for grain boundary migration. Therefore, it is suggested that the degree of smoothness is low for present BaTiO 3 . The relation between the growth rate and driving force is qualitatively expressed by curves (b), (c), and (d) in Figure 12 for BaTiO 3 , BNKT, and (K,Na,Li)(Na,Ta)O 3 , respectively. In the case of BaTiO 3 , the driving force for grain growth (determined by the curvature of the grain boundary) exceeds the critical value, and the grains can grow with a rate proportional to the driving force. Thus, the boundary between template and matrix grains migrates towards the center of curvature as shown in Figure 5. In BNKT, the critical driving force lies in a medium region. In this case, the driving force for grain boundary migration is lower than the critical value, and the grains cannot grow by the grain boundary migration. The spreading of matrix grains on the template grains becomes the dominant mechanism of microstructure development. In the (K,Na,Li)(Na,Ta)O 3 case, a high critical value inhibits the normal grain growth. The material transport along atomically flat boundaries is also restricted [17]. Thus, the grain growth by grain boundary migration and solid state spreading is sluggish and a small number of grains grow abnormally at a high temperature [15]. The above discussion concludes that the mechanism of texture development is determined by the surface structure at an atomic level.
A liquid phase gives a profound effect on the grain boundary structure and grain growth behavior. In the present experiment, the obvious presence of a liquid phase was not confirmed. However, the detection of a small amount of liquid phase is difficult, and the possible presence of a liquid phase is undeniable. The examination of the effects of a liquid phase remains on the growth behavior in the TGG process.
Experimental Section
In this work, BaTiO 3 and (K,Na,Li)(Na,Ta)O 3 were textured by the TGG method using homo-and hetero-template, respectively [4]. Platelike template grains were prepared by molten salt synthesis. The
(C)
BaTiO 3 template grains for <111>-textured BaTiO 3 were prepared via platelike Ba 6 Ti 17 O 40 grains [9]. BaTiO 3 and TiO 2 were reacted in molten NaCl at 1,150 °C for 5 h. Obtained platelike Ba 6 Ti 17 O 40 grains were further reacted with BaCO 3 at 1,150 °C for 5 h in molten NaCl. NaCl was washed out with hot water more than ten times. The obtained material was platelike BaTiO 3 grains with their <111> direction perpendicular to the plate face. The plate size and thickness were about 20 and 3 m, respectively, with a wide size distribution. The NaNbO 3 template grains for the <100>-textured (K,Na,Li)(Nb,Ta)O 3 were prepared via platelike Bi 2.5 Na 3.5 Nb 5 O 18 grains [5]. Bi 2 O 3 , Na 2 CO 3 , and Nb 2 O 5 were reacted in molten NaCl at 1,100 °C for 2 h. Obtained platelike Bi 2.5 Na 3.5 Nb 5 O 18 grains were further reacted with Na 2 CO 3 at 950 °C for 4 h in molten NaCl. The product was washed with hot water about ten times and with hydrochloric acid for several times to remove NaCl and Bi 2 O 3 (by-product). The obtained material was platelike NaNbO 3 grains with their <100> direction perpendicular to the plate face. The plate size and thickness were about 10 and 1 m, respectively, with a wide size distribution.
The matrix grains were obtained from companies. The equiaxed BaTiO 3 grains were obtained from Sakai Chemical Industry Co., Ltd. (Osaka, Japan). An average particle size was 0.5 m. For (K,Na,Li)(Na,Ta)O 3 , the matrix grains were supplied by NGK Insulators, Ltd. (Nagoya, Japan). The average particle size was 0.2 m.
Mixtures of the template and matrix grains were prepared with a solvent, a binder, and a plasticizer to form slurry. The amount of template grains was 10 and 5 vol% for BaTiO 3 and (K,Na,Li)(Na,Ta)O 3 , respectively. The slurry was tape-cast to form thin sheets in which the template grains were aligned with their plate faces parallel to the cast sheets. The sheets were cut to a square of 3 mm × 3 mm, laminated, and pressed to form the compacts with a thickness of about 1 mm. The compacts were further cut to 1 mm × 1 mm. The resultant compacts were calcined at 500 °C for 2 h. The calcined compacts were sintered in air under various temperature-time conditions.
The sintered compacts were characterized by X-ray diffraction analysis (XRD) using CuK radiation and scanning electron microanalysis (SEM). The degree of orientation was determined on the top surface of the compacts by XRD and evaluated by the Lotgering's method [18]. The microstructures were observed on the side face of the compacts. The fractured surfaces were observed for porous compacts, and the polished and etched surfaces for dense compacts.
Conclusions
The mechanism of texture development has been examined in TGG-processed BaTiO 3 and (K,Na,Li)(Na,Ta)O 3 . The dominant mechanism of texture development is the growth of template grains at the expense of matrix grains by grain boundary migration for BaTiO 3 and abnormal grain growth in the presence of template grains for (K,Na,Li)(Na,Ta)O 3 . Another mechanism is the growth of template grains by solid state spreading of matrix grains in Bi 0.5 (Na,K) 0.5 TiO 3 . The factor determining the dominant mechanism is the surface structure at an atomic level. The dominant mechanism in BaTiO 3 with a low degree of smoothness is grain boundary migration, that in Bi 0.5 (Na,K) 0.5 TiO 3 with an intermediate degree of smoothness is solid state spreading, and that in (K,Na,Li)(Na,Ta)O 3 with a high degree of smoothness is abnormal grain growth. | 4,859.4 | 2010-11-01T00:00:00.000 | [
"Materials Science"
] |
Application of Multimedia Semantic Extraction Method in Fast Image Enhancement Control
In order to solve the problem that it is difficult to effectively enhance the details of the compressed domain and maintain the overall brightness and clarity of the image when improving the image contrast in the current image enhancement method in the compressed domain, a multimedia semantic extraction method is applied in fast image enhancement control. It has been proposed that thealgorithm that synthesizes training samples according to the Retinex model converts the original low-light image from RGB (red-green-blue) space to HSI (hue saturation intensity) color space, keeps the chrominance and saturation components unchanged, and uses DCNN to enhance the luminance component; finally, it converts the HSI color space to RGB space to get the final enhanced image. The experimental results show that the performance of the model will increase with the increase of the number of convolution kernels, but the increase of the number of convolution kernels will undoubtedly increase the amount of calculation; it can also be found that when the number of network layers is 7, the PSNR of the image output by the model increases. The highest value, increasing the number of network layers, does not necessarily improve the performance of the model; with or without BN, his training method converges more easily than direct RGB image enhancement, with higher average PSNR and SSIM values. The experimental results show that, compared with the traditional Retinex enhancement algorithm and the DCT compression domain enhancement algorithm, the algorithm has better detail enhancement and color preservation effects and can better suppress the block effect.
Introduction
Due to the interference of camera equipment, lighting conditions, and other factors in the imaging process, it is likely that the image content is not clearly recorded, there is motion blur, the color level is not obvious, noise covers up the key facts, the image resolution is reduced, and so on. When images need to be used as court evidence, news media encounter problems , which these problems may lead to the fact that the truth is buried and the facts are confused [1]. Image processing is a great way to solve visual problems.
is is to select some of the user's favorite properties on the image to match the image with the naked eye, add some data, or modify the data of the original image in some way by clicking on a mask that is not needed in the image. Field features to improve the appearance of the image. In face recognition, this technology can be used in many cases, such as those with unclear contours, blurred photos taken by illegal vehicles, and di cult-to-identify key evidence at the crime scene. However, image enhancement has changed the originality and authenticity of image data [2].
Image enhancement is mainly divided into pixel domain enhancement and compression domain enhancement. Image pixel domain enhancement was proposed in the 1980s, mainly to remap the image pixel value according to the image pixel histogram to achieve the purpose of image enhancement [3]. e enhancement of the image compression domain is proposed later than that of the image pixel domain. e main reason is to modify the DCT coe cient (DCT is the full name of discrete cosine transform) during image compression, so as to achieve the e ect of image enhancement. See Figure 1.
Literature Review
Vigliocco proposed a wavelet-based homomorphic filter to enhance image contrast. e algorithm uses a homomorphic filter to process the wavelet decomposition coefficient. e processed image can not only reduce the influence of uneven illumination but also enhance the local contrast of the image [4]. Scharinger et al. combine homomorphic filtering enhancement with a neural network; a more suitable enhancement method for color images is proposed. In this method, the dynamic range is adjusted by homomorphic filtering, the contrast is enhanced, and the color of the processed image is more natural by using a neural network for color correction [5]. Pan et al. proposed a homomorphic filtering method using a spatial filtering kernel in HSI color space. is method first selects the filtering kernel according to the image size in the spatial domain and then performs homomorphic filtering in the frequency domain, which can better improve the image contrast; In addition, selecting the filtering function space in HSI space can reduce the amount of computation on the one hand, and improve the color fidelity of the processed image on the other hand [6]. Ma et al. proposed a histogram equalization algorithm based on mean segmentation. e algorithm takes the gray mean of the image as the segmentation threshold, divides the image histogram into two subhistograms, and then equalizes the two subhistograms, respectively [7]. Xu et al. proposed a local histogram equalization algorithm to maintain image brightness [8]. Jeong et al. proposed an ε-filter to set a threshold for pixel comparison in the template, so that the range of low-pass filtering is narrow, the extracted incident light component is as smooth as possible, and the accuracy of illuminance estimation is improved [9]. Yang et al. proposed a Retinex algorithm for video sequences. According to the timing characteristics of video sequences, the algorithm calculates the linear correlation of different sequence images in the video to estimate the illumination. e advantages of this algorithm are simple operation and good real-time performance, which can be applied to video processing [10]. Kong et al. proposed a nonlinear filtering Retinex algorithm based on subsampling, which down samples the low-resolution part of the image, estimates the illuminance through the nonlinear filtering of the sampled subimage, and then up samples the high-resolution part of the image to obtain the processing results, so as to speed up the operation speed of the algorithm [11,12].
Based on the above study, this paper describes the performance of the low-illumination algorithm. is method first converts low-light images from an RGB source to an HSI (color saturation energy) color source and uses CNN (DCNN) depth to improve sharpness. Experimental results show that compared to other key algorithms, the input algorithm can not only improve sharpness and contrast but also preserve the color image data without changing it, further improving the performance of visual and objective measurements.
Image Enhancement Algorithm Model.
e goal of current low-light image enhancement algorithms is to improve imageless lighting by combining the advantages of a color-changing model algorithm and CNN. First, the nonilluminated image was transferred from the RGB source to the HSI color source, then the color component h and saturation component s remained unchanged, and the lighting component I was corrected by DCNN. Finally, I converted it to RGB format to get the best picture. e operation of the conceptual algorithm is shown in Figure 2.
Brightness Enhancement.
Unlike in-depth studies, the indirect plan uses DCNN to study the graphical relationship between endpoints between low-light and high-light images with guidance expressed in RGB settings, but only improves illumination in HSI color settings. is is because the past has changed the color of training, and practical algorithms need to be considered in order to avoid this problem, which is essential for network training. e DCNN concept differs from the general CNN in that it has no layers or FC. e network structure consists of five components: input and output, nonlinear mapping,
Machine learning
Big data: Data mining: Large amounts of unstructured or structured data from a variety of sources Finding information from data reconstruction, and output [13]. e network model in this article examines the relationship between low-light images and natural light, so it is necessary to adjust the size of the network output and the image input. e sampling process may distort large images before and after the test, which is not necessary for image processing; in addition, the rotation process can increase the image size before and after the breakdown operation and lose the data boundary. erefore, the zero-fill operation is performed in front of all the rotating layers, which will remain the same as the image size during network transmission.
Inputs and Outputs.
To better train the network, the idea, and the equipment of the network application, not the image as a whole but the similar lighting of low-light images is needed. e block was randomly selected according to the training model; at the same time, the design only improves the illumination, so the input is less bright and the output is brighter.
Unpack the Feature.
e role of the first part of the network is to decompose the properties generated by the convolution layer. e transformation process is usually performed through multiple training vertebral nuclei with an input layer map or a special map of the previous layer, regardless of the characteristics of the different fields visible by activating the nonlinear activation function [14], used to represent the features of a CNN layer map.
ere is a network input image block, and the feature decomposition function is shown in the following formula: where W 1 is the convolution kernel, and b 1 ∈ R n 1 is the neuron bias vector. e size of X is n × n, f 1 is the size of a single convolution kernel, c is the number of channels of the input image (only the brightness component is processed, so c � 1), and max (·) is the maximum value. If there are n 1 convolution kernels, the size of After the convolution operation of the first layer of the network model and the excitation of the nonlinear function, the characteristics of different aspects of the data can be extracted from the brightness component of the low-illumination image.
Nonlinear Mapping.
e nonlinear mapping is recorded as a revolutionary module, which is composed of a convolution layer, a batch normalization (BN) layer, and a ReLu (rectified linear units) excitation layer.
Let the input data set of a hidden layer of the network be {μ 1 , . . ., μ m }, and the number of samples in this batch be m. First, obtain the mean E (μ) and variance D (μ) of the input samples as follows: en, normalize the batch sample data to obtain the data distribution μ h with a mean value of 0 and variance of 1, i.e., the following: where to avoid the denominator of the fraction being 0, ε is usually taken as a positive number close to 0. Finally, the reconstruction parameters α and β are introduced to reconstruct and transform the BN data, and the final output data z h is obtained as follows: rough nonlinear mapping, the brightness component features of the low-illumination image extracted from the first layer of the proposed network can be mapped from the low-dimensional space to the high-dimensional space, making the underlying features more abstract [14][15][16]. is process is expressed as the following equation: where k is the depth of the proposed network, that is, the k − 1 layer is the last layer of the nonlinear mapping part. It should be noted that the BN operation in the proposed network is based on the characteristic graph, not a single neuron, which can greatly reduce the α and β introduced by the reconstruction transformation.
Reconstruction.
e enhancement of low-illumination images is based on the idea of reconstruction. Reconstruction is to achieve image enhancement by minimizing the error between the brightness component of the model output and the normal illumination image through training. In this layer, reconstruction can be realized with only one convolution layer, which is described as the following equation: In order to automatically learn the network model parameter θ � W 1 , . . . , W k ; b 1 , . . . , b k through training, it is assumed that the training sample set D � (X 1 , Y 1 ), . . . , (X η , Y η )} is given, where X η and Y η are the η-th input lowillumination image brightness component and the corresponding normal illumination image brightness component (i.e. ground truth) samples, and the objective function of the following formula is minimized by using the back propagation algorithm and the random gradient descent method.
where N is the number of training samples; λ is a weight constraint term, which can prevent over fitting.
Once the optimal parameter θ is obtained through training and learning, the network training ends. During the test, only the brightness component of a low-illumination image needs to be input, and the network model can calculate an enhanced brightness component according to θ.
Sample Preparation.
In the experiment, a public dataset located in the blind area of the computer collected a total of 500 images from the Berkeley Segmentation Dataset with illumination based on interconnected objects (e.g., ground realities) and selected a total of 256,000 image blocks. 40 pixels × 40 pixels, then it is possible to display a low-light image after reflecting light following the distribution of similar objects. as S (x, y) � LR (x, y).
Experimental Setup.
e training algorithm will use Matlab r2014 as a simulation platform, and MatConvNet will receive an open-source version of the in-depth training. e computer's CPU (central processing unit) is an American Intel Core i7-7700, with a base frequency of 3.6 GHz, 16 GB of memory, and a GPU (graphics processing unit) from NVIDIA, gtx1070. e depth of the network is 7 layers, the size of one connection is 3 pixels × 3 pixels, the number of breakdown cores for decomposing the function within the network is 64, and the size is 3 pixels × 3 pixels × 64 pixels; e total number of rotating cores in a nonlinear map section is 64, and the size is 3 pixels × 3 pixels × 64 pixels × 64 pixels; the redesign has only one integration of 3 pixels × 3 pixels × 64 pixels. e offset time of all layers of the network starts at 0; e Adam algorithm is used to train the model. e training standard is 128 and the starting value is 0.1. Each of the 10 training cycles is reduced by 1/10, for a total of 50 training cycles.
Selection of Experimental Parameters.
To understand the performance and speed benefits, the size of all rotating kernels in this form is 3 pixels * 3 pixels, which shortens the training time and does not have a positive effect on image output [17]. As the subject and size of the conversion cores remain unchanged, this paper also provides an experimental analysis of the number of circulating cores and the number of network processes. After 50 cycles of training, the experiment was completed to obtain the correct PSNR of the images shown. e test results are shown in Table 1, where N1 is the number of first circulating veins and NP-1 (P ≥ 3) is the total number of circulating veins.
Refer to the configuration of the number of crashes for different processes in the SRCNN model, and select for comparison. Network standards are 3, 5, 7, and 9. As Table 1 shows, standard performance may improve as the number of fault cores increases, but there is no doubt that an increase in the number of circulating cores increases computational costs; you can also see that the PSNR value of the model's output image is higher when the number of layers is 7. Increasing the number of network layers does not necessarily improve model performance.
is is because the simple accumulation of the original network structure makes its structure unusable. Similarly, the resulting gradient dispersion will become more severe as the process intensifies.
Experimental Analysis.
To determine the performance of the application algorithm, experiments are performed on low-light and low-light imaging images, and a comparative analysis is performed in terms of purpose and content compared to the classical algorithm. It has well-improved Dong, SRIE, and LIME algorithms in recent years.
Synthetic Low-Illumination Image Experiment.
First, the synthetic low-illumination images were tested. e test samples selected live1, a public data set in the field of computer vision, to synthesize the low-illumination images. A total of 29 pictures were taken. e experimental results are shown in Table 2.
In the subjective evaluation, four images in the live1 data set are selected as examples to illustrate that Dong, SRIE, the LIME algorithms, and the proposed algorithms can enhance the synthesized low-illumination images and improve people's subjective feelings [18][19][20]. e proposed algorithm can not only improve the brightness and contrast of the image but also keep the color information of the image unchanged, which is closer to the original image under normal illumination. However, the proposed algorithm is still the same as other algorithms and cannot effectively enhance the dark part of the actual scene, which is a large white area. In addition, the proposed algorithm has the best enhancement effect for low-illumination images with uniform illumination, but for nonuniform illumination images, the overall brightness is slightly dark. is is because the Retinex model is still relatively simple, which is not enough to describe complex low-illumination scenes, and because the loss function used for such tasks using the depth learning algorithm is the mean square error (MSE), it cannot cover all image pixels.
In a real-world analysis of low-illumination images, the realities of the terrain are known, so we can compare the differences in image enhancement with different algorithms and graph real-time graphs to show how the algorithm works. PSNR, structural similarity (SSIM), MSE, and LOE (lightness order error) were selected for the target measurements. Among them, the PSNR shows the image effect. e higher its value, the less distortion there is. SSIM represents the integrity of image information. e higher its value, the better the image and the similarity of the actual soil structure will be. e MSE map shows the difference between enhancement and ground reality. e higher the value, the closer the image enhancement is to the illumination of the original image. LOE generally measures the ability to hold an enhanced image well. e lower the value, the better the temporary brightness of the image and the higher the image [21]. Table 2 shows the average values of some of the measurements obtained from the live1 dataset when using different algorithms.
With the exception of SSIM, it is lower than the Sri algorithm, and PSNR, MSE, and LOE are higher than other algorithms, indicating that the desired algorithm in this form is closer to the old image with less impact. Image enhancements are more detailed and natural in nature, which determines the performance of the application algorithm.
In addition, in order to determine the effectiveness of the described process and the algorithm for directly improving the RGB image, all 50 workshop networks in this paper were trained in two control groups, four in BN, and no BN in the It can be seen from Figures 3 and 4 that the HSI training method is easier to converge, and the average PSNR and SSIM values obtained are higher than those obtained by directly enhancing RGB images with or without BN. At the same time, it can be seen that BN can effectively improve the convergence speed of network training and obtain better results.
Actual Low-Illumination Image Experiment.
e algorithmic improvements in this article apply not only to low-light imagery but also to 17 low-light images from NASA and DICM's experiments to improve low-light imaging. VV for testing. e test results are shown in Table 3.
In the case of objective analysis of real illumination images, the same image is not classified as objective observation without the use of well-illuminated images. Data entropy, degree of color shift, LOE, and optical image quality (VIF) are used to assess image quality. Of these, the entropy of the data represents the value of the image data. e larger the value, the better the image data and content are stored; the degree of chromatic aberration indicates that the color of the image is preserved. e clearer the value, the less color distortion; VIF is an excellent image analysis that combines beautiful images of patterns, image distortion patterns, and human visual system modeling. e higher the price, the better the image quality. Table 3 shows the average values of the various objective measures after using different algorithms to improve the 17 visual effects. It can be seen from Table 3 that the HE algorithm and the Dong algorithm have higher chromaticity change values, indicating that the color retention ability of the enhanced image is the worst because they are processed directly in the R, G, and B channels, respectively, resulting in a different increase of each color channel and color distortion. e Figure 3: Convergence rate of HSI and RGB enhancement methods with/without BN. LIME algorithmhas higher information entropy, indicating that the image obtained by this algorithm contains more information, but its LOE value is large, indicating that the brightness order of the image is damaged and the naturalness is poor. e information entropy of the SRIE algorithm is the lowest, and the detailed information is not obvious after enhancing the low-illumination image. Except that the information entropy of the proposed algorithm is lower than that of the LIME algorithm, the proposed algorithm is superior to other algorithms in the other three evaluation indexes, which shows that the image color enhanced by the proposed algorithm maintains better and has better naturalness.
Conclusion
At present, the main focus of low-light image processing algorithms is to cause color conflicts to improve sharpness and contrast. DCNN's ability to learn, such as through processing systems, to remove key data properties from large data as a whole, to adapt to hard work, and to keep chrominance components and saturation components unchanged. Based on the final concept, a map is a correlation between low-light image brightness, the best-studied image illumination, and image brightness received wonderfully. Finally, HSI moved from color space to RGB space. Experimental results show that the application algorithm not only improves brightness and contrast but also prevents color distortion. e resulting improvement is better than the current low-precision lighting algorithm, which is theoretically significant. In the future, we will continue to optimize the network structure to improve night lighting.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 5,209.4 | 2022-10-11T00:00:00.000 | [
"Computer Science"
] |
t6A and ms2t6A Modified Nucleosides in Serum and Urine as Strong Candidate Biomarkers of COVID-19 Infection and Severity
SARS-CoV-2 infection alters cellular RNA content. Cellular RNAs are chemically modified and eventually degraded, depositing modified nucleosides into extracellular fluids such as serum and urine. Here we searched for COVID-19-specific changes in modified nucleoside levels contained in serum and urine of 308 COVID-19 patients using liquid chromatography-mass spectrometry (LC-MS). We found that two modified nucleosides, N6-threonylcarbamoyladenosine (t6A) and 2-methylthio-N6-threonylcarbamoyladenosine (ms2t6A), were elevated in serum and urine of COVID-19 patients. Moreover, these levels were associated with symptom severity and decreased upon recovery from COVID-19. In addition, the elevation of similarly modified nucleosides was observed regardless of COVID-19 variants. These findings illuminate specific modified RNA nucleosides in the extracellular fluids as biomarkers for COVID-19 infection and severity.
Introduction
Coronavirus disease 2019 (COVID-19) is a respiratory infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2]. This disease spread quickly around the world, causing millions of deaths. For confirmed diagnosis of COVID-19 at the bedsides, RT-PCR test targeting viral genome RNA and antigen test against viral spike proteins are mainly used. However, there are multiple problems in these clinical examinations. One major problem is that these tests show only negative or positive results. Therefore, they are not suitable for determining or predicting the severity Biomolecules 2022, 12, 1233 2 of 13 of this disease. Some serum proteins such as CCL17 and IFN-gamma3 were reported as the biomarker for COVID-19 severity, but the specificities of these biomarkers are not high because of elevation in other diseases [3,4]. Another problem is the risk of infection from clinical samples. Currently, RNA extracted from saliva or pharyngeal swabs is used in both tests, and handling the samples always exposes healthcare workers to infection risks. SARS-CoV-2 is undetectable in serum and urine [5,6]. Therefore, if an appropriate diagnosis method is devised, blood and urine are ideal samples for COVID-19 diagnosis.
SARS-CoV-2 is an RNA virus, having a single-stranded,~30 kb-long RNA genome with 12 open reading frames (ORFs) [7]. In ORF1a and 1b, two RNA modification enzymes are encoded. One is guanine N 7 -methyltransferase catalyzing 5 terminal cap modification which prevents recognition by the host immunity and promotes SARS-CoV-2 protein synthesis [8]. The other is 2 -O-methyltransferase whose modification also contributes to the formation of cap structure and suppresses recognition by the host innate immune system [8]. Moreover, highly modified regions were suggested to exist in SARS-CoV-2 genome RNA and its transcripts using nanopore direct RNA sequencing [7]. These reports suggest that RNA modifications in SARS-CoV-2 play important roles in viral replication and self-defense. However, the clinical implications are completely unclear.
Over 100 kinds of chemical modifications of RNA are reported in the three domains of life and they have a variety of biochemical functions [9]. For example, N 6threonylcarbamoyladenosine (t 6 A) modification exists at position 37 of tRNAs that decipher ANN codons, and t 6 A governs the accuracy and efficiency of protein synthesis in the cytosol [10]. t 6 A modification is introduced by a protein complex called the kinase, putative endopeptidase, and other proteins of small size (KEOPS) [11]. Due to the physiological importance of t 6 A, the deficits of KEOPS components cause nephrotic syndrome and primary microcephaly [12]. t 6 A in tRNA Lys UUU is further thiomethylated by CDKAL1 protein, resulting in 2-methylthio-N 6 -threonylcarbamoyladenosine (ms 2 t 6 A) modification [13,14]. ms 2 t 6 A modification is important for proinsulin synthesis, and the deficit of this modification causes the development of type 2 diabetes [14]. Due to the physiological importance of tRNA modifications, the deficits of various other tRNA modifications in mammals also cause various diseases including mitochondrial diseases and neurological disorders [15][16][17]. At the end of its life, modified RNA is degraded into single nucleosides, and modified nucleosides are excreted to extracellular spaces, circulated in serum, and discarded into the urine [18,19].
In this study, we have identified a characteristic elevation in specific modified nucleosides through infection experiments on cultured cells. These modified nucleosides were significantly elevated in serum and urines of COVID-19 patients and might be useful for novel biomarkers of COVID-19.
Cell Culture and Viral Infection
ACE2-overexpressing HEK293 cells were maintained in DMEM (low glucose) with 10% heat-inactivated fetal calf serum (FCS) and penicillin-streptomycin solution (P/S). SARS-CoV-2 JPN/TY/WK-521 strain was obtained from the National Institute of Infectious Diseases in Japan and amplified with VeroE6/TMPRSS2 cells. ACE2-overexpressing HEK293 cells were infected by SARS-CoV-2 particles at an MOI 1.0. RNA extraction by TRIZOL Reagent (Thermo Scientific, Waltham, MA, USA) was performed 18 h after the infection. The extracted RNAs were degraded into single nucleosides using nuclease P1 and alkaline phosphatase.
Sample Preparation and LC-MS Analysis of Modified RNA Nucleosides
Nucleosides from culture cells were desalted at 4 • C, 12,000 rpm, 30 min centrifugation with Nanosep 3K Omega (Pall Corporation, New York, USA). Modified nucleoside quantification was performed by a triple quadrupole mass spectrometry system (LCMS-8050, Shimadzu Corporation, Kyoto, Japan) equipped with an electrospray ionization (ESI) source and an ultra-high performance liquid chromatography system [19]. The nucleoside samples were injected into an Inertsil ODS-3 column (GL Science, Tokyo, Japan). The mobile phase consisted of two types of solutions. One is 5 mM ammonium acetate in water adjusted to pH 5.3, and the other is 60% (v/v) acetonitrile in water. The LC gradient was set as follows: 1-10 min: 1-22.1% B, 10-15 min: 22.1-63.1% B, 15-17 min: 63.1-100% B, 17-22 min: 100% B, and 22-23 min, 100-0.6% B. The flow rate was 0.4 mL/min, and the injection volume was 2 µL. Detection was performed in the MRM (multiple reaction monitoring) modes of Lab-Solutions System (Shimadzu Corporation). The MRM transitions for modified nucleosides in this method are described in Supplementary Table S1. Interface temperature was 300 • C, desolvation line temperature was 250 • C, and heat block temperature was 400 • C. Nitrogen gas was supplied from an N2 feeder Model T24FD (System Instruments, Tokyo, Japan) for nebulization and drying, and argon gas was used for collision-induced dissociation.
Automatic Sample Preparation and LC-MS Analysis for t 6 A and ms 2 t 6 A in Serum and Urine Samples
Serum and urine samples were desalted and deproteinized by a fully automated sample preparation module (CLAM-2030, Shimadzu Corporation) coupled to an LCMS-8050. Twenty microliters of the sample was automatically delivered to a polytetrafluoroethylene filter vial (0.45 µm pore size) which was pre-conditioned with 20 µL methanol. Eighty microliters of methanol and 20 µL of isopropanol was added to the filter vial and stirred for 60 s. The samples were filtrated and delivered to LC-MS/MS system with 20 µL water. t 6 A and ms 2 t 6 A quantification was performed by the same LCMS-8050 system described above. The serum samples were injected into a Mastro2 C18 column (Shimadzu GLC Ltd., Tokyo, Japan) from CLAM-2030 automatically. The mobile phase consisted of two types of solutions. One is 0.1% (v/v) formic acid in water (A), and the other is 0.1% (v/v) formic acid in acetonitrile (B). The LC gradient was set as follows: 1-1. Supplementary Table S1. Interface temperature was 270 • C, desolvation line temperature was 250 • C, and heat block temperature was 400 • C. Nitrogen gas was supplied from an N2 feeder Model T24FD for nebulization and drying, and argon gas was used for collision-induced dissociation.
Patients and Severity Assessment
We enrolled COVID-19 patients diagnosed by real-time reverse transcription-polymerase chain reaction (RT-PCR) using extracted RNAs from saliva or pharyngeal swabs ( Table 1). The presence of mutations was examined using TaqMan SARS-CoV-2 Mutation Panel (Thermo Scientific, Waltham, MA, USA). The severity definitions of COVID-19 were based on the Spectrum of SARS-CoV-2 Infection from the "COVID-19 Treatment Guidelines" of NIH. We classified COVID-19 patients into two groups: asymptomatic/mild and moderate/severe. Moderate patients were classified as having pneumonia and requiring oxygen administration, and severe patients as requiring ventilator management and extracorporeal circulation. The mild patients had various signs and symptoms of COVID-19, for example, fever, cough, sore throat, malaise, headache, muscle pain, nausea, vomiting, diarrhea, loss of taste and smell but who do not have shortness of breath, dyspnea, or abnormal chest imaging by CT scan. Asymptomatic patients had no symptoms of COVID-19. Patients with other infectious diseases, including bacterial pneumonia and other viral infection, were diagnosed by clinical investigators with various examinations including blood culture tests, pneumococcal urinary antigen tests, and flu tests performed before the COVID-19 pandemic. The information from these patients is described in Supplementary Table S2. We collected serum from the same COVID-19 patients at the infection period and recovery period. A recovery period was defined by the resolution of fever and other symptoms.
Statistical Analysis
Data accorded with normal distribution and homogeneity of variance were expressed as the mean ± standard error of means (S.E.M) and compared by Mann-Whitney U tests. Categorical variables were compared by the Kruskal-Wallis test and Dunn's multiple comparison tests. For calculation of sensitivity and specificity, we used receiver operating characteristic analysis to discriminate between healthy volunteers and COVID-19 patients. Statistical analyses were performed with the Prism 9 software (GraphPad, San Diego, CA, USA), and a p-value less than 0.05 was considered statistically significant.
Results
To identify modified nucleosides whose amount specifically changes in COVID-19, we first performed an infection experiment using angiotensin converting enzyme 2 (ACE2)overexpressing human embryonic kidney (HEK) 293 cells. SARS-CoV-2 particles were infected at an MOI of 1. After 18 h of incubation, we extracted total RNA and degraded it into single nucleosides using nuclease P1 and alkaline phosphatase. We then quantified modified nucleosides by LC-MS. As a result, within the total RNA of SARS-CoV-2-infected cells, we observed elevation of six modified nucleosides, which are N 1 -methyladenosine (m 1 A), N 2 ,N 2 -dimethylguanosine (m 2 2 G), N 6 -threonylcarbamoyladenosine (t 6 A), 2-methylthio-N 6 -threonylcarbamoyladenosine (ms 2 t 6 A), N 6 -methyl-N 6 -threonylcarbamoyladenosine (m 6 t 6 A), and N 6 ,2 -O-dimethyladenosine (m 6 Am) ( Figure 1a). Especially, t 6 A and ms 2 t 6 A ( Figure 1b) were over 4 times elevated compared to control cells. From this result, t 6 A and ms 2 t 6 A were judged as good candidate biomarkers for SARS-CoV-2 infection. methyladenosine (m A), N ,N -dimethylguanosine (m 2G), N -threonylcarbamoyl sine (t 6 A), 2-methylthio-N 6 -threonylcarbamoyladenosine (ms 2 t 6 A), N 6 -methyl-N 6 nylcarbamoyladenosine (m 6 t 6 A), and N 6 ,2′-O-dimethyladenosine (m 6 Am) (Figure pecially, t 6 A and ms 2 t 6 A ( Figure 1b) were over 4 times elevated compared to contr From this result, t 6 A and ms 2 t 6 A were judged as good candidate biomarkers for CoV-2 infection. Next, to investigate if t 6 A and ms 2 t 6 A within human urine can be used as SAR 2 infection biomarkers, we performed LC-MS analysis using the urine of patien COVID-19. These patients were diagnosed by RT-PCR test against SARS-CoV-2 g RNA from saliva or pharyngeal swabs (Table 1). Urine is highly susceptible to phy ical conditions, and appropriate normalization is essential for the urine test. Gen urine creatinine is the most commonly used normalization substance. Therefore, w lyzed t 6 A and ms 2 t 6 A in urine normalized by urine creatinine and these results wer pared to healthy samples (Figure 2a,b). The t 6 A and ms 2 t 6 A levels in urine were cantly increased in COVID-19 patients. We also performed receiver-op Next, to investigate if t 6 A and ms 2 t 6 A within human urine can be used as SARS-CoV-2 infection biomarkers, we performed LC-MS analysis using the urine of patients with COVID-19. These patients were diagnosed by RT-PCR test against SARS-CoV-2 genome RNA from saliva or pharyngeal swabs (Table 1). Urine is highly susceptible to physiological conditions, and appropriate normalization is essential for the urine test. Generally, urine creatinine is the most commonly used normalization substance. Therefore, we analyzed t 6 A and ms 2 t 6 A in urine normalized by urine creatinine and these results were compared to healthy samples (Figure 2a,b). The t 6 A and ms 2 t 6 A levels in urine were significantly increased in COVID-19 patients. We also performed receiver-operating characteristic (ROC) analysis using data of t 6 A and ms 2 t 6 A normalized by urine creatinine. On t 6 A, setting the cutoff value to 344,420 resulted in a sensitivity of 71.7%, a specificity of 77.8%, and a likelihood ratio of 3.23 (Figure 2c). Regarding ms 2 t 6 A, setting the cutoff value to 76,878 resulted in a sensitivity of 86.6%, a specificity of 91.7%, and a likelihood ratio of 2.6 ( Figure 2d). characteristic (ROC) analysis using data of t 6 A and ms 2 t 6 A normalized by urine creatinine On t 6 A, setting the cutoff value to 344,420 resulted in a sensitivity of 71.7%, a specificity o 77.8%, and a likelihood ratio of 3.23 (Figure 2c). Regarding ms 2 t 6 A, setting the cutoff value to 76,878 resulted in a sensitivity of 86.6%, a specificity of 91.7%, and a likelihood ratio o 2.6 ( Figure 2d). To investigate if elevations of t 6 A and ms 2 t 6 A in urine are characteristic of COVID-19 we also compared the patient urine of COVID-19 with other infectious diseases including influenza and bacterial pneumonia. The elevation of t 6 A and ms 2 t 6 A in urine was observed only in the COVID-19 group (Figure 3a,b). From these results, measurements of t 6 A and ms 2 t 6 A in urine were observed to have the equivalent diagnostic ability to the RT-PCR tes for COVID-19. To investigate if elevations of t 6 A and ms 2 t 6 A in urine are characteristic of COVID-19, we also compared the patient urine of COVID-19 with other infectious diseases including influenza and bacterial pneumonia. The elevation of t 6 A and ms 2 t 6 A in urine was observed only in the COVID-19 group (Figure 3a,b). From these results, measurements of t 6 A and ms 2 t 6 A in urine were observed to have the equivalent diagnostic ability to the RT-PCR test for COVID-19. Next, to investigate if t 6 A and ms 2 t 6 A within human serum can be used as SAR 2 infection biomarkers, we measured t 6 A and ms 2 t 6 A in serum normalized by unm adenosine and compared them with healthy samples. The t 6 A and ms 2 t 6 A levels w nificantly elevated in the serum of COVID-19 patients (Figure 4a,b). We also per ROC analysis using data of t 6 A and ms 2 t 6 A. On t 6 A, setting the cutoff value to 1. sulted in a sensitivity of 98.4%, a specificity of 92.5%, and a likelihood ratio of 13.12 4c). Regarding ms 2 t 6 A, setting the cutoff value to 0.1034 resulted in a sensitivity of a specificity of 92.5%, and a likelihood ratio of 12.55 (Figure 4d). Next, to investigate if t 6 A and ms 2 t 6 A within human serum can be used as SARS-CoV-2 infection biomarkers, we measured t 6 A and ms 2 t 6 A in serum normalized by unmodified adenosine and compared them with healthy samples. The t 6 A and ms 2 t 6 A levels were significantly elevated in the serum of COVID-19 patients (Figure 4a,b). We also performed ROC analysis using data of t 6 A and ms 2 t 6 A. On t 6 A, setting the cutoff value to 1.039 resulted in a sensitivity of 98.4%, a specificity of 92.5%, and a likelihood ratio of 13.12 (Figure 4c). Regarding ms 2 t 6 A, setting the cutoff value to 0.1034 resulted in a sensitivity of 94.2%, a specificity of 92.5%, and a likelihood ratio of 12.55 (Figure 4d). Next, to investigate if t 6 A and ms 2 t 6 A within human serum can be used as quantitative biomarkers to determine the severity of SARS-CoV-2 infection, we first examined the patients' conditions from medical records and classified them by severity. Based on the Clinical Spectrum of SARS-CoV-2 Infection from "COVID-19 Treatment Guidelines" of NIH, we classified COVID-19 patients into two groups: asymptomatic/mild and moderate/severe. Then, we compared the measurements of t 6 A and ms 2 t 6 A in the serum of these two groups against a healthy group. As a result, as the severity of COVID-19 worsened, ms 2 t 6 A in serum also increased (Figure 5a,b). Next, we confirmed the relationships between the measurements t 6 A and ms 2 t 6 A in serum and clinical indicators related to COVID-19 severity (Table 1). Our results show the levels of lactate dehydrogenase (LDH), C-reactive proteins (CRP), and lymphocyte percentage in COVID-19 patients significantly correlated with t 6 A and ms 2 t 6 A levels (Supplementary Figure S1).
We also compared the changes in serum t 6 A and ms 2 t 6 A levels within the same COVID-19 moderate/severe patients at the infection period and recovered period. We found that t 6 A and ms 2 t 6 A in serum significantly decreased at the recovered period ( Figure 5c,d). Based on these results, the measurement of t 6 A and ms 2 t 6 A in serum could be useful to determine the severity and the effect of treatment. Next, to investigate if t 6 A and ms 2 t 6 A within human serum can be used as quantitative biomarkers to determine the severity of SARS-CoV-2 infection, we first examined the patients' conditions from medical records and classified them by severity. Based on the Clinical Spectrum of SARS-CoV-2 Infection from "COVID-19 Treatment Guidelines" of NIH, we classified COVID-19 patients into two groups: asymptomatic/mild and moderate/severe. Then, we compared the measurements of t 6 A and ms 2 t 6 A in the serum of these two groups against a healthy group. As a result, as the severity of COVID-19 worsened, ms 2 t 6 A in serum also increased (Figure 5a,b). Next, we confirmed the relationships between the measurements t 6 A and ms 2 t 6 A in serum and clinical indicators related to COVID-19 severity (Table 1). Our results show the levels of lactate dehydrogenase (LDH), C-reactive proteins (CRP), and lymphocyte percentage in COVID-19 patients significantly correlated with t 6 A and ms 2 t 6 A levels (Supplementary Figure S1). [20]. These variants are often associated with enhanced transmissibility and evasion from host antibodies. We collected the serum of patients with B1.1.7 (α) and B.1.617.2 (δ) variants of SARS-CoV-2. Using the same LC-MS method, we measured t 6 A and ms 2 t 6 A in the serum of patients infected with these variants, and we found that t 6 A and ms 2 t 6 A were also elevated in the serum of patients infected with all monitored variants (Figure 6a,b). These results suggest that the diagnosis of COVID-19 by measuring t 6 A and ms 2 t 6 A in serum could be useful regardless of variants of SARS-CoV-2 spike protein. We also compared the changes in serum t 6 A and ms 2 t 6 A levels within the same COVID-19 moderate/severe patients at the infection period and recovered period. We found that t 6 A and ms 2 t 6 A in serum significantly decreased at the recovered period (Figure 5c,d). Based on these results, the measurement of t 6 A and ms 2 t 6 A in serum could be useful to determine the severity and the effect of treatment.
Since the end of 2020, patients with variants of SARS-CoV-2 have been reported from various regions, including the United Kingdom (B1.1.7), South Africa (B1.351), Brazil (P1), and India (B.1.617.2, AY.1, AY.2) [20]. These variants are often associated with enhanced transmissibility and evasion from host antibodies. We collected the serum of patients with B1.1.7 (α) and B.1.617.2 (δ) variants of SARS-CoV-2. Using the same LC-MS method, we measured t 6 A and ms 2 t 6 A in the serum of patients infected with these variants, and we found that t 6 A and ms 2 t 6 A were also elevated in the serum of patients infected with all monitored variants (Figure 6a,b). These results suggest that the diagnosis of COVID-19 by measuring t 6 A and ms 2 t 6 A in serum could be useful regardless of variants of SARS-CoV-2 spike protein.
Discussion
In this study, we first found characteristic elevations of specific modified nucleosides t 6 A and ms 2 t 6 A during SARS-CoV-2 infection experiments. These biomolecules were also elevated in the serum and urine of COVID-19 patients. Moreover, these elevations correlated with the severity and recovery of COVID-19. In the serum of patients infected with several mutant strains, these elevations were also observed.
To examine the presence of SARS-CoV-2, RT-PCR tests and antigen tests are easy and useful. However, clinical samples for these tests, which are saliva and nasopharyngeal
Discussion
In this study, we first found characteristic elevations of specific modified nucleosides t 6 A and ms 2 t 6 A during SARS-CoV-2 infection experiments. These biomolecules were also elevated in the serum and urine of COVID-19 patients. Moreover, these elevations correlated with the severity and recovery of COVID-19. In the serum of patients infected with several mutant strains, these elevations were also observed.
To examine the presence of SARS-CoV-2, RT-PCR tests and antigen tests are easy and useful. However, clinical samples for these tests, which are saliva and nasopharyngeal swabs, often contain SARS-CoV-2, constantly exposing healthcare workers to the risk of infections during the collection and handling of these samples. Serum and urine contain very little of the SARS-CoV-2 virion [5,6]. Therefore, the establishment of COVID-19 diagnosis using modified nucleosides in serum and urine could provide more safety and less stress for healthcare workers. Considering the inaccessibility of mass spec machines in many facilities, we are currently trying to develop an easy and inexpensive t 6 A ELISA kit for COVID-19 detection using safer serum or urine samples rather than dangerous saliva and pharyngeal swabs.
In COVID-19 treatment, RT-PCR tests and antigen tests are not suitable for the proper assessment of COVID-19 severity. PCR tests and antigen tests for the SARS-CoV-2 viral genome from saliva or nasopharyngeal swab have no correlation with COVID-19 severity [21][22][23][24]. From our study, the elevations of t 6 A and ms 2 t 6 A in serum correlated with the severity and recovery of infection. The measurements of t 6 A and ms 2 t 6 A in serum could contribute to the appropriate assessment of severity and treatment effect, as well as to appropriately evaluate the efficacy of therapeutic agents during clinical trials. In this study, we examine only the elevation of t 6 A and ms 2 t 6 A in serum of patients with α and δ variants, and elevations in serum by infections with other variants should be checked.
From our study, the sources of t 6 A and ms 2 t 6 A are unclear, although there are some candidates. One is the result of cell damage to immune cells and/or tissue cells upon infection. When the host is infected by pathogens, large numbers of tissue cells and immune cells react and finally collapse. Our in vitro data using HEK293 cells indicate these elevations of modified nucleosides may be related to tissue cell damage. Moreover, we found that these elevations of modified nucleosides in serum correlated with LDH (Supplementary Figure S1). Upon destruction of tissue or immune cells, many modified nucleosides leak into the extracellular region and where they accumulate [18,19]. Therefore, the correlations of serum t 6 A and ms 2 t 6 A with COVID-19 severity might reflect the damage of tissue and/or immune cells upon SARS-CoV-2 infection. Another potential source of t 6 A and ms 2 t 6 A is the genome RNA of SARS-CoV-2. Within the viral RNA, chemically modified regions were detected using nanopore sequencing experiments, although the modification species are unidentified [7]. No obvious candidates for enzymes that modify t 6 A and ms 2 t 6 A are encoded in the genome RNA of SARS-CoV-2. Therefore, if the viral RNA contains t 6 A and ms 2 t 6 A, SARS-CoV-2 likely uses the host's modifying enzymes, the KEOPS complex for t 6 A modification and CDKAL1 for ms 2 modification [11,14,25]. Some RNA viruses, such as HIV-1, have been reported to use the host RNA modification enzyme to escape from host immunity [26]. In future studies, it will be necessary to monitor changes in the expression of these modifying enzymes upon viral infection, as well as the modification levels of the host tRNAs and other RNAs. Recently, many types of vaccinations, including mRNA vaccines, were certified and used in many countries to combat the COVID-19 pandemic. It will be important to investigate the changes of t 6 A and ms 2 t 6 A in vaccinated patient serum in future studies.
Conclusions
In summary, we discovered serum and urine t 6 A and ms 2 t 6 A nucleosides as effective biomarkers of COVID-19. Modified nucleosides are conceptually new metabolites to be measured in the clinical area, and our study is the first to monitor them in COVID-19. The most important merits of the modified nucleoside test over the RT-PCR test are: (1) correlation of serum t 6 A and ms 2 t 6 A levels with the severity and recovery and (2) accuracy of this test regardless of the mutation in the spike protein of SARS-CoV-2. This test is the first evidence for diagnosis using modified nucleosides for COVID-19 and could be useful for accurate assessment of COVID-19 severity and recovery. | 6,063.4 | 2022-09-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Thermal vacuum tests for the ESA’s OPS-SAT mission
OPS-SAT is an ESA nanosatellite launched in December 2019. The spacecraft is open for third-party experiments, which can use almost all functions provided by the spacecraft and take full control of it. Depending on the experiment and usage of the payload, the power consumption of the spacecraft may be as small as a few watts but can exceed 30 W at full load. The peak power production lies in the same order of magnitude, which is highly demanding for thermal regulation. This article describes the preparation and execution of the OPS-SAT Thermal Vacuum (TVAC) test campaign and discusses the limitations and restrictions that had to be taken into account, such as technical limitations with respect to mounting the spacecraft inside the TVAC chamber. Additionally, the procedure of identifying a comprehensive test scenario is discussed. The general approach of TVAC tests and the results of one full test cycle are presented, and the key findings are discussed. The goal is to address the problems and limitations that were encountered during the TVAC test campaign and to provide some ideas and suggestions for improvement for the future.
Introduction
OPS-SAT is an ESA 3U Cubesat, built by Graz University of Technology (TUG) and serves the purpose of breaking the "has never flown -will never fly" cycle, by providing a powerful experimentation platform in space [1]. OPS-SAT is open for experiments from universities, industry or private researches, completely free of charge. The spacecraft includes a wide variety of payloads, to account for many different types of experiments. It includes UHF, S-Band and X-Band communication systems, a Software Defined Radio (SDR) and a coarse and a fine Attitude Determination and Control System (ADCS) with reaction wheels and a startracker. Further on board are an optical receiver, a HD camera, a GPS module and a retroreflector [2]. On the experiment side, the spacecraft and its payloads are controlled via the so-called Satellite Experimental Processing Platform (SEPP) [7]. The SEPP provides basically full control over the spacecraft, by exposing high and low level interfaces to the experimenter. In case of unforeseen behaviour or any potential risk, the currently active of the two OPS-SAT on-board computers (OBCs) takes over and interrupts the experiment to ensure safety of the spacecraft. An overview of OPS-SAT can be found in Fig. 1, showing the spacecraft in post-launch configuration, with deployed solar arrays and antennas [4].
Payloads Any of the OPS-SAT payloads that are shown in Fig. 2 can be used and controlled by an experiment [2,4]. This leads to a large combination of different use cases, each of which with individual requirements in terms of power and resulting thermal behaviour. The nature of purely radiative heat exchange between the spacecraft and the environment, as it is the case during TVAC tests, leads to extensive test time periods, in order to reach thermal equilibrium states. It is therefore not feasible to account for all possible experimental scenarios on OPS-SAT and a single, representative use case had to be chosen for the TVAC test campaign.
Power consumption A challenging aspect of the spacecraft is the relatively small 3U form factor, paired with a comparably high power consumption, that can exceed 30 W in some scenarios. The satellite is equipped with two double deployable solar wings in order to accommodate for its power requirements. The main contributors to power consumption are the SEPP and the S-Band transceiver. Since the SEPP is the basis for most experiments, this unit will be powered on continuously throughout the course of an experiment. Depending on the type of experiment, the SEPP power usage can reach 7-8 W of continuous power draw. The S-Band adds another 10-12 W of power draw. This combined power draw of almost two thirds of the total capacity of the OPS-SAT Electrical Power Supply (PSU) leaves little headroom for all the other payloads. The S-Band transmitter, however, is only powered during ground station contact and cannot be powered on for more than 15 min continu-K Thermal vacuum tests for the ESA's OPS-SAT mission 17 ously due to thermal constraints. While the combined power limits of the spacecraft have to account for the S-Band transmitter, it's thermal influence can be neglected, as is evident by on-orbit telemetry (TM) [5].
TVAC tests The OPS-SAT TVAC test campaign has been carried out at the facilities of RUAG Space in Vienna. The goal is to determine the reliability and functionality of all spacecraft components throughout the widest possible temperature range under vacuum conditions and as close to in orbit conditions as possible. An additional goal of the tests is to determine the thermal dependency between individual components, as well as the temperature relations between them. The OPS-SAT battery was chosen as the so-called Temperature Reference Point (TRP), since it is both crucial for mission operations and one of the most thermally sensitive components. The TVAC tests consist of several phases that include powered and passive states of the spacecraft, in order to approach the respective operational and non-operational temperature limits of the components. More details on the individual phases can be found in Sect. 2.3. The results show a close correlation between the temperature of the SEPP and the battery, as well as temperature gradient from the SEPP towards the outer edges of the spacecraft, leading to the conclusion that the SEPP is a strongly contributing factor to the overall thermal behaviour of the spacecraft.
OPS-SAT thermal vacuum tests
The following section highlights the key considerations for the OPS-SAT thermal vacuum tests. The scope of this article only allows for a very condensed summary of such a test campaign and it is aimed to highlight some key points of the general approach, phases of the tests and a summary of limitations and restrictions that were encountered during the tests and during preparation. First, due to the versatility of OPS-SAT experiments, a representative use in terms of active payloads had to be defined. In terms of power generation, the lack of availability of a sun simulator means that charging power is limited by the umbilical connection. Further, the radio transmitters for UHF and S-Band could not be activated during the tests, in order to avoid damaging the corresponding receiver units due to high power reflections. This leads to a decreased overall power consumption. Finally, the time constraints of the test campaign only allowed for one full test cycle.
Test scenario
Representative use case OPS-SAT allows for a wide range of experiments that are accompanied with individual configurations of payloads. As such, each configuration has its individual power draw and is accompanied by corresponding heat dissipation and distribution throughout the spacecraft. As it is not Table 1.
In terms of payloads that are enabled during the tests, a compromise had to be taken. It is desirable to use as many payloads as possible in order to provide comprehensive test coverage. However, the available power via the umbilical wire harness and battery capacity is limited. Additionally, it was decided to power off the fine ADCS due to a software problem at the time of test, that prevented reliable temperature readings. Of course, all components that were disabled during the TVAC tests have been tested in prior and subsequent unit level, subsystem level and system level tests.
Power generation The TVAC tests were performed in a thermally uniform environment, without any Sun simulator or other heat sources. Therefore, the only available power comes from the battery and from the umbilical wire harness. The umbilical harness is limited to a current of 1 A at 8 V, which means that the chose use case will drain the battery eventually.
No radio transmission Both UHF and S-Band transmitter had to be switched off, to avoid damaging the respective receiver units due to reflections inside the TVAC chamber, that would exceed the maximum allowed input power of the receivers. In the particular case of the S-Band transmitter, this means that the additional 10-12 W of transmission power are not contributing to the heating of the spacecraft. As mentioned prior however, this turned out not to be significant as the S-Band transmitter is only operated during ground station passes. It is not contributing significantly to the spacecraft thermal behaviour, as became evident during the OPS-SAT commissioning phase.
18 Thermal vacuum tests for the ESA's OPS-SAT mission K Fig. 3 Temperature sensors on the battery (TRP), SEPP (TP1), S-Band transceiver (TP2) and Optical receiver housing (TP3). These sensors are added for the purpose of testing, in order to monitor the respective unit's temperatures even when they are disabled Time constraints TVAC tests naturally take a long amount of time, since the heat exchange between spacecraft and ambient (TVAC chamber) can only happen due to thermal radiation. Approaching a thermal plateau therefore can take anywhere from hours to days or even longer, depending on the size and thermal mass of the spacecraft. In the case of OPS-SAT, the thermal test campaign was limited to four workdays, and the facilities could not be accessed during the night. To avoid starting from zero every day, the TVAC chamber was left in it's respective state at the end of the day and the tests were continued the next day. These constraints in effective test time meant, that only one full thermal cycle could be performed, as shown in the next section.
Thermal sensors
Aside from the multitude of temperature sensors that are integrated into the various bus and payload components of OPS-SAT, a variety of additional temperature sensors has been added for the TVAC tests. Those sensors can be further distinguished as internal temperature sensors and external temperature sensors. The internal sensors are mounted inside of the spacecraft body, directly on the corresponding payload components and can be found in Fig. 3 [3]. One sensor is placed on the battery (TRP), one on the SEPP housing (TP1), one on the S-Band transceiver (TP2) and one on the Optical receiver (TP3). The TRP sensor is critical as the battery has the lowest thermal limits.
The external sensors are shown in Fig. 4, with four sensors (TP4 to TP7) placed on the structure and one placed on the spacecraft front panel (TP8). Not shown in the figures are the sensors placed on the solar wings, the MGSE and the TVAC chamber [3]. Those sensor are listed in Table 2.
Test setup and approach
Satellite mounting In order to conduct tests in a thermal vacuum chamber, appropriate mounting of the spacecraft is required. It cannot simply be placed on The temperature sensors TP4 to TP8 are additional sensors that have been mounted temporarily during the TVAC tests, to monitor the temperatures on the spacecraft structure The cut-out 90°corners (C) are for safety only, to prohibit the spacecraft from sliding out of the MGSE but are otherwise in no direct contact with the structure PTFE was chosen because of its low thermal conductivity [6] on the one hand, but also because it was available in our facilities at the time and could easily be machined into the required shape. Fig. 5 shows the satellite inside of the TVAC chamber [3], resting on the MGSE, as visible in the bottom right part of the figure (A). There are a total of four mechanical contact points (C), one on each corner of the spacecraft. Those contact points are reduced to a circular contact surface of 5 mm in diameter. While the mechanical and as such, the thermal interface between satellite and MGSE had been reduced as much as structurally feasible, a potential problem related to using an MGSE may become apparent. By placing a solid mechanical structure underneath the satellite, the thermo-optical view factor between the satellite and the TVAC chamber is reduced. Potential mitigation strategies for this problem may include: a different type of mechanical mount, i.e. suspension mount, reducing the physical dimensions of the MGSE to reduce view factor blocking or a surface treatment of the MGSE, in order to correlate with the thermo-optical properties of the TVAC chamber. The last point should be coupled with an adjustment of the dwell times at the thermal plateaus to consider the additional thermal mass of the MGSE.
Test phases One OPS-SAT TVAC test cycle can subdivided to seven distinct phases, that are briefly describe in the following list: I Initial cool down: Bring the TVAC chamber and the satellite to temperature T start , in order to start the temperature gradient determination phase II Temperature gradient determination: Determination of the temperature gradient between the A graphical representation of the TVAC test timeline is shown in Fig. 6, indicating the respective phase, corresponding temperature and state of the spacecraft [3]. Initial phase Phases I and II, the initial cool down and the temperature gradient determination phase are a crucial part of the test that serves the purpose of identifying the temperature difference, ΔT , between the spacecraft in a thermal equilibrium state and the TVAC chamber. This ensures that the spacecraft does not eventually exceed any of the component's maximum allowable operating temperature at a given ambient temperature and a given power consumption. If the expected component temperatures are known, e.g. due to thermal simulation, the TVAC chamber can be set to the corresponding temperature at start. Otherwise, it is a best guess scenario, that may require a couple of iterations if the initial temperature of the TVAC chamber is set too high. On the other hand, it takes more time for initial cooling of the chamber 20 Thermal vacuum tests for the ESA's OPS-SAT mission K and the spacecraft, if the initial temperature is chosen too low. For OPS-SAT, the initial temperature of the TVAC chamber was chosen at 5°C, which turned out to be a good guess as this led to a final battery temperature just below 42°C, very close to the maximum allowable 45°. In other words: by increasing the TVAC chamber temperature by another 3°C, the maximum operational temperature of the battery can be reached.
Test cycle A test cycle can be summarised by the phases II to VI, as shown in Fig. 5. Typically, a couple of those cycles should be performed in the course of a TVAC test campaign, however, due to time constraints, only one full cycle could be conducted. As the corresponding temperatures suggest, the spacecraft is brought to it's maximum operational temperature first (phase II) and kept at this temperature until a predefined amount of time, the so-called dwell time, has elapsed. Note that after the first full cycle, no more temperature gradient has to be determined during phase II. In phase III, the spacecraft is powered down and brought to its maximum non-operational temperature. Since the spacecraft is now not producing any heat on its own, this has to be achieved by increasing the temperature of the TVAC chamber accordingly. After another dwell time period has elapsed, the spacecraft is brought down once more to maximum operational temperature (phase IV) and a defined set of functional tests is executed, before the spaceraft is powered down again and cooled to its minimum non-operational temperature (phase V). In phase VI, the temperature is increased towards the minimum operational temperature of the spacecraft and a coldstart is performed, followed by functional checks.
Final phase Once one or more test cycles are complete, the TVAC chamber is brought back up to a few degrees above ambient temperature, to avoid any condensation on the spacecraft. The spacecraft is kept at an operational state during this temperature increase, until its maximum allowable temperature is reached. Functional tests are performed throughout the whole period. Finally, the TVAC chamber is pressurized again.
Safety margin A safety margin of 1°C was subtracted from all maximum and minimum operational and non-operational temperatures to account for sensor inaccuracies and uncertainties in temperature regulation.
Results
This section shows the results of the OPS-SAT TVAC test campaign with respect to the additional temperature sensors that have been placed at key components inside of the spacecraft on the outside of the spacecraft. The temperature sensors that are integrated into the various bus and payload units are not shown, as this would exceed the scope of this article. Internal sensors The top graph of Fig. 7 shows the test phases I to VII and the corresponding temperatures for the additional internal temperature sensors which are highlighted in Fig. 3. The OPS-SAT battery acts as temperature reference point TRP and the temperatures are shown additionally for the SEPP (TP1), the S-Band transceiver (TP2), the Optical receiver (TP3) and the thermal chamber shroud [3].
The test starts at ambient temperature at the section prior to phase I followed by a cool-down to 5°C before powering on the spacecraft. Power states and and functional tests are highlighted as green triangle (power on), red rectangle (power off) and grey diamond (functional test). The TVAC chamber temperature (blue curve, TP14) is decreased on purpose two times (green arrows) to speed up the respective cooling phases of the spacecraft. The changes in slope of the SEPP temperature (black curve, TP1) during phase III serve the purpose of accelerating the heating of the spacecraft by setting the SEPP to a high power consuming state. The TVAC chamber dictates the overall temperatures during the non-operational phases, as nicely visible shown in phase III and V. All temperatures follow the TVAC chamber in the non-operational case but do not fully approach this temperature.
The relation between the battery temperature (TRP) and the rest of the powered payloads and components is most critical for OPS-SAT, since the battery is most critical from a thermal point of view. The results show, that the battery is in close relation to the temperature on the SEPP (TP1). This behaviour can be related to the physical proximity of the SEPP and the comparably large power draw of the SEPP, compared to the rest of the powered payloads. It is further clear form TP2 and TP3, that the temperatures of the less power consuming payloads are significantly lower and appear to gradually decrease with increasing distance from the SEPP.
External Sensors The bottom graph of Fig. 7 shows the temperatures of the external sensors placed on the spacecraft structure, as shown in Fig. 4. The sensors TP4 and TP5 are placed on the front side of the structure and TP6 and TP7 are placed on the backside of the structure. TP8 is placed on the front body panel. TP9 is placed on a deployable solar wing, just next to its hinge. TP10 and TP11 are placed on the bottom and the top of the MGSE repsectivel and TP12 to TP14 are placed at various locations inside the TVAC chamber. Most noteworthy with respect to the external sensors, is that the structure and body mounted sensors TP4 to TP8 follow the SEPP temperatures in close relation, albeit at roughly 10°C difference. A temperature gradient can be observed with increasing distance from the SEPP, leading to higher temperatures on the sensors in closer proximity to the SEPP, namely TP4 and TP6. Further noteworthy is the behaviour of TP11 and TP12, both in very close range, well below the temperatures of the active components but still roughly 4°C above the TVAC chamber itself. For TP12, this means that there is some remaining heat transfer between the solar wings and the spacecraft structure via the solar wing hinges. For TP11, this means that there is some remaining heat transfer between the spacecraft structure and the top part of the MGSE. TP10 can be seen at rougly 1°C above TVAC chamber temperature, meaning that there is conductive heat transfer between the spacecraft and the TVAC chamber via the MGSE.
Functional tests The predefined set of functional tests was executed successfully during all phases and all temperatures. An additional set of tests was performed subsequent to the TVAC tests, in order to verify that no late effects have occurred.
Conclusion
Summary OPS-SAT is a versatile and, for its size, powerful spacecraft. This versatility comes at the cost of testing complexity throughout all test-campaigns, including the TVAC test campaign. Several compromises had to be made to conduct the TVAC tests within the given time frame of four days. This includes selection of a subset of payloads that can be switched on during the tests. On the one hand, not every possible constellation that is required by an OPS-SAT experiment can be tested, on the other hand a subset of payloads had to be selected due to charging power constraints. The radio transmitters could only be powered on in receive mode, leading to significantly less power draw which is mitigated through the fact, that radio transmissions are not continuous and only active for a couple of minutes during ground station passes. The time frame for the tests only allowed for one full cycle of thermal vacuum tests. To mitigate this problem, the time frame could be extended or alternatively, night time could be used additionally for active testing.
TVAC tests The TVAC test results show a strong correlation between the SEPP temperature and the OPS-SAT battery, leading to the conclusion that the SEPP is a major factor to consider, when running OPS-SAT experiments for extended time periods. A negative temperature gradient is observed with increasing distance from the SEPP. This is to be expected, as the SEPP is the most prominent heat source during the tests. The marginal temperature changes on the solar wings leads to the conclusion, that the wings are in poor thermal contact with the rest of the spacecraft. The wings, therefore, can only contribute marginally for cooling or heating, depending on the respective attitude in orbit, i.e. whether a wing is currently illuminated by the Sun or facing cold space. The temperature sensors on the MGSE show, that the chosen MGSE design is not ideal as the MGSE is not fully thermally decoupled from the spacecraft. This leads to a marginal thermally conductive link between MGSE and TVAC chamber, that is in the same order of magnitude as the thermal link between spacecraft and solar wings.
Functional tests The most important conclusion from the TVAC tests is, of course, that the functional tests could be executed successfully during all phases. This shows, that OPS-SAT and its components can be successfully operated under vacuum conditions and under the thermal limits of the individual components. It shall be noted at this point, that all payloads that were not switched on during the TVAC test campaign have been validated in the respective preceding unit and subsystem tests under vacuum conditions and thermal limit conditions.
Lessons learned Improvements for future test campaigns will be based on the lessons learned during the OPS-SAT TVAC tests and preparation. A critical point was the selection of powered payload components during the respective hot and cold operational tests and the corresponding functional tests. Ideally, all components should be included in future tests, and this requires appropriate planning, as not all components may be powered simultaneously. Addition of a Sun simulator would yield a more realistic on-orbit scenario, rather than a uniform temperature environ-22 Thermal vacuum tests for the ESA's OPS-SAT mission K Originalarbeit ment and might reveal attitude-dependent thermal behaviour that could not be observed with a uniform environment.
Planning and construction of the MGSE should be considered well in advance of the test campaign and in coordination with the available TVAC facilities. The mechanical interface between MGSE should be planned with care and as little contact surface as possible. Additionally, any blocking of the view factor between spacecraft and TVAC chamber needs to be considered, in particular if a Sun simulator is used, to avoid any shading of the solar arrays.
Finally, sufficient time and margin is important for such a test. In particular the time required for the spacecraft to reach the respective hot and cold plateaus is unknown prior to the tests.
Funding Open access funding provided by Graz University of Technology.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. He was project member in the OPS-SAT project and is the systems engineer for the PRETTY satellite. | 5,780.6 | 2022-02-01T00:00:00.000 | [
"Physics"
] |
A Poisson Process-Based Random Access Channel for 5G and Beyond Networks †
: The 5th generation (5G) wireless networks propose to address a variety of usage scenarios, such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). Due to the exponential increase in the user equipment (UE) devices of wireless communication technologies, 5G and beyond networks (B5G) expect to support far higher user density and far lower latency than currently deployed cellular technologies, like long-term evolution-Advanced (LTE-A). However, one of the critical challenges for B5G is finding a clever way for various channel access mechanisms to maintain dense UE deployments. Random access channel (RACH) is a mandatory procedure for the UEs to connect with the evolved node B (eNB). The performance of the RACH directly affects the performance of the entire network. Currently, RACH uses a uniform distribution-based (UD) random access to prevent a possible network collision among multiple UEs attempting to access channel resources. However, in a UD-based channel access, every UE has an equal chance to choose a similar contention preamble close to the expected value, which causes an increase in the collision among the UEs. Therefore, in this paper, we propose a Poisson process-based RACH (2PRACH) alternative to a UD-based RACH. A Poisson process-based distribution, such as exponential distribution, disperses the random preambles between two bounds in a Poisson point method, where random variables occur continuously and independently with a constant parametric rate. In this way, our proposed 2PRACH approach distributes the UEs in a probability distribution of a parametric collection. Simulation results show that the shift of RACH from UD-based channel access to a Poisson process-based distribution enhances the reliability and lowers the network’s latency.
Introduction
An enormous increase in the demand for capacity in mobile communication devices has led wireless communication industries to prepare to support up to a thousand-fold increase in total internet traffic [1][2][3]. The 3rd Generation Partnership Project (3GPP) suggests that connecting the user equipment (UE) to an existing cellular network, such as Long-Term Evolution-Advanced (LTE-A), 5th generation (5G), and beyond 5G (B5G) networks [4], requires the higher layer connections between the UEs. In general, a considerable amount of data needs to be distributed from many UEs on a 5G network. In this way, the UEs perform a random access (RA) mechanism for transmitting resource requests to the base station, known as evolved Node B (eNB) [5]. The UEs execute RA using the physical random access channel (RACH) through a four-step handshake process. Several UEs attempt to communicate over the same channel resources in a dense UE deployment. The UEs contend to control the common radio resources, which creates a massive collision problem. Due to simultaneous UE channel access, preamble collisions can obstruct the RA process. The problem of successful RA is crucial due to the increasingly growing number of connected UEs in the network [6]. A standard 5G network consists of two parts: the enhanced packet core (EPC) network and the radio access network (RAN) [7]. A high-level architecture of a typical 5G network with linked UEs' connectivity is shown in Figure 1, where the UEs are linked to the eNBs. The EPC is responsible for the ultimate regulation of mobile devices and creating an Internet Protocol (IP) packet transmission path. The RAN is responsible for wireless networking and radio resource usage. The RAN, which provides the requisite protocols for the user and control plane to communicate with mobile devices (UEs) in 5G network, is composed of eNBs. The eNBs are interconnected through the X2 interface. In addition, the eNB is connected to the EPC using an S1 interface [8]. In a standard 5G network, the minimal resource scheduling unit for downlink (DL) and uplink (UL) transmission is referred to as a resource block (RB). An RB consists of 12 subcarriers in the frequency domain (FD), each size of 180kHz and one subframe in the time domain (TD), length of 1 ms. This time-frequency resource is called RACH, and it is the RB on which RA is performed. RA helps UEs initialize an association, known as a contention-based RA (CB-RA) method [9]. In a CB-RA, UEs utilize preambles to launch the RA transmission attempt. There is a total of 64 preambles divided into two categories; preambles of contention-free RA (CF-RA) and preambles of CB-RA. For CF-RA, the eNB incorporates a few preambles and designates specific preambles for various UEs. Residual preambles are used for CB-RA, where every UE randomly chooses one preamble from a set of predefined uniform random variables (RV) [7].
This uniform distribution (UD) of RVs is used to prevent the inevitable collisions in the 5G network when multiple UEs attempts to access the channel resources. However, in a UD-based channel access mechanism, every UE has an equal chance to choose an identical contention preamble close to the mean value of the UD, that is, for a lower bound a and an upper bound b, which may cause an increase in a collision among the UEs. However, we may use a Poisson process-based distribution, which expresses the probability of a given RV of events independently and distributively occurring in a fixed interval of time or space with a known constant rate. Figure 2 shows a random variable (X) between a uniform random value and a Poisson process-based random value (that is an ED). In a Poisson process-based method, an ED distributes random values between two boundaries. Random variables exist constantly and independently with a constant average rate of 1 λ , for the λ as a constant rate parameter.
Contributions of the Paper
The motive to introduce a Poisson process-based RA framework is to spread the UEs in a parametric set of a probability distribution. The parametric distribution approach allows the system to disperse RVs exponentially. Therefore, in this paper, we suggest using RVs with Poisson distribution, such as a continuous exponential distribution (ED). The proposed mechanism is named as Poisson process-based RACH (2PRACH). The contributions of this paper are twofold: • This paper assesses the strengths of Poisson distribution RVs as compared to the uniform distribution RVs. • We propose a 2PRACH mechanism, which suggests replacing UD with ED in random access mechanism for 5G/B5G networks.
The rest of the paper is organized as follows. In Section 2, we present related work on enhancing the existing RACH procedure in 5G cellular networks. Section 3 describes the existing RACH mechanism in 5G networks. In Section 4, we present our proposed 2PRACH mechanism for random access in 5G networks. Simulation results and performance evaluation is discussed in Section 5. Finally, in Section 6, a conclusion is given, along with future work considerations.
Related Research Works
In related research contributions, many researchers have proposed mechanisms to decrease the delay at the RACH procedure. One of the proposals from 3GPP is an early data transmission (EDT) as a feature of the Release 15 specification [8]. According to EDT, data transmission services from the UL channel are sent sooner, enabling data packet transmission to be piggybacked with the RACH system. In Reference [9], authors gave some underlying findings on the execution of EDT, showing that it shows improvements at the edge of the network in-data packet latency by 85 ms. Condoluci et al. [10] conducted performance studies to show that a two-way handshake RACH mechanism is based on an exceptionally structured RA preamble that guarantees a 10-50% delay reduction for 5G macro-cell networks and 50-70% femtocell 5G networks compared to the standard RACH method. A specific RACH resource method for ultra-reliable low-latency communications (URLLC)-related traffic is proposed in Reference [11], designated as resource allocation priorities. Authors propose that by reserving RA preambles twice the amount of URLLC-based UE requests, a channel access latency of less than 10 ms can be obtained for 95% of URLLC-based UEs. Jiang et al. [12] developed a stochastic geometry model for evaluating the effect of diversity by repeating RA preambles to increase the probability of success in RACH. Their analysis shows that repetition of the RA preamble leads to inefficient use of wireless channel resources in considerably dense UE implementation. Besides, Vural et al. [13] submitted that the benefits of using multiple RA preambles in the RACH process could be seen for smaller preamble group size, up to 20 as the resource usage saturates with repeated transmissions. Their analysis shows that repetition of the RA preamble leads to inefficient use of wireless channel resources in considerably dense UE implementation. Furthermore, Vural et al. [14] showed that the benefits of using multiple RA preambles in the RACH method could be seen for smaller preamble group size, which is up to 20 as resource usage saturates with repeated transmissions. Therefore, RA improvements include short transmission slots of 5G numerology, the quicker transmission of UL data packets, efficient backoff times, and dedicated resource allocation for URLLC applications to reduce the channel access latency. However, maintaining high reliability needs progressively channel resources, such as consistency, diversity redundancy, and retransmission, while extending latency over URLLC application requirements. The authors propose a novel RA enhancement in Reference [15], including parallel RA preambles, dynamic, reserved RA preambles, and enhanced backoff mechanisms to diminish the channel collision probability. Arouk et al. [16] developed an analytical method to model RACH procedure output in machine type communication (MTC) networks, which is also a promising 5G network. Their proposed model is essentially used with any system to manage congestion that affects RA procedure. In Reference [17], the authors recommend avoiding the RACH cycle without needing network synchronization to achieve smooth mobility. Their proposed generalized RACH-less handover scheme reduces the latency considerably. Ali et. al. [18] proposed a contention-resolution-based RACH (CRB-RACH) system that dynamically adapts backoff times to allow further improvements compared to a fixed-back-off scheme. Hsu et al. [19] propose a random-access scheme for multi-radio access technologies (RAT), named Multi-RAT RA, which uses traffic offloading configuration parameters utilizing the licensed and unlicensed bands. Although the authors achieve a higher average success probability with their proposed Multi-RAT RA scheme, the use of dual or multiple RATs is required to complete this efficiency. Liu et al. [20] enhanced the RACH procedure by obtaining the approximate characterization of UEs' interference in a wireless system. Their derived analytical expression of success probability helps to consider the channel collision and the preamble transmission. The authors further extend their proposed RACH success probability analysis for multiple time slots by modeling the queue evolution. In Reference [21], the authors address excessive congestion and channel collision in the RACH due to massive users' access. They propose a dynamic adjustment of the backoff parameters based on the number of contending devices. The vibrant use of backoff parameters in a RACH scheme achieves the enhanced channel access success probability for statics access and random access with a slight increase in the access delay. Another work in Reference [22] proposes a Timing Advance-based Preamble Resource Expansion (TAPRE) scheme for RACH procedure, which adjusts time slot for preamble transmission to reduce the collision probability effectively. The authors achieved this with a Resource Allocation Wait (RAW) protocol, which efficiently reduces RA failures. However, these works considered enhancing the existing uniform distribution-based RACH mechanism.
Problem Statement
The aforementioned related research works propose significant changes to currently implemented RACH, which improve its performance in various applications and context. However, neither of the works propose to replace UD and discusses the limitations posed by the UD distribution. It has been observed that with the continuous evolution of wireless communication technologies and the massive increase in connected devices, UD-based channel access mechanisms have already been proved less efficient. One of the reasons behind the still use of UD-based mechanisms is the backward compatibility and ease of use. However, due to channel scarcity and resource constraints, we need to move on to better options. Keeping this in mind, in this paper, we expand the capabilities of the current RACH by implying Poisson process-based exponential distribution. As described earlier, from Figure 2, the users' dispersion in the earlier channel access slots allows the system to permit early channel access with low collision chances.
Existing Contention-Based Random-Access Mechanism
When a UE is switched on or awakens, it initially synchronizes with the DL channels by reading the primary synchronization signal (PSS) and secondary synchronization signal (SSS) from the eNB. The UE separates the Master Information Block (MIB) at that point, which contains data on the DL and UL carriers' configuration so that the eNB receives data from the Sender Information Block (SIB). All RA parameters are included in this SIB, such as the number of available RA slots, RA preamble classes, and preamble setup. Subsequently, UEs generate CB-RA transmission attempts in order to decode the SIB. For association initialization in a 5G network, the CB-RA conducts four main phases. Figure 3 shows a CB-RA procedure in a 5G network.
Preamble Transmission (UE → eNB)
By choosing a randomly generated one of the accessible CB preambles from a uniform distribution, a UE initiates CB-RA and sends it to the eNB at the next available RACH slot. The eNB regularly broadcasts SIB messages that encourage the UEs to select a good preamble from them. The physical properties of RA preamble in a PRACH contain the RA radio network temporary identification (RNTI) and the preamble data configuration. The UE waits for an RA response (RAR) window once the preamble is submitted.
Random Access Response (RAR) (eNB → UE)
The eNB calculates the power delay profile (PDP) of the obtained preamble on the PRACH. A predefined threshold is tested for this calculated PDP, and, if it is found to be greater than the threshold, it is called an active RA preamble. The eNB decodes the RNTI for each active RA preamble to discover the RA slot where the preamble has been submitted. Later, the eNB sends a RAR message to the decoded UEs on the DL control message channel. The RAR message includes a timing advance (TA) instruction to synchronize eventual UL transmissions, a UL resource award for radio resource channel (RRC) requirements, and a short RNTI, which could be made stable at the collision resolution period (CRP). However, if different UEs transmit a similar preamble at a similar RA slot, a collision occurs.
RRC Connection Request (UE → eNB)
Channel services are delegated to the UE as specified in the previous step; hence, the UE sends an RRC link request and a scheduling request to eNB. In step 3, a message is forwarded to the temporary RNTI assigned in Step 2 of the RAR message and conveys either a particular RNTI if the eNB already has one RRC-associated UE, or an initial UE identity, or a randomly chosen number. However, colliding UEs seek to retransmit RA channel requests using the same UL procedure due to a collision in phase 2; consequently, further collisions occur in the network.
RRC Connection Setup (eNB → UE)
This phase is also known as CRP, in which an eNB acknowledges the UE after decoding the RRC request. RRC link configuration messages are sent using the dedicated RNTI. After this, an acknowledgment is submitted by the active UEs to the eNB and proceed with data transmission. However, once the limit of retransmission attempts is approached, the colliding UEs must wait to begin a new CB-RA process.
Proposed Poisson Process-Based RACH (2PRACH) Mechanism
Since, a very few numbers of preambles for CB-RA are used in each RA slot. The use of uniformly distributed random variable-based RA for preamble collection has a restriction on assembling contending UEs close to an estimated mean value of µ = (a+b) 2 , resulting in higher collisions after a long run or concurrently accessing multiple UEs. Therefore, we recommend that a Poisson process-based RACH (2PRACH) can be used instead of UDbased RACH. A continues exponential distribution is one of the probability distributions which deals with the time events in a Poisson process. The probability density function (PDF), ψ(.), of such an exponential distribution with a random variable x (preamble in a RACH) and constant parameter λ can be defined as, In this equation, λ > 0 is the constant rate parameter of a Poisson process-based exponential distribution. A Poisson process-based exponentially distributed random variable X with a constant rate parameter λ allows UEs to stay with the boundaries of mean given by: which is evident to make sense that, if a UE try to access the channel at an average rate of λ = 2 per data frame transmission, then the UE expects to wait E[X] = 1/2 = 0.5 for every next transmission attempt. In addition, the variance of such a UE with random variable X to access the channel resources is given by, Hence, the standard deviation of the UE remains same as of its mean value. In 2PRACH mechanism, every UE follow the memory-lessness property of the ED. According to this property, a time domain ED random variable, for example T, follows the relation given by: This relationship can be formulated by considering the tail distribution, that is the complementary distribution function, and is given by, Thus, the time spent by the UE waiting to access the channel relative to its previous or initial time is conditioned on its failure to access the resources at timeslot i, while the remaining channel access time is distributed same as the original unconditional distribution. This can be explained as, if a UE fails to access the channel at timeslot i, the conditional property that channel access will take place at least after j timeslot and is equal to the unconditional probability of accessing the resources more than j timeslot after the i timeslots.
Performance Evaluation
The proposed 2PRACH approach decreases the collision incidence and improves the performance rate of the existing RA of the UE association without altering the 3GPP recommended RA procedure. We conducted simulations in a discrete-event network simulator release 3.30.1 (ns-3.30.1) [23] to evaluate the performance of the 2PRACH approach. The network topology used in simulations includes a radio access network part of a typical UE communication, as shown in Figure 1. The proposed approach's efficiency is measured in terms of network stability (reliability) and end-to-end latency. These two evaluation parameters are tested for three different scenarios. First, we conducted simulations with increasing number of UEs in the network, that is N = {2, 4, 8, 16, 32, 64}. Later, we assess the efficiency with varying data packet sizes and interarrival packet speeds. The objective of conducting simulations of various packet sizes and rates of interarrival is to evaluate the impact on the proposed mechanism's real data transmissions. We observe from Figure 2 that the users' distribution tends towards the similar density function as of a uniform distribution. Therefore, one can choose the average rate parameter (λ) according to the conditions and requirements. In this paper, we use λ = 8 as our rate parameter, which distributes the users near the initial channel access slots. Detailed simulation parameters and their used values are described in Table 1. Figure 4 compares the efficiency of our proposed 2PRACH mechanism with the existing uniform distribution-based RACH procedure, and the CRB-RACH mechanism [18], where the number of contending UEs varies. In Figure 4a, we show that the 2PRACH mechanism achieves higher reliability than existing RACH and CRB-RACH procedures, also in dense UEs deployments, which is 64 UEs. Similarly, the network's end-to-end latency is also reduced for the proposed 2PRACH mechanism, as shown in Figure 4b. In the 2PRACH scheme, improved reliability and reduced latency are evident that for a denser UE environment, choosing earlier RA preambles with a constant parametric rate decreases collision among the UEs. The improved efficiency is because a Poisson process-based distribution manages the number of events in a fixed time frame and the time between occurrences of those successive events. It fits our RACH scheme's settings because it is one of the distributions with the "lack-of-memory" property. It means that, after waiting to access the channel without successful transmission, the probability of a UE to access the channel in the next contention is the same as was the probability (in previous transmission attempt) of accessing the channel in the following two transmission attempts. Thus, as a UE in the system continues to wait, the chance of successful transmission neither increases nor decreases based on the parameter selected. Although the CRB-RACH procedure improves the efficiency compared to the existing RACH with the use of dynamic backoff adjustment, due to the use of uniformly distributed backoff parameters, it achieves lesser reliability and higher latency as compared to 2PRACH. The efficiency of the proposed 2PRACH mechanism is also measured with various data packet sizes. The motivation to evaluate an RA process with different data frame sizes is that the UEs' channel capital occupancy time very much depends upon the data frame size to transmit. Figure 5a,b show the effect on the network's stability and latency of various data frame sizes. The figures reveal that the 2PRACH procedure works well for both; reliability and end-to-end latency relative to existing UD-based RACH when considering the different types of data frame sizes.
However, the influence of data frame interarrival rate has fewer effects on the network's stability and latency efficiency, as seen in Figure 6a,b, respectively. The importance of using a Poisson process-based distribution is evident from the figures (that is, Figures 4-6). The main purpose of the proposed 2PRACH for the RA procedure is to enable the UEs in the network to carry out their initialization of the association more effectively, where reliability is accomplished by reducing network collisions.
Discussion on the Substantial Impact
As we described above in the problem statement section, neither of the related research works propose to replace the traditional uniform distribution-based channel access procedure due to its ease of use and backward compatibility. Therefore, the limitations and challenges posed by the uniform distribution are always ignored. Our proposed 2PRACH protocol expands the current RACH capabilities by implying parametric channel access, which is a more dynamic and robust technique. It opens new ways for researchers and industrialists to think beyond the conventional RACH mechanism and overcome the channel scarcity challenges. As shown in the results, the achieved higher reliability, and the reduced latency proves that the shift of the RACH mechanism from a uniform distribution to a Poisson process-based mechanism has potentials for the next-generation 5G and beyond networks.
Limitations of the Work
In our current article, we focus on highlighting the issues related to the currently in-use RACH mechanism due to a uniform distribution-based channel access. We propose using a Poisson process-based RACH mechanism to enhance the channel access in terms of increased reliability and reduced end-to-end delays, which is evaluated based on several simulations and experiments. However, it is more convincing for the readers to see the match of analytical results with the simulation results. We understand the importance of comparing analytical results with simulation results to affirm the performance improve-ment of the proposed solution. However, our current manuscript does not include such analytical modeling due to the related complexities in designing a Markov Chain-based analytical model for our proposed solution. We are already working to fix this limitation and hope to develop a novel analytical model for Poisson process-based channel access mechanisms.
Conclusions and Future Work
One of the challenges for 5G cellular communication networks is to provide effective channel connectivity, especially for denser UE scenarios. In a 5G network, the random access channel (RACH) procedure is the core channel access mechanism to set up the wireless communication association between a UE and eNB. However, the efficiency of the currently deployed RACH system is greatly affected by the rise in the number of contending UEs in a network. It is due to the limited available channel contention preamble set. The selection of contention preambles based on the uniformly distributed randomaccess mechanism of the RACH system is one reason for this efficiency loss. In a uniform distribution, each UE has an equal opportunity to select identical contention preambles close to the mean value of the distribution, creating a rise in collisions among the UEs. Since there is only a single contention stage for the UEs to access the channel, we may consider alternate solutions to allow the UEs to access the channel as early as possible. For this purpose, we propose a Poisson process-based RACH, named 2PRACH, which is based on continuous exponential distribution. The proposed 2PRACH distributes contention preambles between two bounds in a Poisson point method, in which random variables exist continuously and independently with a constant average rate allowing UEs to access the channel resources at their earliest slots. In this way, the 2PRACH mechanism distributes the UEs in a parametric collection of the probability distribution. The performance evaluation results of simulation experiments show that 2PRACH significantly improves the reliability of the network. The increased reliability is achieved due to the enhanced capacity of the UEs to transmit their data packets. At the same time, the long waiting period of the uniformly distributed preamble is eliminated to achieve reduced latency, as well.
In the future, we plan to apply a reinforcement learning-enabled framework to improve the efficiency of 2PRACH. The behaviorist appraisal feature of reinforcement learning models is the incentive to incorporate reinforcement-learning to refine the RA procedure in 5G networks. Besides, we are also working to develop a novel analytical model for Poisson process-based channel access mechanisms. | 5,915 | 2021-03-02T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Phytochemical investigation of Volutaria lippii and evaluation of the antioxidant activity
Abstract Volutaria lippii (L.) Cass. ex Maire, syn. Centaurea lippii (L.), (Asteraceae) is a plant from the central region of Algeria, considerably distributed in all Mediterranean areas. Herein, the antioxidant activity of the three derived fractions [chloroform (CHCl3), ethyl acetate (EtOAc) and n-butanol (n-BuOH)] of the 70% methanol extract of the aerial parts (leaves and flowers), was assessed by using CUPRAC, ABTS, DPPH free radical scavenging, and β-carotene bleaching methods. The results obtained allowed to guide the fractionation of EtOAc and n-BuOH fractions by CC followed by purification by TLC and reverse phase HPLC. A guaianolide glucoside, 3β-hydroxy-11β,13-dihydrodehydrocostuslactone 8α-O-(6'-acetyl-β-glucopyranoside) (1), never reported in the literature, was isolated together with other known compounds (2–14). Their structures were elucidated by the extensive use of 1 D- and 2 D-NMR experiments along with ESI-MS analyses and with comparison with literature data. Graphical Abstract
Introduction
The genus Centaurea of the family Asteraceae includes more than 500 species widespread worldwide, among which 45 are found in Algeria (Quezel andSanta 1963, Labed et al. 2019).
Previous studies revealed the richness of this genus in sesquiterpene lactones and flavonoids (Fernandez et al. 1989;Marco et al. 1992).Centaurea species are known for their important activities like antidiabetic, antirheumatic, anti-inflammatory, colagog, choleretic, digestive, diuretic, antipyretic and antibacterial (Aktumsek et al. 2013, Ugur et al. 2009, Shakeri et al. 2019).Quite close to the Centaurea genus, the genus Volutaria Cass., tribe Cardueae, subtribe Centaureinae of the Asteraceae family, comprises eighteen species growing in semiarid to arid zones in the Mediterranean and Irano-Turanian region from Arabia and Iran to Morocco (Kadereit and Jeffrey 2007).In Algeria there are five species of Volutaria distributed in the south region, two of them are endemic to the Sahara (Quezel and Santa 1963).Volutaria lippii (L.) Cass.ex Maire (Asteraceae), synonyms; Centaurea lippii (L.); Amberboa lippii (L.) DC., is considerably distributed in all Mediterranean areas.Previous phytochemical investigations of this species have led to the isolation of sesquiterpene lactones and flavonoids (Mezache et al. 2010;Rafrafi et al. 2021).In the present work, the antioxidant activity of the three derived fractions of this plant was evaluated.The obtained results prompted us to investigate the chemistry of the EtOAc and n-BuOH fractions.So, the extracts were purified by different chromatographic steps, affording the sesquiterpene lactone glucoside (1), never reported in the literature, together with 13 known compounds (2-14), among which compounds 2, 4, 6-9, 11-14 were identified for the first time in the aerial parts of this species.Their structures were established by a combination of oneand two-dimensional NMR techniques, and mass spectrometry.
Results and discussion
The antioxidant activity of the CHCl 3 , EtOAc and n-BuOH fractions was evaluated using CUPRAC, ABTS, DPPH free radical scavenging, and b À carotene bleaching methods (Apak et al. 2004, Montoro et al. 2013, Blois 1958, Marco 1968).Butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and a-tocopherol were used as positive controls.The tests were performed at different concentrations to calculate the IC 50 and A 0.50 values.The EtOAc fraction showed the highest activities followed by n-BuOH fraction.The results were statistically significant (p < 0.05) compared to the controls in each test (Table S1).
The negative ESI-MS spectrum of compound 1, isolated as a white solid, showed a chlorinated adduct ion [M (1) þ Cl] -at m/z 503.17, supporting the molecular formula C 23 H 32 O 10 .The presence of chlorine was deduced from the natural isotope distribution pattern of chlorine 35 Cl/ 37 Cl $3/1.This spectrum also showed another adduct ion at m/z 513.20 corresponding to [M (1) þ HCOO] -which confirmed the molecular formula of compound 1.
The 1 H and 13 C NMR analysis suggested the presence of a sesquiterpene derivative of a guaianolide class (Shakeri et al. 2019).
The 1 H NMR spectrum of 1 showed signals for two exomethylene at d 5.33 (brs, H-15a), and 5.29 (brs, H-15b), and d 5.07 (2H, s, H 2 -14).In addition, the proton spectrum showed three oxygenated methine protons at d 4.51 (brt, J ¼ 9.8 Hz, H-3), d 4.15 (t, J ¼ 9.8 Hz, H-6), and d 3.80 (m, H-8), an acetyl group at d 2.07 (3H, s), and a signal for a secondary methyl group at d 1.43 (3H, d, J ¼ 7.0 Hz, Me-13).In the 1 H NMR spectrum, a signal corresponding to one anomeric proton at d 4.49 (d, J ¼ 7.4 Hz) was present.The chemical shifts of all the individual protons of the sugar unit were ascertained from a combination of 1 D-TOCSY and DQF-COSY spectral analysis, and the 13 C NMR chemical shifts of their attached carbons could be unambiguously assigned by the analysis of HSQC spectrum experiment.These data demonstrated the presence of a b-glucopyranosyl unit.The chemical downfield shifts of C-6 0 at d 64.4 and H 2 -6 0 at d 4.22 and 4.47 were indicative of an acylation at this position, further confirmed by the HMBC correlations between the proton signal at 2.07 (CH 3 of the acetyl group) with the carbon resonance at d 172.0.Based on these data, and in particular the absence of other carbonyls other than that of the lactone function at d 180.0, an acetate function was located at C-6 0 of the glucopyranosyl unit.A detailed analysis of 2 D-NMR HSQC, HMBC and COSY) experiments revealed that the structure of compound 1 was almost comparable with those of the molecule 11b,13-dihydrodehydrocostuslactone 8a-O-(6 0 -acetyl-b-D-glucopyranoside) (Li et al. 2007) except for a further hydroxy function evident for compound 1 at d H 4.51.This additional hydroxy function was located at C-3 of the guaianolide skeleton based on the HMBC correlations between the proton resonances of exomethylene group (C-15) at d 5.33 (H-15a), 5.29 (H-15b) and the carbon resonance at d 73.4 (C-3).A b À orientation for the hydroxy group at C-3 was established in according with the carbon resonance at d 73.4 (Shimizu et al. 1988;Yang et al. 2008).In fact, the value of the C-3 chemical shift is reported higher for the same class of molecules with a group 3a-hydroxy oriented (Li and Jia 1989).So, compound 1 was identified as 3b-hydroxy-11b,13-dihydrodehydrocostuslactone 8a-O-(6 0 -acetyl-b-D-glucopyranoside) never reported before in literature.
Conclusion
The results presented in this study were the first report on the evaluation of the antioxidant activity of Volutaria lippii (L.).The phytochemical investigation of the ethyl acetate and n-butanol fractions of the 70% methanol extract of aerial parts from V. lippii, led to the isolation of fourteen compounds among which eleven (1,2,4,(6)(7)(8)(9)(11)(12)(13)(14) were described for the first time from this species.It is important to note that compound 3b-hydroxy-11b,13-dihydrodehydrocostuslactone 8a-O-(6 0acetyl-b-glucopyranoside) (1) was a natural compound for the first time described in the literature.The nature of the isolated and described components was in good agreement with the results of studies carried out on Centaurea and Volutaria species. | 1,576.4 | 2022-10-26T00:00:00.000 | [
"Medicine",
"Chemistry",
"Environmental Science"
] |
Design, Synthesis, Antibacterial, and Antifungal Evaluation of Phenylthiazole Derivatives Containing a 1,3,4-Thiadiazole Thione Moiety
To effectively control the infection of plant pathogens, we designed and synthesized a series of phenylthiazole derivatives containing a 1,3,4-thiadiazole thione moiety and screened for their antibacterial potencies against Ralstonia solanacearum, Xanthomonas oryzae pv. oryzae, as well as their antifungal potencies against Sclerotinia sclerotiorum, Rhizoctonia solani, Magnaporthe oryzae and Colletotrichum gloeosporioides. The chemical structures of the target compounds were characterized by 1H NMR, 13C NMR and HRMS. The bioassay results revealed that all the tested compounds exhibited moderate-to-excellent antibacterial and antifungal activities against six plant pathogens. Especially, compound 5k possessed the most remarkable antibacterial activity against R. solanacearum (EC50 = 2.23 μg/mL), which was significantly superior to that of compound E1 (EC50 = 69.87 μg/mL) and the commercial agent Thiodiazole copper (EC50 = 52.01 μg/mL). Meanwhile, compound 5b displayed the most excellent antifungal activity against S. sclerotiorum (EC50 = 0.51 μg/mL), which was equivalent to that of the commercial fungicide Carbendazim (EC50 = 0.57 μg/mL). The preliminary structure-activity relationship (SAR) results suggested that introducing an electron-withdrawing group at the meta-position and ortho-position of the benzene ring could endow the final structure with remarkable antibacterial and antifungal activity, respectively. The current results indicated that these compounds were capable of serving as promising lead compounds.
Introduction
Plant pathogens possess an extremely infectious ability, resulting in a high incidence of plant mortality, which severely affect agricultural production and significantly reduce crop yield and quality [1][2][3][4].For instance, R. solanacearum, belongs to a common Gramnegative opportunistic pathogen, can infect an array of crop species including rice, ginger, tomato, potato and tobacco [5,6].S. sclerotiorum, a necrotrophic phytopathogenic fungus distributed globally, is capable of causing various symptoms, such as stem rot and pod reduction, leading to annual production losses of 10-50% [7,8].At present, the application of chemical pesticides is the most effective measure to manage bacterial and fungal diseases of crops [9,10].However, the misuse and overuse of many conventional pesticides for a long time have created ever-rising resistance, resulting in a significant decrease in the control efficacy of commercial agents [11,12].Thus, it is urgently needed to develop novel and efficient antibacterial and antifungal agents for the control of crop bacterial and fungal diseases.
Structural optimization based on natural products has been an important way to discover novel pesticides, which is of great significance to delay the development of resistance and enhance pharmacodynamic effects [13,14].Thiasporine A (Figure 1) is a heterocyclic natural product initially isolated from marine-derived Actinomycetospora chlora SNC-032 by MacMillan in 2015 [15].In our previous work, we synthesized several series of thiasporine A derivatives with potent antifungal activity and demonstrated that phenylthiazole was a promising antifungal activity skeleton [16].Notably, Shi [17] introduced an 1,3,4-oxadiazole thione into phenylthiazole to synthesize compound E1 (Figure 1), exhibiting the most excellent antifungal activity against S. sclerotiorum with an EC50 value of 0.22 µg/mL, which was superior to that of the commercial fungicide Carbendazim (EC 50 = 0.70 µg/mL).Regrettably, compound E1 did not perform excellent antibacterial activity, but this result provides valuable guidance for carrying out subsequent molecular design.
for a long time have created ever-rising resistance, resulting in a significant decrease in the control efficacy of commercial agents [11,12].Thus, it is urgently needed to develop novel and efficient antibacterial and antifungal agents for the control of crop bacterial and fungal diseases.
Structural optimization based on natural products has been an important way to discover novel pesticides, which is of great significance to delay the development of resistance and enhance pharmacodynamic effects [13,14].Thiasporine A (Figure 1) is a heterocyclic natural product initially isolated from marine-derived Actinomycetospora chlora SNC-032 by MacMillan in 2015 [15].In our previous work, we synthesized several series of thiasporine A derivatives with potent antifungal activity and demonstrated that phenylthiazole was a promising antifungal activity skeleton [16].Notably, Shi [17] introduced an 1,3,4-oxadiazole thione into phenylthiazole to synthesize compound E1 (Figure 1), exhibiting the most excellent antifungal activity against S. sclerotiorum with an EC50 value of 0.22 µg/mL, which was superior to that of the commercial fungicide Carbendazim (EC50 = 0.70 µg/mL).Regrettably, compound E1 did not perform excellent antibacterial activity, but this result provides valuable guidance for carrying out subsequent molecular design.1,3,4-thiadiazole is an important class of five-membered heterocycle ring containing nitrogen and sulfur with broad biological activities as antimicrobial [18], anti-inflammatory [19], antibacterial [20], insecticidal [21] and herbicidal [22], widely applied in the fields of pharmaceutical and agricultural chemistry.Especially in agricultural antibacterial agents, considering 1,3,4-thiadiazole as a "privileged" scaffold, large numbers of commercial pesticides have been developed ceaselessly since 1950s, such as Bismerthiazol and Thiodiazole copper (Figure 1) [23][24][25].Motivated by the above observation, we intended to introduce an antibacterial scaffold 1,3,4-thiadiazole into phenylthiazole active skeleton to discover and synthesize a series of compounds with antibacterial and antifungal potency (Figure 2).All the target compounds were assayed for their antibacterial activities against R. solanacearum and Xoo, as well as their antifungal activities against S. sclerotiorum, R. solani, M. oryzae and C. gloeosporioides, and the preliminary SARs of these compounds were discussed.1,3,4-thiadiazole is an important class of five-membered heterocycle ring containing nitrogen and sulfur with broad biological activities as antimicrobial [18], anti-inflammatory [19], antibacterial [20], insecticidal [21] and herbicidal [22], widely applied in the fields of pharmaceutical and agricultural chemistry.Especially in agricultural antibacterial agents, considering 1,3,4-thiadiazole as a "privileged" scaffold, large numbers of commercial pesticides have been developed ceaselessly since 1950s, such as Bismerthiazol and Thiodiazole copper (Figure 1) [23][24][25].Motivated by the above observation, we intended to introduce an antibacterial scaffold 1,3,4-thiadiazole into phenylthiazole active skeleton to discover and synthesize a series of compounds with antibacterial and antifungal potency (Figure 2).All the target compounds were assayed for their antibacterial activities against R. solanacearum and Xoo, as well as their antifungal activities against S. sclerotiorum, R. solani, M. oryzae and C. gloeosporioides, and the preliminary SARs of these compounds were discussed.
2.
Results and Discussion.
Chemistry
The synthetic steps of the target compounds 5a-5p are outlined in Scheme 1.In brief, benzonitrile 1a was provided by professional suppliers and used as the starting material.Intermediate 2a was harvested via the reaction of benzonitrile 1a, magnesium chloride hexahydrate and sodium hydrosulfide hydrate in N, N dimethylformamide [26].Subsequently, intermediate 3a was afforded via the reaction of intermediate 2a and ethyl
Chemistry
The synthetic steps of the target compounds 5a-5p are outlined in Scheme 1.In brief, benzonitrile 1a was provided by professional suppliers and used as the starting material.Intermediate 2a was harvested via the reaction of benzonitrile 1a, magnesium chloride hexahydrate and sodium hydrosulfide hydrate in N, N dimethylformamide [26].Subsequently, intermediate 3a was afforded via the reaction of intermediate 2a and ethyl 3-bromopyruvate in ethanol [27].The corresponding intermediate 3a was reacted with hydrazine hydrate in methanol to synthesize intermediate 4a by a hydrazinolysis reaction [28].Finally, intermediate 4a was ring-cyclized with carbon disulfide and potassium hydroxide under 98% sulfuric acid under ice-salt bath conditions to obtain the target compound 5a [29].It was of great importance for the last step to keep the temperature at 0 • C. The synthesized target compounds were obtained with yields of 80-90%.The structures of all target compounds were confirmed utilizing 1 H NMR, 13 C NMR, and HRMS.All corresponding signals of protons and carbons were recorded in the 1 H NMR and 13 C NMR spectra.In the 1
Chemistry
The synthetic steps of the target compounds 5a-5p are outlined in Scheme 1.In brief, benzonitrile 1a was provided by professional suppliers and used as the starting material.Intermediate 2a was harvested via the reaction of benzonitrile 1a, magnesium chloride hexahydrate and sodium hydrosulfide hydrate in N, N dimethylformamide [26].Subsequently, intermediate 3a was afforded via the reaction of intermediate 2a and ethyl 3-bromopyruvate in ethanol [27].The corresponding intermediate 3a was reacted with hydrazine hydrate in methanol to synthesize intermediate 4a by a hydrazinolysis reaction [28].Finally, intermediate 4a was ring-cyclized with carbon disulfide and potassium hydroxide under 98% sulfuric acid under ice-salt bath conditions to obtain the target compound 5a [29].It was of great importance for the last step to keep the temperature at 0 °C.The synthesized target compounds were obtained with yields of 80-90%.The structures of all target compounds were confirmed utilizing 1 H NMR, 13 C NMR, and HRMS.All corresponding signals of protons and carbons were recorded in the 1 H NMR and 13 C NMR spectra.In the 1 H NMR spectra of target compounds 5a-5p, the signals around δ = 17.37-13.71ppm suggested the appearance of the N-NH group.The signals δ = 191.81-177.23 ppm in the 13 C NMR indicated the presence of the thione group (C=S).Additionally, the OCF3 was observed as a quartet with a large coupling constant.More detailed information is shown in the Supplementary Materials.
Antibacterial Activity
The preliminary antibacterial activities of the target compounds 5a-5p against R. solanacearum and Xoo were evaluated at concentrations of 200 and 100 µg/mL via the turbidimeter test.The results in Table 1 suggested that most of the target compounds exhibited moderate to remarkable antibacterial activities against R. solanacearum and Xoo.Among them, compounds 5b, 5h, 5i and 5k possessed excellent antibacterial activities against R. solanacearum at 100 µg/mL with inhibition rates of 92.00%, 93.81%, 94.00% and 100%, Scheme 1.The synthetic route of target compounds 5a-5p.
Antibacterial Activity
The preliminary antibacterial activities of the target compounds 5a-5p against R. solanacearum and Xoo were evaluated at concentrations of 200 and 100 µg/mL via the turbidimeter test.The results in Table 1 suggested that most of the target compounds exhibited moderate to remarkable antibacterial activities against R. solanacearum and Xoo.Among them, compounds 5b, 5h, 5i and 5k possessed excellent antibacterial activities against R. solanacearum at 100 µg/mL with inhibition rates of 92.00%, 93.81%, 94.00% and 100%, respectively, which were superior to those of compound E1 (79.77%) and the commercial agent Thiodiazole copper (70.22%).Meanwhile, compound 5k also performed good activity against Xoo at 100 µg/mL with an inhibition rate of 72.63%, which was higher than that of compound E1 (53.96%), but it had a certain gap compared with Thiodiazole copper (94.61%).Based on the preliminary screening results, the EC 50 values of compounds 5b, 5h, 5i and 5k against R. solanacearum were further tested.The results were statistically analyzed and shown in Table 2. Satisfactorily, the EC 50 values of all tested compounds ranged from 2.23 to 40.33 µg/mL.Notably, compounds 5h, 5i and 5k possessed excellent antibacterial activities against R. solanacearum, with EC 50 values of 6.66, 7.20 and 2.23 µg/mL, respectively, which were lower than those of compound E1 (EC 50 = 69.87µg/mL) and the commercial agent Thiodiazole copper (EC 50 = 52.01µg/mL).The EC 50 value of compound 5b was relatively large, but it was still lower than that of compound E1 and the Thiodiazole copper.
Antifungal Activity
The antifungal activities of the target compounds 5a-5p against four phytopathogenic fungi (S. sclerotiorum, R. solani, M. oryzae and C. gloeosporioides) were determined at 50 µg/mL utilizing the mycelial growth rate method.The results of antifungal activities were outlined in Table 3 and indicated that most of the target compounds displayed moderate to excellent antifungal activities against each of the test fungi.Particularly, compounds 5b, 5h, 5i and 5p possessed remarkable antifungal activities against S. sclerotiorum with the inhibition rates of 90.48%, 82.14%, 83.63% and 80.06%, respectively, which surpassed that of the commercial agent Thifluzamide (72.92%).Meanwhile, compound 5b also exhibited good activity against R. solani, M. oryzae and C. gloeosporioides, with the inhibition rates of 72.32%, 55.36% and 52.98%, respectively.The inhibition rates of compounds 5b, 5e, 5h, 5i and 5j against R. solani were more than 50%.Furthermore, selected compounds with activity higher than 80% were further tested for EC 50 values to evaluate their excellent antifungal activities more accurately.The results in Table 4 indicated that all of them showed more potent antifungal activities compared with Thifluzamide (EC 50 = 27.24µg/mL).Especially, the EC 50 value of compound 5b against S. sclerotiorum was 0.51 µg/mL, which was equivalent to that of the commercial fungicide Carbendazim (EC 50 = 0.57 µg/mL).From the pictures in Figure 3, it was displayed the antifungal effects of compound 5b and Carbendazim against S. sclerotiorum at different concentrations.It can be intuitively observed that the antifungal activity of compound 5b against S. sclerotiorum was equivalent to Carbendazim at the same concentration.
Preliminary Analysis of Structure-Activity Relationship (SAR)
The preliminary SAR results were deduced from the inhibitory activity data of the antibacterial and antifungal activities shown in Tables 1-4.The results indicated that the type and position of substituents on the benzene ring had a great impact on antibacterial and antifungal activities.Briefly, introducing an electron-withdrawing group at the metaposition can endow the final structure with more potent antibacterial activity.For example, the antibacterial activity of compound 5k (R 1 = 3-OCF3) was superior to that of com-
Preliminary Analysis of Structure-Activity Relationship (SAR)
The preliminary SAR results were deduced from the inhibitory activity data of the antibacterial and antifungal activities shown in Tables 1-4.The results indicated that the type and position of substituents on the benzene ring had a great impact on antibacterial and antifungal activities.Briefly, introducing an electron-withdrawing group at the metaposition can endow the final structure with more potent antibacterial activity.For example, the antibacterial activity of compound 5k (R 1 = 3-OCF 3 ) was superior to that of compound 5i (R 1 = 3-CH 3 ).In addition, introducing the same substituents at different positions of the benzene ring, the meta-position was of great benefit for improving antibacterial activity.The inhibition rates of compounds 5k (R 1 = 3-OCF 3 ), 5g (R 1 = 2-OCF 3 ), and 5o (R 1 = 4-OCF 3 ) against R. solanacearum at 100 µg/mL were 100%, 48.26% and 35.11%, respectively.Similarly, introducing an electron-withdrawing group could contribute to promoting antifungal activity, but the same substituent at the ortho-position of the benzene ring took a stronger advantage in improving activity.For instance, the inhibition rates of compounds 5b (R 1 = 2-F), 5h (R 1 = 3-F) and 5l (R 1 = 4-F) against S. sclerotiorum were 90.48%, 82.14% and 32.14%, respectively.Moreover, by introducing the ortho-fluoro group of the benzene ring, the antifungal activity of compound 5b (R 1 = 2-F) was more potent than that of compound 5c (R 1 = 2-Cl) and compound 5d (R 1 = 2-Br).
Fortunately, some of the target compounds possessed potent antibacterial and antifungal potencies against R. solanacearum and S. sclerotiorum, respectively, and the SAR results were summarized.In this study, we introduced an antibacterial active scaffold into the phenylthiazole active skeleton, hoping to further improve antibacterial activity while retaining antifungal activity as far as possible.Compounds 5k and 5b exhibited excellent activities against R. solanacearum and S. sclerotiorum with EC 50 values of 2.23 and 0.51 µg/mL, respectively.From the results, the purpose of the design has been preliminarily achieved, and it heralds that these compounds have the potential to serve as lead compounds.Regrettably, it is not the same compound that simultaneously possesses optimal antibacterial and antifungal activity.On the other hand, this study lacks further determination of bioactivity as well as the exploration of mechanisms.We will further optimize the structure to enhance activity and explore the mechanism of action in future work.
Materials and Instruments
A commercial bactericide of Thiodiazole copper, commercial fungicides Thifluzamide and Carbendazim were supplied by the College of Agriculture, Yangtze University.All reagents and solvents used in the experiment were provided by professional suppliers and utilized without further purification unless otherwise indicated. 1H NMR and 13 C NMR of all target compounds were performed on a AVANCE DPX 400 spectrometer (Bruker Co., Ltd., Fällanden, Switzerland) using tetramethylsilane (TMS) and dimethyl sulfoxided 6 (DMSO-d 6 ) as an internal standard and solvent, respectively (2.50 ppm for 1 H and 39.52 ppm for 13 C).High-resolution mass spectrometry (HRMS) data were acquired from Thermo Scientific Q Exactive (Thermo Fisher Scientific, Bremen, Germany).All reactions were indicated via thin-layer chromatography (TLC) utilizing silica gel 60 GF254 (Qingdao Hai Yang Chemical Co., Ltd., Qingdao, China).The melting point of the target compounds were measured by WRR melting point apparatus, provided by Shanghai Precision Scientific Instrument Co., Ltd., Shanghai, China.
General Procedure for the Preparation of Intermediate 2a
In a 250 mL flask, the compound 1a (5.0 g, 42.3 mmol) and MgCl 2 •6H 2 O (10.9 g, 53.4 mmol) were dissolved in N, N dimethylformamide (DMF, 20 mL), and stirred at room temperature for 15 min.Then, NaHS•H 2 O (5.9 g, 106.7 mmol) was added, and the reaction solution was maintained at room temperature for 16 h.After the reaction was completed (indicated by TLC), the solution was extracted 3-4 times directly with ethyl acetate and saturated salt water.Finally, the organic layers were dried with anhydrous sodium sulfate, filtered, and concentrated to acquire crude intermediate 2a [26].
General Procedure for the Preparation of Intermediate 4a
In a 250 mL flask, the intermediate 3a (3.0 g, 12.9 mmol) was dissolved in methanol (30mL), 80% hydrazine hydrate (3.9 g, 77.4 mmol) was slowly added.The mixture was reacted at room temperature for 4 h.After completion of the reaction (indicated by TLC), the ice water was added to the reaction solution until a white solid was precipitated out.Eventually, the crude product was further washed several times with water to afford intermediate 4a [28].
General Procedure for the Preparation of Target Compound 5a
Under the condition of an ice salt bath, 98% sulfuric acid (15 mL) was added to a 250 mL flask.When the temperature dropped below 0 • C, intermediate 4a (2.3 g, 7.2 mmol), potassium hydroxide (0.77 g 15.4 mmol), and carbon disulfide (1.95 g, 25.6 mmol) were added successively.The reaction was stirred for 2 h at 0 • C.After the reaction was completed (indicated by TLC), the solution was poured into a large amount of ice water to precipitate a white solid.This was the target product that we wanted.Subsequently, the target compound 5a was purified by recrystallizing with methanol and water [29].The chemical structures of the target compounds were accurately confirmed using 1 H NMR, 13
Antibacterial Activity Test In Vitro
The preliminary antibacterial activities of the target compounds 5a-5p against R. solanacearum and Xoo were assayed at 200 and 100 µg/mL by the turbidimeter test [30].Nutrient broth (NB) mediums containing 0.5% Dimethylsulfoxide (DMSO) and 0.1% Tween 80 served as blank control, whereas compound E1 and commercial agent Thiodiazole copper were used as positive controls.Firstly, each tested compound (20 mg) was dissolved in a sterilized tube with a mixture solution of DMSO (200 µL) and 0.1% Tween 80 (10 µL).The solution was diluted with sterile water and added to NB mediums to obtain the final concentrations (200, 100 µg/mL).Subsequently, NB mediums containing R. solanacearum and Xoo were added to the sterilized test tubes and then incubated on a shaker for 24-72 h at 30 • C and 180 rpm.Each treatment was tested in triplicate.The optical density at 595 nm was measured when the R. solanacearum or Xoo was in the untreated culture in the logarithmic phase.The inhibition rates were calculated via the following Formula (1).
Inhibition rate (%) = (C − T)/C × 100 (1) where C is the corrected turbidity value of the untreated NB medium, and T is the corrected turbidity value of the treated NB medium.The growth inhibition rates and the standard errors were calculated by Microsoft Excel 2016 (Version 16.0.17029.20028)software.
Antifungal Activity Test In Vitro
The antifungal activities of the target compounds 5a-5p against four plant pathogenic fungi (S. sclerotiorum, R. solani, M. oryzae and C. gloeosporioides) were evaluated at 50 µg/mL utilizing the mycelial growth inhibitory rate method [31].PDA mediums containing 0.5% DMSO and 0.1% Tween 80 were employed as a blank control, and the commercial fungicides Thifluzamide and Carbendazim served as positive controls.Firstly, each tested compound (15 mg) was dissolved in DMSO (200 µL) containing 0.1% Tween 80 and diluted with sterile water.Secondly, the prepared solution was added to sterile molten PDA mediums to obtain a final tested concentration (50 µg/mL).The PDA mediums containing the corresponding medicinal solution were poured into sterile Petri plates per plate (15 mL).Thirdly, the 7-mm-diameter mycelial discs of fungi were inoculated in the center of the PDA Petri plates after cooling.The inoculated medium was cultured at 26 ± 2 • C. Three replicates were conducted for each treatment.The mycelium diameters (mm) of each treatment were accurately measured when the blank control reached two-thirds of the Petri plates.The inhibition rates of the tested compounds were calculated by the following Formula (2).
Inhibition rate (%) = [(C 1 − T 1 )/(C 1 − 7 mm)] × 100 (2) where C 1 is the average colony growth diameter of the blank control, T 1 is the average colony growth diameter of treatment, and 7 mm is the diameter of mycelial discs.The growth inhibition rates and the standard errors were calculated by Microsoft Excel 2016 (Version 16.0.17029.20028)software.
Figure 1 .
Figure 1.The structures of thiasporine A and its derivatives.
Figure 1 .
Figure 1.The structures of thiasporine A and its derivatives.
Molecules 2024 , 13 Figure 2 .
Figure 2. Design strategy for the target compounds in this work.This figure was reproduced with permission from [15].
Figure 2 .
Figure 2. Design strategy for the target compounds in this work.This figure was reproduced with permission from [15].
H NMR spectra of target compounds 5a-5p, the signals around δ = 17.37-13.71ppm suggested the appearance of the N-NH group.The signals δ = 191.81-177.23 ppm in the 13 C NMR indicated the presence of the thione group (C=S).Additionally, the OCF 3 was observed as a quartet with a large coupling constant.More detailed information is shown in the Supplementary Materials.
Figure 2 .
Figure 2. Design strategy for the target compounds in this work.This figure was reproduced with permission from [15].
a
TA: Thifluzamide, b CB: Carbendazim.Each treatment was tested in triplicate.
Figure 3 .
Figure 3.In vitro antifungal activity of compound 5b against S. sclerotiorum.
Figure 3 .
Figure 3.In vitro antifungal activity of compound 5b against S. sclerotiorum.
Table 1 .
In vitro the inhibition rates of the target compounds against R. solanacearum and Xoo at 200 and 100 µg/mL.
a TC: Thiodiazole copper.Each treatment was tested in triplicate.
Table 2 .
EC 50 of some target compounds against R. solanacearum.
a TC: Thiodiazole copper.Each treatment was tested in triplicate.
Table 3 .
In vitro the inhibition rates of the target compounds 5a-5p against four fungi at 50 µg/mL.
a TA: Thifluzamide, b CB: Carbendazim.Each treatment was tested in triplicate.
Table 4 .
EC 50 of some target compounds against S. sclerotiorum.
Table 4 .
EC50 of some target compounds against S. sclerotiorum. | 5,216.4 | 2024-01-01T00:00:00.000 | [
"Chemistry",
"Biology",
"Environmental Science"
] |
High MLL2 expression predicts poor prognosis and promotes tumor progression by inducing EMT in esophageal squamous cell carcinoma
Background MLL2 has been identified as one of the most frequently mutated genes in a variety of cancers including esophageal squamous cell carcinoma (ESCC). However, its clinical significance and prognostic value in ESCC has not been elucidated. In the present study, we aimed to investigate the expression and role of MLL2 in ESCC. Methods Immunohistochemistry (IHC) and qRT-PCR were used to examine the expression profile of MLL2. Kaplan–Meier survival analysis and univariate and multivariate Cox analyses were used to investigate the clinical and prognostic significance of MLL2 expression in Kazakh ESCC patients. Furthermore, to evaluate the biological function of MLL2 in ESCC, we applied the latest gene editing technique CRISPR/Cas9 to knockout MLL2 in ESCC cell line Eca109. MTT, colony formation, flow cytometry, scratch wound-healing and transwell migration assays were performed to investigate the effect of MLL2 on ESCC cell proliferation and migration. The correlation between MLL2 and epithelial–mesenchymal transition (EMT) was investigated by Western blot assay in vitro and IHC in ESCC tissue, respectively. Results Both mRNA and protein expression levels of MLL2 were significantly overexpressed in ESCC patients. High expression of MLL2 was significantly correlated with TNM stage (P = 0.037), tumor differentiation (P = 0.032) and tumor size (P = 0.035). Kaplan–Meier survival analysis showed that patients with low MLL2 expression had a better overall survival than those with high MLL2 expression. Multivariate Cox analysis revealed that lymph node metastasis and tumor differentiation were independent prognostic factors. Knockout of MLL2 in Eca109 inhibited cell proliferation and migration ability, induced cell cycle arrest at G1 stage, but it had no significant effect on apoptosis. In addition, knockout of MLL2 could inhibit EMT by up-regulation of E-Cadherin and Smad7 as well as down-regulation of Vimentin and p-Smad2/3 in ESCC cells. In cancer tissues, the expression of E-Cadherin was negatively correlated with MLL2 expression while Vimentin expression was positively correlated with MLL2 expression. Conclusion Our results indicate that overexpression of MLL2 predicts poor clinical outcomes and facilitates ESCC tumor progression, and it may exert oncogenic role via activation of EMT. MLL2 may be used as a novel prognostic factor and therapeutic target for ESCC patients.
Introduction
Esophageal cancer is one of the most common malignant tumors with high incidence and mortality worldwide (Ferlay et al. 2015). Esophageal squamous cell carcinoma (ESCC) accounts for the most of the esophageal cancers and is the fourth leading cause of death from cancer in China (Lin et al. 2013). Xinjiang is one of the high-risk areas in China, where the incidence of ESCC in Kazakh minority is significantly higher than the national average (Zheng et al. 2010). Despite the advances in treatment of ESCC, the 5-year 1 3 overall survival rate is still very poor. Deep invasion and metastasis are main reasons for poor prognosis of ESCC and it is important to elucidate the underlying mechanisms to improve the outcomes of ESCC patients.
MLL2 (also known as KMT2D/ALR/MLL4), which is located to 12q12-13, encodes a histone methyltransferase that is mainly responsible for the methylation of histone H3 lysine 4 (H3K4), and plays an important role in epigenetic regulation of gene transcription (Bögershausen et al. 2013;Ruthenburg et al. 2007). Recently, many exome sequencing studies have revealed the MLL2 gene as one of the most frequently mutated genes in a variety of human cancers, including follicular lymphoma, diffuse large B-cell lymphoma, renal carcinoma, prostate cancer, bladder carcinoma, gastric carcinoma, breast cancer, lung carcinomas (Dalgliesh et al. 2010;Grasso et al. 2012;Gui et al. 2011;Morin et al. 2011;Pasqualucci et al. 2011;Pleasance et al. 2009;Stephens et al. 2012;Zang et al. 2012), suggesting that MLL2 may play an important role in tumorigenesis in a variety of tumors. As most of the mutations were inactivated and predicted to produce protein products without the key methyltransferase domain, it was considered as a tumor-suppressor (Morin et al. 2011;Parsons et al. 2011;Pasqualucci et al. 2011). However, studies on its role in some cancers showed contradictive results that whether it is an oncogene or tumorsuppressor gene remains to be further elucidated (Guo et al. 2013;Issaeva et al. 2007;Zhang et al. 2015).
MLL2 was also found to be frequently mutated in ESCC and conjectured as a tumor suppressor due to the inactivated mutations (Gao et al. 2014;Song et al. 2014). However, the role of MLL2 in ESCC remains unknown. In this study, we examined the expression level of MLL2 and evaluated its prognostic value in ESCC patients. Moreover, we knocked out MLL2 in Eca109 cells by CRISPR/Cas9 gene editing system to further explore the role of MLL2 and the possible mechanism underlying its involvement in ESCC cell progression, and further confirmed the result of in vitro study by IHC in cancer tissues.
Patients and samples
To investigate the mRNA levels and protein expression of MLL2 in Kazakh patients with ESCC, we selected 42 samples for qRT-PCR and 67 samples for immunohistochemistry (IHC), respectively. All the patients underwent curative surgical resection at the department of Thoracic Surgery of the First Affiliated Hospital, Medical University of Xinjiang, China, and confirmed by histopathology. For PCR, the tissue samples were obtained during surgery and frozen immediately after resection, then stored at − 80 °C until use.
Paraffin-embedded ESCC tissue sections were acquired for IHC from the pathology Department. Both tumor samples and matched adjacent normal tissues (≥ 5 cm away from the tumor) were available for each patient. None of the patients received preoperative chemotherapy, radiotherapy or other cancer-related treatments. The disease stage of the ESCC patients were determined based on the TNM classification of AJCC Cancer Staging Manual (7th edition). Other relevant clinicopathological information was available for all the patients. All patients were enrolled with written informed consents, and the study was approved by the Ethical Committee of the Affiliated Hospital of Xinjiang Medical University.
Due to the limitation of samples, for the IHC staining of E-cadherin, Vimentin and Smad7, we selected 26 samples for each group, respectively. All the 26 cases were among the ESCC samples mentioned above that used for MLL2 expression.
RNA Extraction and qRT-PCR
The total RNA was extracted from fresh frozen tissues with TRIzol reagent (Invitrogen Life Technologies, CA, USA). Purity and concentration of RNA were detected by Nan-oDrop ND 1000. RNA was reversely transcribed into cDNA using prime SCRIPT™ RT-PCR kit (TaKaRa, Dalian, China). The qRT-PCR analysis was performed on an IQ5 system (Bio-Rad, USA) with SYBR Green reagents (Boster, Wuhan, China) according to the manufacturer's instructions. β-actin was used as an internal reference for normalization and the relative expression of MLL2 mRNA was evaluated by the 2 − ΔΔCT method. The primers used in this study were as follows: MLL2, forward: 5′-TGA CAA GTG TGA ATC CCG TGAAG-3′ and reverse: 5′-AAC CAT TTC ATC CGT TGT TACG AAG-3′; β-actin, forward: 5′-ATG ATG ATA TCG CCG CGC TC-3′ and reverse: 5′-TCG ATG GG GTA CTT CAGGG-3′.
Immunohistochemistry (IHC)
Protein expression of MLL2, E-Cadherin, Vimentin and Smad7 was performed by the IHC and it was followed as previously described (Gambichler et al. 2016). In brief, 4 µm tissue sections were cut from the paraffin-embedded blocks and transferred to glass slides. Then the slides were incubated in 60 °C for 1 h, followed by deparaffinization and hydration with xylol and gradient alcohol, respectively. Then the antigen retrieval step was performed by heating in a microwave oven at high fire mode for 5 min in citrate buffer (pH 6.0). After cooled at room temperature, the sections were treated with 3% hydrogen peroxide for 10 min to remove the endogenous peroxidase. Subsequently, anti-MLL2 goat polyclonal antibody (ab15962, Abcam, Cambridge, USA; dilution 1:100), anti-E-Cadherin mouse monoclonal antibody (ab76055, Abcam; dilution 1:300), anti-Vimentin rabbit monoclonal antibody (ab76055, Abcam; dilution 1:500), anti-Smad7 mouse monoclonal antibody (ab55493, Abcam; dilution 1:500) were used to incubate the sections as the primary antibody overnight at 4 °C in a moist chamber. Polink-2 plus HRP Detection Kit was used as the secondary antibody following the manufacturer's instructions. The sections were washed with TBST (Tris-buffered saline with Tween) for 3 × 5 min after each step mentioned above. Finally, DAB was used to visualize immunoreactivity and counterstained with hematoxylin. The primary antibody was replaced by TBST as negative control.
IHC staining results were blindly evaluated by two independent pathologists. For MLL2, nuclear staining of MLL2 was considered as positive. And > 50% positive tumor nuclei was defined as high expression while ≤ 50% positive tumor nuclei was defined as low expression.
For E-cadherin, Vimentin and Smad7, the IHC staining was scored by combination of the percentage and intensity of positively stained tumor cells (Goumans et al. 2014). The scoring criterion for percentage of positive stained cells was as follows: 0, < 5%; 1, 6-25%; 2, 26-50%; and 3, > 50%. The scoring criterion for staining intensity: 0, negative; 1, weak staining; 2, moderate staining; and 3, strong staining. The final score was determined by multiplying the percentage score and the staining intensity score (the total score ranging from 0 to 9).
Cell culture and transfection
The human ESCC cell line Eca109 was purchased from WuHan University (Hubei, WuHan, China) and cultured in RPMI-1640 medium plus 10% fetal bovine serum and penicillin and streptomycin in a 5% CO 2 humidified incubator at 37 °C.
CRISPR/Cas9 genome-editing technique can induce frame shift mutations at specific sites in the genome through a synthetic sgRNA that result in a loss-of-function allele. And it has been reported to be a very efficient way to knock out genes (Ran et al. 2013;Shalem et al. 2014). To investigate the biological role of MLL2 in ESCC, we used the CRISPR/Cas9 gene editing system to generate MLL2knockout cells in ESCC cell line Eca109. First, Eca109 cells were transfected with Lenti-Cas9 lentivirus and screened with puromycin to select the cells that stably expressed Cas9 after 3 days of transfection. Then, the selected cells were transfected with the sgRNA lentivirus and the EGFP expression was detected with a fluorescent microscope (Olympus, Tokyo, Japan). The cells were collected for further experiment when the transfection efficiency reached greater than 80%. The detailed CRISPR/Cas9 knockout procedure is described in supplementary material.
MTT assay
MTT assay was used to evaluate the effect of MLL2 knockout on cell proliferation of Eca109 cell line. Briefly, cells were seeded into a 96-well plate at a concentration of 2000 cells/well. After growing for 24, 48, 72, 96 and 120 h, cells were treated with 20 µl MTT solution (Genview, JT343, 5 mg/ml). The medium was removed and replaced with 100 µl DMSO to dissolve formazan precipitates after incubated at 37 °C for 4 h. Then the OD value at 490 nm was measured with a microplate reader (Tecan infinite, M2009PR).
Colony formation assay
Cells were inoculated in 6-well plates at a density of 600 cells/well and the culture medium was replaced every 3 days. After growing for 10 days, cells were fixed with 4% paraformaldehyde and stained with Giemsa staining solution (Dingguo Biotechnology Co., Ltd. Shanghai). The number of clones was counted manually.
Apoptosis assay and cell cycle analysis
The apoptosis assay was detected by Annexin V-APC (eBioscience, cat.No. 88-8007) single staining method in the fifth day after transfection. Briefly, cells were washed with cold D-Hanks solution and binding buffer, respectively. Then cells were resuspended with 200 µl binding buffer and incubated for 15 min at room temperature avoided light after adding 10 µl Annexin V-APC. Cell apoptosis of the stained cells were measured by flow cytometry (Millipore, Guava easyCyte HT).
For cell cycle analysis, cells were washed with cold D-Hanks solution and fixed with 75% cold ethanol for 2 h. Then the cells were washed and resuspended with staining solution containing propidium iodide (PI, 50 µg/ml) and RNase (50 µg/ml). The cell cycle was analyzed by flow cytometry and the distribution of different phases (G1, S and G2/M) was measured.
Scratch wound-healing assay and migration assay
Wound-healing assay and transwell migration assay were used to assess the cell migration ability. For wound-healing assay, cells (5 × 10 4 ) were seeded in a 96-well plate and grew till they reached more than 90% confluence. A 96-well mechanical floating pin tool (VP Scientific, VP-408FH) was used to make the scratches. A fluorescent microscope (Olympus, Tokyo, Japan) was used to take the images at appropriate time (0, 24, 48 h).
For transwell migration assay, cells (1 × 10 5 ) were seeded in the upper chamber of a 24-well plate (Corning) and cultured with 100 µl serum-free medium. 600 µl of culture medium containing 30% FBS was added to the lower chamber. After incubated for 36 h, the non-migrated cells that remained in the upper chamber were erased by a cotton swab. The migrated cells in the lower chamber were fixed and stained with Giemsa. Pictures of nine random fields (magnification, × 200) were taken with an inverted microscope and counted. ALL the experiments were repeated three times.
Statistical analysis
SPSS 21.0 software (SPSS Inc., Chicago, IL) and GraphPad Prism 5.0 were used for data analysis and figure process. The results of the experiments were presented as mean ± SEM. Chi-square test or Fisher's exact test, one-way analysis of variance, as well as Student's t-test were used as appropriate. The survival time started from the date of surgery to death or the last follow-up date. Survival analysis was evaluated by Kaplan-Meier and log-rank test. Further multivariate analysis was performed by Cox proportional hazards regression model to identify the independent risk factors for ESCC. All the analyses were two-sided test and considered statistically significant at a P < 0.05 level.
Expression of MLL2 is up-regulated and associated with clinicopathological factors in ESCC patients
First, the mRNA expression of MLL2 was assessed by qRT-PCR in 42 ESCC tissues and paired adjacent normal tissues. As shown in Fig. 1a, mRNA expression of MLL2 was significantly up-regulated in ESCC compared with adjacent normal tissues (P < 0.001). Then, we examined the protein expression status of MLL2 by IHC. The staining results indicate that MLL2 mainly expressed in nuclear, cytoplasmic staining was considered non-specific and not included in the evaluation (Juhlin et al. 2015) (Fig. 1b). Among the 67 ESCC tissues, high expression rates of MLL2 in tumor and adjacent normal tissues were 43.3% (29/67) and 11.9% (8/67), respectively (Table 1). A significant overexpression of MLL2 was found in tumor tissues in contrast to adjacent normal tissues (P < 0.05).
We further examined the correlation of MLL2 protein expression and the clinicopathological characteristics as shown in Table 1. The results demonstrated that high MLL2 expression was significantly correlated with TNM stage,
High expression of MLL2 predicts poor prognosis in ESCC patients
The statistical analysis of overall survival was performed by Kaplan-Meier method. The result showed that the patients with low MLL2 expression had a better prognosis than those with high MLL2 expression (P = 0.011, Log-rank test, Fig. 2). In addition, the univariate Cox regression analysis showed that lymph node metastasis, depth of invasion, tumor differentiation and MLL2 expression were significantly associated with overall survival of ESCC patients (Table 2). Multivariate Cox analysis was used to further evaluate the prognostic factors of ESCC and the results revealed that lymph node metastasis and tumor differentiation were independent prognostic factors.
Knockout of MLL2 suppresses ESCC cell proliferation in vitro
The effect of MLL2 on Eca109 cell proliferation was determined. The MTT assay and colony formation assay results showed that knockout of MLL2 significantly reduced the proliferation ability of Eca109 cells compared with negative control (P < 0.05, Fig. 3a, b). Furthermore, we assessed the effect of MLL2 on cell apoptosis and cell cycle by flow cytometry. As shown in Fig. 3c, MLL2 knockout significantly changed cell cycle distribution. The percentage of the cells in S phase was significantly decreased (KO, 19.71%; NC, 32.38%; P < 0.01) while it was increased in G1 phase (KO, 52.58%; NC, 44.21%, P < 0.01) in the MLL2 knockout cells compared with the negative control group. In addition, knockout of MLL2 slightly increased the apoptosis rate of Eca109 cells, but there was no statistical difference between the two groups (P > 0.05, Fig. 3d). This result indicated that knockout of MLL2 could inhibit cell cycle progression by inducing cell cycle arrest at G1 stage, but did not significantly alter cell apoptosis. Taken together, these data suggested that knockout of MLL2 suppressed proliferation of ESCC cells.
Knockout of MLL2 inhibits ESCC cell migration
We used scratch wound-healing assay and transwell migration assay to examine the effect of MLL2 on cell migration. First, we performed wound-healing assay and the results showed that knockout of MLL2 significantly attenuated the migration ability of Eca109 cells (P < 0.05, Fig. 4a). The transwell migration assay also revealed that MLL2 knockout group markedly weakened the migration ability of Eca109 cells (P < 0.01, Fig. 4b), further confirmed the result of wound-healing assay. These results indicated that MLL2 could promote ESCC cell migration in vitro.
MLL2 facilitates EMT in ESCC in vitro
To determine whether MLL2 was associated with EMT in ESCC cells, we examined the expression of EMT-related genes (E-Cadherin and Vimentin) by Western blot. The results showed that knockout of MLL2 significantly increased the expression of E-Cadherin and decreased the expression of Vimentin (P < 0.01, Fig. 4c). These results suggested that MLL2 may induce EMT in ESCC cells. TGF-β/Smad signaling pathway is a key inducer of EMT. We further examined the expression of the genes involved in the TGF-β/Smad signaling pathway (Smad7, Smad2/3 and p-Smad2/3) to determine whether MLL2 mediated EMT via Smad signaling pathway. The results showed that the expression of Smad7 and Smad2/3 was markedly increased, while p-Smad2/3 expression was decreased in MLL2 knockout group than the control group (P < 0.01, Fig. 4d). Taken together, these data suggested that MLL2 may induce EMT in ESCC cells via activating the TGF-β/ Smad signaling pathway.
MLL2 expression were associated with EMT in ESCC tissue
To confirm the results of the in vitro study, we further investigated the correlation of MLL2 expression and EMT in ESCC tissues. Among the 26 cases used for IHC staining of E-Cadherin, Vimentin and Smad7, there were 11 patients with MLL2 high expression and 15 patients with MLL2 low expression. We calculated the mean score of IHC staining of the above proteins in each group and evaluated their correlation with MLL2 expression. High IHC staining score represented high expression. Representative IHC staining images of E-Cadherin, Vimentin and Smad7 were shown in Fig. 5. The positive staining of E-Cadherin was mainly located in the membrane (Fig. 5a, b). And E-cadherin expression was significantly down-regulated in ESCC. Moreover, the expression of E-cadherin in MLL2 low expression group was significantly higher than MLL2 high expression group (Table 3). The positive staining of Vimentin was mainly located in the cytoplasm (Fig. 5c, d). In contrast to E-cadherin expression, Vimentin was significantly up-regulated in ESCC. And the expression of Vimentin in MLL2 high expression group was significantly higher than MLL2 low expression group (Table 3). These results indicated that MLL2 was positively associated with EMT in ESCC, consistent with the results of in vitro study by Western blot. As for Smad7, the positive staining was observed both in the cytoplasm and nucleus (Fig. 5e, f). The expression of Smad7 was significantly down-regulated in ESCC. However, its expression did not show significantly difference between MLL2 high expression and low expression group (Table 3).
Discussion
In the present study, we examined the expression status of MLL2 in ESCC patients and found that both mRNA and protein levels of MLL2 exhibited significantly higher expression in tumor tissues than adjacent normal tissues. The high expression of MLL2 was closely associated with worse clinical outcomes in ESCC patients. In addition, MLL2 promoted the proliferation and migration abilities of ESCC cells by inducing EMT. The extensive mutation of MLL2 suggests that it may be involved in the development of various cancers. Zhang et al. (2015) found that knockdown of MLL2 at the early stage of B cell development could lead to an increase in germinal-center (GC) B cells and enhanced B cell proliferation in mice, ultimately resulted in the occurrence of GC-derived lymphomas similar to human tumors, suggesting a tumor suppressor role for MLL2. However, studies in solid tumors such as breast and colorectal cancer emerged contradictory results. Knockdown of MLL2 in Hela cells significantly altered the growth characteristics resulting in reduced proliferation and migration capacity, and decreased tumorigenicity in mice (Issaeva et al. 2007). Another study involving colorectal and medulloblastoma cancer cell lines showed a similar result (Guo et al. 2013). These studies collectively indicate that MLL2 may have distinct roles in different tumors and its biological consequences are dependent on cancer type.
MLL2 has been found to be involved in tumor progression and associated with poor prognosis in several cancers. However, to our knowledge, the clinical significance and biological function of MLL2 in ESCC remains unknown. Juhlin et al. (2015) found that MLL2 expression was upregulated in pheochromocytoma (PCC) compared to normal adrenals, and overexpression of MLL2 positively affected cell migration. In addition, PCCs with MLL2 mutations exhibited significantly larger tumor size than those with other gene mutations. Another study in gastrointestinal diffuse large B-cell lymphoma showed that high expression of MLL2 was associated with higher clinical stage and poor patient survival (Ye et al. 2015). High level of MLL2 was also associated with poor prognosis in breast cancer (Kim et al. 2014). In consistent with these results, we also found that MLL2 expression was significantly higher in tumor tissues than adjacent normal tissues in ESCC patients. And the high expression of MLL2 was correlated with TNM stage, tumor differentiation and tumor size. On the other hand, though other malignancy risk factors such as tumor invasion, lymph node metastasis and vascular invasion showed no significant relation with MLL2 expression, there was a tendency that patients with deeper invasion, lymph node metastasis and vascular invasion appeared to have a higher expression rate of MLL2. These results indicated that MLL2 may be involved in tumor malignancy in ESCC. In addition, we found that MLL2 expression was negatively associated with patient survival. Patients with high MLL2 expression had significantly poorer overall survival than those with low MLL2 expression, suggesting that high MLL2 expression may serve as a predictive marker of poor prognosis and may be a potential therapeutic target in ESCC.
To further explore the biological role of MLL2 in Eca109 cells, we used the CRISPR/Cas9 gene editing system to knock out MLL2 in Eca109 cells. Then we investigated the effects of MLL2 on Eca109 cell proliferation and migration. The results showed that knockout of MLL2 significantly reduced the proliferation ability of Eca109 by arresting the cell cycle in the G1 phase rather than affecting cell apoptosis. The results of scratch woundhealing assay and tranwell migration assay also indicated that knockout of MLL2 attenuated the migration ability of Eca109 cells, consistent with the previous studies (Guo et al. 2013;Issaeva et al. 2007). Therefore, knockout of MLL2 inhibited the cell growth and migration of Eca109 cells. In other words, overexpression of MLL2 may promote cell growth and metastasis in ESCC cells, which may lead to the unfavorable prognoses in ECSS patients.
EMT has been confirmed to play an important role in tumor progression and TGF-β/Smad signaling is a key inducer of EMT (Xu et al. 2009). Guo et al. (2013) found that knockout of MLL2 in colorectal cancer cells could lead to altered expression of a variety of genes, including the decreased expression of Vimentin and increased expression of Smad7, which is the negative regulator of TGF-β/Smad signaling through blocking the phosphorylation of Smad2/3 (p-Smad2/3) (Luo et al. 2014). Therefore, we speculated that MLL2 might promote ESCC cell metastasis through EMT via regulating the Smad signaling pathway. Our data showed that the expression levels of E-Cadherin and Smad7 were increased while the expression levels of Vimentin and p-Smad2/3 were decreased in MLL2 knockout group than the control group, which illustrated that knockout of MLL2 attenuated the EMT process and inhibited the TGF-β/Smad signaling. The IHC results in tissue also showed that MLL2 expression was inversely correlated with E-cadherin and positively correlated with Vimentin. Though Smad7 expression did not differ significantly between MLL2 high expression and low expression group, the expression score of Smad7 was higher in MLL2 low expression group than MLL2 high expression group, consistent with the results of in vitro study. These results indicated that MLL2 might induce EMT through activating the TGF-β/Smad signaling pathway, and contribute to the subsequent cancer progression in ESCC. The possible underlying mechanism might be that MLL2 down-regulated Smad7 expression, which led to the hyperactivation of TGF-β/Smad signaling and the promotion of cancer progression. However, our sample size was relatively small, and the patients enrolled our study only included Kazakh minority patients. Therefore, further studies involving more samples and different ethnic groups as well as more ESCC cell lines would be necessary to further validate our results.
In summary, we found that expression of MLL2 was upregulated in ESCC patients, and high expression of MLL2 was significantly correlated with worse clinical outcomes. MLL2 may play an oncogenic role as a negative prognostic factor for patients with ESCC. We also found that knockout of MLL2 not only inhibited the proliferation and migration, but also suppressed the EMT process of ESCC cells. These findings suggest that MLL2 may be used as a novel prognostic factor and therapeutic target for ESCC.
Funding This study was supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2016B03054).
Compliance with ethical standards
Conflict of interest All the authors declare that they have no conflict of interest.
Ethical approval This study was approved by the Ethical Committee of the First Affiliated Hospital of Xinjiang Medical University. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 6,154.4 | 2018-03-12T00:00:00.000 | [
"Biology",
"Medicine"
] |
Multi-Output Sequential Deep Learning Model for Athlete Force Prediction on a Treadmill Using 3D Markers
: Reliable and innovative methods for estimating forces are critical aspects of biomechanical sports research. Using them, athletes can improve their performance and technique and reduce the possibility of fractures and other injuries. For this purpose, throughout this project, we proceeded to research the use of video in biomechanics. To refine this method, we propose an RNN trained on a biomechanical dataset of regular runners that measures both kinematics and kinetics. The model will allow analyzing, extracting, and drawing conclusions about continuous variable predictions through the body. It marks different anatomical and reflective points (96 in total, 32 per dimension) that will allow the prediction of forces (N) in three dimensions ( F x , F y , F z ), measured on a treadmill with a force plate at different velocities (2.5 m/s, 3.5 m/s, 4.5 m/s). In order to obtain the best model, a grid search of different parameters that combined various types of layers (Simple, GRU, LSTM), loss functions (MAE, MSE, MSLE), and sampling techniques (down-sampling, up-sampling) helped obtain the best performing model (LSTM, MSE, down-sampling) achieved an average coefficient of determination of 0.68, although when excluding F z it reached 0.92.
Introduction
The development of reliable methods to estimate generated forces during physical tasks is critical in the field of biomechanical sports research [1]. The study and analysis of forces exerted by athletes when conducting their activities can improve performance and technique, and reduce the risks of injuries and fractures [2]. Wrongly executed selfperformed movements in sports might result in unexpected injuries and/or fractures, for which the causes include bad posture [3], inadequate technique [4], generation of dangerous biomechanical forces in joints [5], and impact [6], among others.
To prevent undesired injuries, the refinement of methods to estimate biomechanical forces in a reliable manner is needed. Traditional setups of force analyses in biomechanics include the use of force platforms to obtain three-dimensional force measurements when jumping [7], walking, running [8]; flexible force sensors (FFS) for gait detection and sport training [9]; inertial measurement units (IMUs) [10]; and strain sensors [11].
Although these sensors provide insight into the generated forces to some extent, it is limiting due to the confined space (platform) and the limitation of movement caused by the attachment of sensors. A less limiting approach is the use of video tools to estimate the generation of forces in physical tasks; however, its progress in biomechanical research has been relatively slow when compared to its use in the fields of robotics [12,13] and automated navigation [10]. According to a recent review on video-based biomechanics tools for injury assessment in sports, there is a gap in the development of real-time applications in this field [14]. The aim and scope of this study was to demonstrate the results of a proposed framework using a database of physical and biomechanical markers of joints during treadmill exercises.
The use of video for biomechanics research is not new; there have been various studies analyzing biomechanical parameters from videos of athletes. Two sub-fields exist when discussing video-based biomechanical tools: marker and markerless analysis. Markerbased-tools rely on placing markers onto the athlete's joints to track them in the video (and thenceforth process relevant information); markerless research is not constrained by this and instead uses artificial vision strategies to identify the joints in the video [10,15].
Although more complex (computationally speaking), markerless analysis is better suited for real-life scenarios (e.g., daily activities outside the laboratory). Although some studies reported biomechanical sports research using both methods, the vast majority focused on obtaining joint angles (to assess and correct technique [16], or risk of injuries [4]) and the position/speed of relevant markers [17] and did not analyze the generated forces in a detailed manner.
A recurrent neural network (RNN) is a type of artificial neural network (ANN) which is composed of a series of neurons that learn from data and adapt to it by changing their weights, hence being dynamic in the feed-forward process. More specifically, RNNs have the peculiarity that their structure allows them to learn temporal sequences [18]. This, in turn, makes them the gold standard for analyzing biomechanics, kinetics, and kinematics in a three-dimensional motion environment. Recent neuroscience findings have suggested that motor cortices can be modeled as a dynamical system. This perspective is opposite to the traditional view that neural activities in motor cortices represent the kinetic parameters of movements [19].
The adoption of neural networks allows the computation of joint contact force and muscle force via musculoskeletal modeling techniques, which offer valuable metrics for describing movement quality, thereby having numerous clinical applications and research applications. Critical applications include exercise-induced fatigue (considered one of the essential factors leading to musculoskeletal disorders), detection of fall-risk-linked balance impairment, and the ability to measure movement behavior through "skeletonized" data and 3D shapes.
Image analysis in [20] is used to extract biomechanical features in swimming; this is a common practice for stroke correction, technique analysis, and as a visual aid for the athletes. Due to how different the conditions in the water are, this leads to problems such as blurred vision from the camera and bubbles from cavitation effects. Image analysis techniques are fundamental to enhancing clarity and overcoming this problem, and to automating the detection of limb segments related to important metrics.
Fall prediction could use markers' positions [21] or bone maps estimations via a deep learning (DL) approach, composed of a convolutional neural network (CNN) with 97.1% accuracy [22]. In [23], an ANN predicted lower extremity joint torque based on ground reaction force (GRF) features, using the GRF and related parameters derived by the GRF during counter-movement jump (CMJ) and squat jump (SJ), calculating joint torque using inverse dynamics and an ANN.
Based on a database of 28 regular runners, physical marker positions based on 32 anatomical and reflective markers, resulting in 96 tags, were sampled at 150 Hz. A model used these markers to predict three-dimensional forces (F x , F y , F z ) sampled at 300 Hz. However, due to differences between sampling frequencies, for each marker datum (source), two force data were given (target), making it impossible to merge both datasets and generate a prediction using a model. In this sense, two methods solved data granularity to fit a machine learning (ML) model, thereby matching both data streams and merging them into a single dataset, divided into down-sampled (force) and up-sampled (positions).
In the sections of this report: • We describe the methodology of the study: a description of the database and variables used (in the analysis), the architectures of the evaluated models, and their performance tests, in Section 2.
•
The results from the proposed analysis are shown in Section 3. • In-depth discussion going into aspects such as future work and limitations is in Section 4. • Finally, this paper concludes in Section 5 with the discoveries of this work.
Dataset
A public dataset was the primary resource for processing and analysis in this work. This public dataset from [8] consists of a sample of 28 regular runners, which had familiarity and comfort with running on a treadmill, weekly mileage greater than 20 km, and a minimum average running pace of 1 km in 5 min during 10-km races. The dataset contains 48 anatomical and reflective markers (mm), of which 20 are anatomical, 20 technical, and 8 technical/anatomical; these were tracked using a 3D motion-capture system of 12 cameras, which had 4 Mb resolution, and Cortex 6.0 software (Raptor-4, Motion Analysis, Santa Rosa, CA, USA).
During a trial, researchers recorded only data from technical and technical/anatomical markers. In contrast, most anatomical markers were used for a standing calibration trial during the calibration phase, except for the first and fifth metatarsal: (R.MT1, L.MT1) and (R.MT5, L.MT5), respectively; thus, all fully available 20 + 8 + 4 = 32 markers were considered as source features when using their three dimensions. These resulted in 32 × 3 = 96 features. Unfortunately, as shown in Section 3, it was found that some markers (2 anatomical and 1 technical/anatomical) have missing values in the dataset; these are: left 1st metatarsal (L.MT1), right 1st metatarsal (R.MT1), and left anterior superior lilac spine (L.ASIS); these 9 (3 × 3) features were removed from the initial source features considered; hence, data from only 87 features were used (96 − 9 = 87).
Physical marker protocol in Figure 1-the figure created by the authors of the dataset [8]. Table 1 is a descriptive table of each marker used, ans was also extracted from the dataset's authors [8]. By only considering technical (20), technical/anatomical (8), and 4 anatomical markers, the dataset then contained recording data from 32 markers in 3 dimensions (XYZ), which resulted in a total of 96 marker positions (32 × 3 = 96), sampled at a frequency of 150 Hz and used as source features. A predictive model would use such features to predict the target features, which were forces (N) in three directions (F x , F y , F z ), sampled at a frequency of 300 Hz. An instrumented, dual-belt treadmill (FIT, Bertec, Columbus, OH, USA) collected these forces while the subjects performed the requested trials. Trials consisted of three phases: the subject walking at 1.2 m/s for 1 min; then the treadmill increased its speed incrementally to 2.5 m/s, and after 3 min, data were recorded for 30 s, which was repeated at speeds of 3.5 m/s and 4.5 m/s; after the trials, the speed was set back to 1.2 m/s for a 1-min cool-down period before stopping completely.
The sampling frequency of target features was doubled to 300 Hz, compared to the sampling frequency of source features of 150 Hz, generating more data for the target features concerning source features. Thus, for each sample in the markers (source) dataset, two examples are in the forces (target) dataset, making it impossible to merge both datasets and create a prediction using a model. Two methods solved the data granularity issue, matched both datasets, and combined them into a single dataset to fit an ML model: • Down-sampling: Given that the forces dataset has double the samples of the markers dataset, index numbers could eliminate odd index numbers and thus cut the dataset by half. This sequential elimination of samples removed data from the doubled dataset but allowed us to join both datasets without bias in predicting a sample using the current data. • Up-sampling: Given that the markers dataset has half the samples of the forces dataset, blank numbers were created in odd indices, thereby expanding the dataset with empty data to predict the missing values. In this sense, linear interpolation made the dataset, given a series of odd numbers and predictions for even numbers, having one valid number, one blank number, and one valid number. This method can introduce bias due to the creation of new points, and they might not follow a linear trend as assumed, although it conserves all the data.
The dataset is composed of forces and 3D marker locations of running trials for multiple treadmill velocities, 2.5, 3.5, and 4.5 m/s for each regular runner. The total number of subjects determined the data division, based on a 70:10:20 split for training, validation, and testing datasets. Considering the total number of participants, n = 28, and the aforementioned data-split, n training = 19, n validation = 3, n testing = 6. Figure 2 represents the applied data division, where proportional data from each velocity were extracted based on randomly sampled subject ID, and the original seed was kept for reproduction purposes. The model's performance was obtained using a single data split, with fixed participants' IDs assigned to each dataset independently of treadmill velocity, thereby ensuring the three datasets had equal data proportions of each speed.
Loss Functions
In order to measure the performance of continuous variable predictions, a set of evaluation metrics were used as loss functions so that the DL-based model's performance could be evaluated, and hence its weights updated. A cost function compares a model's predictions and the actual data; in a regression model, the function computes the distances between reference and predicted values [27]. The objective is to minimize the evaluation metrics, as they measure the differences between the reference values and their predictions.
The first loss function used was the MAE, which represents the average absolute of the differences between predicted values and observed values, calculated as shown in Equation (1), where the difference between the reference value y i and the predicted valueŷ i is summed over n testing samples. MAE follows the L1-norm of normalization or the Manhattan norm, which measures the distance between two points by summing the absolute differences between measurements in all dimensions [28].
Another metric used was the MSE, which represents the average of squares of the differences between predicted values and observed values, calculated as shown in Equation (2), where the squared difference between the reference value y i and the predicted valueŷ i is summed over n testing samples. MSE follows the L2-norm of normalization or the Euclidean norm, which measures the Euclidean distance between two points in a given vector space; this distance is significantly affected by outliers when compared to the L1-norm, which takes the absolute value of differences between two points as 2 > [28].
Finally, MSLE measures the ratio between predicted and observed values; it is quite different from the other metrics because it measures the percent of difference and thus is better for extreme values [28]; it is calculated as shown in Equation (3), where the squared difference between the logarithm of the reference value y i , and the predicted valueŷ i is summed over n testing samples-regularized using the log function on predictions and reference values. The +1 operation removes errors due to log being a 0-sensitive function. In addition, negative values are not possible in this metric (one of the reasons why all the data was scaled using MinMaxScaler).
Performance Metrics
Coefficient of determination (R 2 ) is a widely used regression performance metric to measure how well the predictions of a model fit the actual data [28]. The metric's domain is R 2 ≤ 1, where R 2 = 1 means perfect predictions, R 2 = 0 is the baseline model that predictsȳ, and R 2 < 0 means poor predictions; it is calculated as shown in Equation (4). It is composed of both the sum of squared estimates of error (SSE) and the sum of squares total (SST), Equations (5) and (6), respectively, and was calculated for each target feature (F x , F y , F z ) using the testing dataset. When the error between predictions and the reference value is minimized, SSE ≈ 0, so R 2 = 1, which is concordant with the aforementioned fact, as the minimal error between samples would mean the predictive model is perfect, and hence, R 2 = 1.
SSE, known as the deviation between predicted and reference values, was calculated as shown in Equation (5).ŷ i refers to the ith prediction and y i to the reference value, and their squared difference is summed across n testing samples.
SST, known as the total variation in the data, was calculated as shown in Equation (6). y i refers to the ith sample andȳ to the target feature's mean, and their squared difference is summed across n testing samples.
The Pearson correlation coefficient (r) was also used as a performance metric to measure the linear relationship between predictions and reference values [28]. The metric's domain is −1 ≤ r ≤ 1: r = 1 means a strong positive linear relationship, r = −1 means a strong negative linear relationship, and r = 0 means a poor linear relationship. It is calculated as shown in Equation (7). x i is the ith sample of X series (in this caseŷ for predictions) and y i is the ith sample of Y series (in this case y for reference).
Data Pre-Processing
Finite impulse response (FIR) is a digital filter which was applied using the lfilter function from the scipy package in Python, with parameters n = 10, a = 1, and an additional parameter b calculated as shown in Equation (8). It is worth also noting that parameter b (numerator coefficient in a 1D sequence) must be an array of length n, and so should be repeated n times, whereas a (denominator coefficient in a 1D sequence) remains constant.
The filter helped avoid noisy signals being used to train the predictive model, applied before normalization to have cleaner data on each force (F x , F y , F z ).
Min-max feature scaling used MinMaxScaler class from the scikit-learn package in Python to normalize data depending on both the subject and the feature (independently). Data normalization helps transform each subject's data into a shared space; moreover, the scaler transforms values into a domain 0 < x < 1, which works best when using a DL approach to avoid exploding gradients. Feature scaling would need two equations: Firstly, a X std value calculated as shown in Equation (9).
Secondly, max and min for each subject and feature were also calculated; thus, for each X std value, we transformed it into an X scaled value, which was calculated as shown in Equation (10).
2.5. Deep Learning Models 2.5.1. RNN Instead of having a feed-forward network (from input to output), an RNN also takes into account past output h (t−1) and thus is connected to the future input x (t+1) . The neuron is connected to itself when not considering time t, as shown in Figure 3 on the left, which is the same structure unrolled through time on the right, where it is more apparent that output h 0 is not only the output of time 0, but is part of the operation of time 1. This process continues until frame t, and thus an ANN can process sequences via the inclusion of past output into the next operation. x t represents a sample at time t, A is a neuron with a certain activation function (usually tanh) that behaves like an ANN's neurons, and h t is not only the output at time t but also the input at time t + 1, with its respective sample x t+1 .
A 1-layer RNN would be used with a fixed set of standard hyper-parameters to test the model's performance without requiring great training times: 25 epochs, 128 batch size, and 5 sequence size. Additionally, the number of neurons was adjusted via trial and error, considering the training and validation loss graph across epochs, as the chart tells if the model is under-fitting or over-fitting the training dataset. The number of neurons changed in powers of 2 according to f (x) = 2 x , while adjusting x and keeping it always as a power of 2, as it helps the process. It finally reached 8 neurons per layer, where the model neither over-fitted the training dataset nor under-fitted it.
The recurrent activation function used was sigmoid, as shown in Equation (11), and the final activation function was tanh. This functions are useful when explaining both LSTM cell and GRU cell in Sections 2.5.2 and 2.5.3, respectively.
LSTM Cell
Long short-term memory (LSTM) cells have a different architecture than simple cells. Their structure is in Figure 4, and the set of equations required is in Equation (12). Three gates compose the cell: forget gate, input gate, and output gate.
Hidden [27]. It has a combination of σ(x) and tanh(x) functions, to create three main gates (forget, input, output) and so generate the short-term h (t) and long-term c (t) vectors.
The processing starts on the lower part with x (t) and the previously generated h (t−1) . They are used to calculate This controls the forget gate to determine which information of the long-term state should be erased, as a σ(x) function is used, and its values range from 0 (erase all) to 1 (erase nothing). On the other hand, x (t) and h (t−1) are also used to calculate i (t) and g (t) , as shown in (13a) and (13d), respectively. This consists of the input gate, where the σ(x) function in i (t) controls the information of g (t) that should be included in the long-term state. The addition of the forget gate and input gate is calculated in c (t) , as shown in Equation (12e), which is further used in the output gate with o (t) , which controls how the information from the long-term state should be read and output. The element-wise multiplication of both o (t) and c (t) makes the output y (t) , which is equal to h (t) (used in the next iteration as h (t−1) ), as calculated in Equation (12f).
GRU Cell
A gated recurrent unit (GRU) cell is a simplified version of the LSTM cell; see the set of equations in Equation (13).
There are some simplifications that are done by the GRU cell with respect to the LSTM cell: The long-term state vector c (t) is merged into the short-term state vector h (t) . Then, z (t) controls both the input gate i (t) and forget gate f (t) . The output gate o (t) is replaced by r (t) , which shows the information that will be shown to the main gate g (t) .
The process starts on the left-hand with the previous state h (t−1) and the data x (t) , which is used to calculate both r (t) and z (t) in Equations (13b) and (13a), respectively. The σ(x) function is used to control the amount of past information input to the main gate g(t) by r (t) , and the proportion of information that would be forgot and learned by z (t) and its complement (1 − z (t) ). In this sense, g (t) is the output of new information learned, calculated by r (t) and h (t−1) , and applying tanh in Equation (13c). The current output h (t) is then calculated in Equation (13d) using both the past information h (t−1) and the present information g (t) , regularized by z (t) in order to forget and learn at a complementary rate of information, such as (z (t) + (1 − z (t) ) = 1), because 0 < σ(x) < 1.
Model Creation
The DL models used combinations of parameters (loss function, sampling method, type of layer); they are compared in Figure 5. Firstly, raw data were extracted from the .txt files from forces and markers, using either up-sampling or down-sampling depending on the sampling rate. This was done to solve granularity and then merge both datasets into a single dataset that contained XYZ markers and forces for each velocity and subject. Then, an FIR filter and min-max scaling pre-processed data for each combination of subject and velocity. Data were further split into training, validation, and testing datasets according to the data division in Figure 2. Thereafter, data passed through a process of model creation, where the training and validation datasets trained and hence validated the model, based on n epoch = 25 to train, compute the loss function, and update its weights. According to the loss on the final epoch, for training and validation datasets, the model could over-fit, under-fit, or ideal-fit the training dataset; only when the model had a perfect fit could it be used with the testing dataset to compute the performance. Hence, we chose the best model using these metrics. XYZ Figure 5. Flow diagram of the DL models' creation. The process starts at the top when solving granularity and merging the raw .txt files of forces and markers. Then, data were pre-processed using FIR and min-max based on the combination of subject and velocity. Afterwards, the model was created using n epoch and regularized (over-fit, under-fit), and performance metrics were used to determine the best model.
As for the evaluation of the model using training and validation datasets, the model at the ith epoch, e i , computed training and validation loss. These computations across n epochs created a line plot with two curves. These curves validated the model before testing with the testing dataset, as they first required parameters' modifications. The three types of "fits" are:
1.
Under-fit: The validation curve is under the training curve, as the model is not complex enough to learn the training dataset. The parameters are not good enough to model the relationship between source and target features, so the loss function does not decrease at a stable rate. This model would not be valid, as the relationship between variables is not completely understood; the model should be more complex, adding more layers of neurons.
2.
Over-fit: The training curve is under the validation curve, as the model is overlearning from the training dataset by analyzing the tiniest details that do not appear when testing the model's performance on the validation dataset. The model becomes too specific for the training dataset, making it not helpful in a real-life scenario on an unseen dataset (validation or testing dataset). 3.
Ideal fit: Training and validation curves are similar; neither under-fits data nor overfits them. The learning rate is relatively equal to the speed of generalization, so it can learn new information while still understanding the training dataset.
Results
Each model was structured with a similar kind of architecture, as each model has: an input layer, a 1-layer RNN type of cell (LSTM, GRU, Simple), and a dense layer for one-dimensional outputs. Each RNN model has a specific type of layer on the second layer; the loss function does not change the architecture but how weights are updated. Table 2 represents the architectures of these models, where the output shape for the first layer (input) defines the form of input data: n seq represents the sequence size (5), and thus the set of samples used; n f eat means the number of source features used (87). While the dataset was composed of 96 features (32 data-available markers and 3 dimensions, 32 × 3 = 96), missing values were present in 9 features (L.MT1X, L.MT1Y, L.MT1Z, L.ASISX, L.ASISY, L.ASISZ, R.MT1X, R.MT1Y, R.MT1Z); hence, these features were dropped, and thus the number of features used was 87 (96 − 9 = 87). The output shapes for the second and third layers were their numbers of neurons; the only modifiable parameter was the number of neurons in the second layer, as the number of neurons in the third layer had to be three due to the three force dimensions that were being predicted (F x , F y , F z ). Table 2. In each RNN architecture, for each model that has a specific type of layer, the output shape is used for each force prediction on a given dataset, where n seq = 5 and n f eat = 87. Even though each model employs the same architecture, each type of layer has a different number of trainable parameters; thus, the number of trainable parameters varies depending on the model's architecture. With LSTM having the highest number of n param (3072) and Simple having the lowest (768), the difference comes down to the cell's architecture. As the cell's complexity increases, the number of parameters also increases. However, this does not necessarily mean that the model would take more time to train as the number of parameters increases, as the number of parameters only has to do with the modification of the cell depending on the weights; thus, it mainly represents its flexibility. (14), is used to calculate t elapsed until results, depending on the combinations of layer type, sampling method, and loss function in Table 3.
Type of
It is essential to see which combination has the fewest t elapsed , as DL models could take a considerable amount of time to train. The only things that changed were the layer type, sampling method, and loss function, as other parameters and data remained constant. It is noticeable in Table 3 that layer type is a determinant factor for the t epoch , as the Simple layer took 76, 66, and 73 s with the up-sampling method, which contrasts greatly with LSTM's times (31, 30, and 33 s), and GRU's times (33, 30, and 32 s), so, at first glance, it seems that the Simple layer is less efficient than LSTM and GRU layers. It is worth noting that the more data, the greater the t epoch would be as more data are processed; this is represented in each combination, as down-sampling times mostly are half of the up-sampling times because the up-sampling, in this case, was conserving the 9000 samples, whereas down-sampling was dropping even samples to reduce the number to 4500 samples. This phenomenon is represented in Simple's down-sampling t epoch (32,40,32) when compared with up-sampling t epoch (76, 66, 73). This can also be seen in the GRU layer down-sampling t epoch (16,17,33) when compared with its up-sampling (33,30,32). On the other hand, the loss function seems not to have a significant role in changing the t epoch , although in some cases of down-sampling using MSLE, t epoch seems to be less in LSTM and GRU layers (≈4 s) when comparing (19 s, 19 s) with 15 s in the case of LSTM, and (16 s, 17 s) with 13 s in the case of GRU.
Moving on to prediction performance, two metrics were used: the Pearson correlation coefficient (r), shown in Equation (7), and the coefficient of determination (R 2 ), shown in Equation (4). The models used the following set of parameters: sampling method (upsampling, down-sampling); type of layer (GRU, LSTM, Simple); loss function (MAE, MSE, MSLE); in addition, performance metrics obtained based on training, validation, and testing datasets-for the force on every axis (F x , F y , F z ), an additional average performance metric was calculated using all forces (Avg).
Based on the trained RNN's performance, the coefficient of determination (R 2 ) was used as a performance metric to measure a model's performance (higher is better). Results are in Table 4, divided by the type of performance metric (R 2 , |r|) and sampling technique applied to the data (up-sampling, down-sampling). In ML, testing the dataset's performance is usually considered the most valid result because it reassembles a real-world scenario where the model would not have access to the new data beforehand. Therefore, the given table shows only testing's dataset performance. Table 4. Coefficient of determination (R 2 ) and the absolute value of Pearson correlation coefficient (|r|) on the testing dataset, while varying RNN's loss function L(ŷ, y), sampling technique, and type of layer, for each force dimension (F x , F y , F z ), and the average coefficient overall dimensions (Avg). Highest performance coefficients are in bold, which indicates the model with the best combination of parameters: up-sampling, MSE as the loss function, and LSTM as the type of layer. This average performance for each metric is in Figure 6. A single plot could plot both metrics, as their domains are the same when considering the absolute value of r, as −1 ≤ r ≤ 1 and R 2 ≤ 1, but 0 ≤ |r| ≤ 1, thereby not showing if the Pearson correlation is positive or negative, but only measuring how strongly correlated are both sets of data. There is a vertical division on the plot: by the sampling method, then by the type of dataset, and finally by the loss function used. Moreover, the color represents the type of layer, and the type of dot the metric (|r|, R 2 ). If both metrics are at maximum, the model would achieve the best performance, as higher is better for both metrics. . They were evaluated on several datasets (training, validation, testing) and performance was averaged for the three force dimensions (F x , F y , F z ) into a single metric (Avg). Each type of marker represents a performance metric (circle, triangle), and color represents the type of layer used (red, green, blue). The x-axis shows the combinations of loss functions, datasets, and sampling techniques; thus, the combination of marker and color reflects the performance of a given model in the y-axis. In this case, the combination with the highest R 2 on the testing dataset was (up-sampling, MSE, LSTM), which is the same combination that had the best |r|, as shown in Table 4.
Metric
As shown in Figure 6, the best |r| performance was achieved using up-sampling (right side) on the validation dataset, via LSTM, which also had the best R 2 's performance and MSE. However, performance should be evaluated on the data unknown to the model, the testing dataset, as that reassembles a real-life situation. When considering only the testing dataset, the LSTM layer delivered the best performance, using the up-sampling method and MSE loss function, by both |r| and R 2 performance metrics. Even though the performance was better using the up-sampling method, models trained using up-sampled data took double the time used for training using down-sampled data. Considering the time factor, differences in metrics on the testing dataset are not extensive, considering the best performance delivered by LSTM and MSE.
Overall, the type of layer seems to be a determinant factor in model performance: LSTM showed the best performance in both metrics, with all combinations of loss functions and sampling methods; followed by GRU with medium performance in most cases, with a few exceptions in which the Simple layer performed better, such as on the validation dataset using MAE as the loss function; and finally, the Simple layer, which not only took longer for training, but also performed worst when compared to the other types of layers. Moving on to loss functions, MSE performed best, and both MAE and MSLE decreased the model's performance depending on the dataset, type of layer, and sampling method, as MSLE with Simple on the testing dataset performed best when using down-sampled data, whereas the same combination performed worst using up-sampled data. Now moving on to the training-validation graphs concerning the number of epochs, only for the model with the best performance without considering training time (upsampling, LSTM, MSE), did the training process generate four graphs (loss function value, and the other loss functions). Even though the parameters were changed only due to the loss function, keeping track of different metrics also helps to diagnose better how well the model is adapting to the training dataset, and the generalization process that it is following on the validation dataset, as it would be helpful later on the testing dataset.
For the best performing model, the MAE function graph of 4, 8, and 16 neurons is in Figure 7. Although the loss function is MSE, the MAE plot shows more clearly how the model was over-fitting when using 4 and 16 neurons, as the validation curve tends to be higher than the training curve; that it is to say that when using n epoch = ∞, the trend of MAE validation > MAE training would most likely still be present. On the other hand, when using eight neurons, it seems to converge on similar MAE values for both validation and training datasets MAE validation ≈ MAE training ; this would then represent an ideal fit rather than over-fit of the training dataset, and thus validates the use of eight neurons, based on f (x) = 2 x to draw possible neuron values. Based on the previously mentioned descriptions, Figure 7 would be ideal because both training and validation loss converged to a similar loss when considering 25 epochs, even though validation loss was lower than training loss when n epochs < 5. The under-fit behavior was corrected in the following epochs because the neural network learned from the training dataset, lowering the training loss while still lowering the validation loss at a similar rate to not over-fit the dataset. Most of the models created showed a similar trend, which validates the models, as they fitted the training dataset ideally and thus are capable of predicting.
To visually evaluate predictions done by the best performing model (up-sampling, LSTM, MSE), a line plot for each force dimension (F x , F y , F z ) contrasting the measured and predicted values is shown in Figure 8. It comprises three plots with scaled force values (N) on each vertical axis, and the horizontal axis represents time in seconds (s). In this case, a plot of 500 s was enough to show a pattern and how well the model was performing at predicting each force. Figure 6 and Table 4 was used to predict the values using the testing dataset. F x and F y seem to have clear cyclical and predictable trends, with which the model performed better when compared to F z , which appears to have some randomness involved.
Based on predictions and measured values in Figure 8, force predictions are close to measured values on F x and F y . This might be because measured values behave cyclically, and thus an RNN with a specific sequence size can detect the trend and adapt their parameters more quickly than when no movement is present. Moreover, predicted values in F y reached values < 0; this is an impossible value for scaled forces, as the min-max scaler changes value's domain to be between 0 and 1, this would only create faulty, impossible predictions, and so an improvement in forecasts would be to limit the predicted value to be between 0 and 1, improving performance metrics when comparing with the reference values. This could be achieved by using a σ(x) function as the activation function of the final dense layer, given the proposed architecture in Table 2, as its range 0 ≤ σ(x) ≤1, thereby guaranteeing that the output is between 0 and 1, which is perfect with a min-max scaler's domain.
Discussion
It is interesting noting the inverse relationship between n params and t epoch , as the number of trainable parameters within the layer only makes the model more flexible, not slower to train. In fact, when relating trainable time, Table 3, and trainable parameters, Table 2, the Simple cell had the highest t epoch but the least n params , while LSTM had the lowest t epoch and the highest n params . Another reason that could have been behind the short training times of LSTM and GRU is the graphical processing unit (GPU) optimization for those layers when using default cell parameters (σ(x) as recurrent activation function and tanh(x) as activation function).
LSTM and GRU not only had the shortest training times in Table 3, but also performed best according to the performance metrics in Table 4. This suggests that an increase in complexity, and hence an increase in the number of parameters, would let the cell adapt better to the training data, and thus deliver better performance with less training time (when considering LSTM and Simple). The relations between the complexity of cell between the used cells, according to trainable parameters in Table 4, and the overall structure of each cell explained in Section 2.5, are the following: GRU is more complex than Simple, as it uses Equation (13) to control the flow of information with z (t) and r (t) , and also requires more trainable parameters than only changing weights of A, as seen in Figure 3. LSTM is more complex than GRU, as it consists of three gates (input, forget and output) controlled by Equation (12), which separates the long-term memory c (t) from the short-term memory h (t) . This would then increase the number of trainable parameters and lets it better fit the training data to not only determine which information to remember, but also how long the information should be remembered.
Moving on to force prediction, F x and F y had higher performance metrics than F z in Table 4, and some even predictions had an R 2 < 0, which means bad predictions; this was the case when predicting F z using down-sampled data, Simple layer type, and MAE/MSE as loss functions, with R 2 of −0.19 and −0.04, respectively. Although terrible performance was not present for F x and F y , the previously mentioned combination had positive R 2 values of 0.57 and 0.56 for F x and 0.93 and 0.91 for F y . The significant differences between these forces may be due to the cyclical pattern they display, as shown in Figure 8. F y and F x shared a more cyclical pattern than F z , even though all forces were filtered using an FIR filter. Another reason may have been the orientations of the XYZ axes in the treadmill used in [8]. In this study, the x-axis is parallel to the walking direction of the participants, the y-axis follows the opposite direction of gravity force, and the z-axis is orthogonal to them. Under this axis definition, the lowest quantity of movement and force would be expected in the z-axis during a walking exercise. The average of forces performance (Avg) is greatly affected by the low performance when predicting F z , when considering R 2 of the best performing model (LSTM, up-sampling, MSE), it had Avg = 0.68, although when removing F z and only taking the average performance of F x and F y , R 2 = 0.92, which suggests that great predictions could further improve.
Many changes could be done to improve model's predictions, including: (1) Increasing the number of subjects in the dataset; hence, more data could be used to train the model, and so it could generalize more. (2) Using the subjects' meta-data dataset, as [8] included a dataset that provides for more information regarding the persons, such as weight and height, which could be used to adjust the force prediction for each subject, as the force not only depends on acceleration, but also on mass, which is not considered on the current model. This could be used when using an architecture of two neural networks, the first consisting of an RNN that makes predictions for each force (F x , F y , F z ), and the second an ANN that takes as input both the predicted forces and the subject's meta-data to adjust the predictions. (3) Increasing the number of epochs, as in this work, n epoch = 25 was used to find the best combination of parameters to avoid spending longer than 30 min on a model's training; however, increasing the number of epochs would most likely increase the model's performance as it would learn better on the training data; although, if not evaluated correctly, this could lead to over-fitting or under-fitting by the model, but the training/validation curves in Figure 7 seem to show an ideal fit trend as the number of epochs increases. (4) Trying other ML algorithms [29] or DL architectures [30]. CNN [31], which is usually used to analyze images, could also be used in time-series analysis [32].
The current work presented a quick grid search that used a one-layer RNN with n epoch = 25 to determine which model performed best, according to a grid search that iterated over the combination of a given set of parameters (loss function, type of layer, sampling technique). The model's predictions could be improved when considering more complex DL architectures, such as the 4-layered DL models used in [32], which combined CNN, RNN, and ANN layers to achieve a 99% accuracy fall-risk prediction using a force plate. However, the model had 362, 243 parameters, as opposed to our 1-layer RNN with LSTM using 3099 parameters; it also used force, moment, and center of pressure (CoP) measurements, whereas our model only used markers positions without further processing. On the other hand, other studies used 1-layered ANNs biomechanical features, such as [23], which used GRF, vertical displacement, velocity of the center of mass, and jump power to predict joint torque. The correlation coefficient was 0.967, making it efficient but not very practical, as data have to be processed and converted to biomechanical features. In contrast, [21] used a similar approach concerning the current work on using raw XYZ positions and body weight to predict joint angles and accelerations, reaching an R 2 > 0.95.
The presented work can be used to predict in real-time forces generated at specific joints with given video; and can be applied to applications in ergonomy [33], posture correction, safety in the workplace [34], training, sports performance, and injury prevention [14], among others. The proposed framework can be implemented easily with open-source programming tools (e.g., Python); however, knowledge of deep learning and biomechanics is needed. Therefore, in order to provide insight into athletes' performance, it would be ideal to use the support of a biomechanical analyst to act as a mediator for a sports-biomechanics translation [35]. Furthermore, to allow a more user-friendly approach, the method could be embedded into a mobile application so that athletes and coaches can use it in a more direct manner, similarly concept of commercial applications such as Kinovea [36] and Dartfish [37].
Conclusions
In this work, the use of RNN structures in the context of sports biomechanics was successful, as the comparisons between the original and the predicted forces were satisfactory, as observed in Figure 8. While investigating the RNN training performance using different architectures, the t epoch and t elapsed parameters allowed us to confirm that simple layering (in this application) was less efficient than GRU and LSTM. In our case, the more complex architectures were the ones that generated the best results.
Rather than proposing a unique solution, in this work, many parameters were explored to obtain the best model. For example, performance was measured using metrics such as the Pearson correlation coefficient (r) and coefficient of determination (R 2 ). Additionally, combinations of modeling parameters, such as sampling methods (up-sampling, downsampling), type of layer (GRU, LSTM, Simple), and loss function (MAE, MSE, MSLE), were added to the evaluation of the models to find the best possible model.
Considering all the work presented in this manuscript, it is clear that the proposed framework was successful at correctly predicting the forces generated on a force platform, using positions from markers. This is a significant result in sports biomechanics because it means that using similar methods, applications can be built to predict real-time forces generated in athletes' joints, and coaches could use such predictions for injury prevention, technique assessment, and personalized coaching, among others. Future work will include the implementation of the proposed framework, but using data obtained from more realistic scenarios (e.g., using wearables and cameras) instead of using information from a database. The real-world proposal would also be tested using a similar approach to the one presented in order to find the best model and then test it in real-time (where upper limit constraints on prediction times are essential to take into account due to the dynamics of the real-time application).
Finally, it is essential to remark that this work has increased our understanding of automatic force prediction in human movements and may help develop new clinical approaches to avoid the development of fatigue fractures and implementation of proper physical training regimes. Institutional Review Board Statement: Ethical review and approval were waived for this study because no experimental procedures with humans were carried out. The data analyzed in this work came from a publicly available dataset described in [8], and the information regarding the ethical committe approval of the study are in there.
Informed Consent Statement: Patient consent was waived for this study because no experimental procedures with humans were carried out.
Data Availability Statement:
The data used in this work are available in [8].
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 11,218.2 | 2022-05-27T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Human Resources in the Knowledge Economy: Training and Developing Modern Management Skills
: This paper investigates the role of human resources in the knowledge economy and the methods for training and developing modern management skills. The study analyzes training models and clarifies the needs and solutions for enhancing management capabilities. The results indicate that investing in human resource training and development is a crucial factor for businesses to maintain competitive advantages in a globalized and high-tech economy.
INTRODUCTION
In the era of globalization and rapid technological advancements, the knowledge economy has emerged as a crucial driver of economic progress.This shift emphasizes the primacy of knowledge and information over traditional resources like natural reserves or manual labor.Human resources are now viewed as knowledge bearers, necessitating the urgent development of modern management skills.Information technologies are pivotal in this landscape, requiring continuous education and selfimprovement to manage them effectively.The integration of information technology in enterprise management, particularly in human resource management models, has seen significant advancements, propelled by the technological revolution and globalization.Ultimately, the quality of human resources is paramount for sustainable development in a knowledge-based economy.
To effectively analyze and enhance management skills in the knowledge economy, it is crucial to consider factors such as generic skills for employability (Bejinaru, 2013), the transformation of knowledge management in response to the digital economy (Roshchin et al., 2022), and the development of cloud computing skills to promote knowledge application (Sadik & Albahiri, 2020).Additionally, focusing on essential skills like basic knowledge, communication, digital proficiency, vocational skills, and leadership is vital for educational institutions to align their programs with the needs of the knowledge economy (Belooshi & Ma'amari, 2020).Understanding how students acquire entrepreneurial skills within the context of a knowledge economy is also essential for shaping training programs (Bejinaru, 2018).By synthesizing these perspectives, businesses and training institutions can tailor their approaches to improve human resource quality and meet the dynamic demands of the labor market.
This research hopes to improve human resource quality and promote economic development -a knowledge economy -in Vietnam by focusing on training and developing modern management skills.
LITERATURE REVIEW 2.1. Concept of the knowledge economy
As defined by Snellman (2004), the knowledge economy is characterized by knowledge-intensive activities that contribute to rapid technical and scientific progress, acknowledging the swift obsolescence that characterizes this environment.(Alhasadi & Demirel, 2020).It also highlights the role of knowledge-intensive activities in accelerating technological advancements within the economy.(Asongu & Kuada, 2020) Emphasize that in a knowledge economy, economic prosperity is intricately linked to the quality, quantity, and accessibility of information available rather than traditional means of production.
Furthermore, (Menezes et al., 2021) underscore the essence of intellectual capacities in the knowledge economy, emphasizing that raw materials are no longer limited to physical resources but encompass immaterial and non-consumable resources.(Muhammad, Ali, & Iliya, 2015) Further reinforce this perspective by highlighting how knowledge drives economic growth, employment, and wealth creation in a knowledge-based economy.
In essence, the knowledge economy represents a transformative phase in which knowledge, innovation, and information are the primary drivers of economic development and prosperity.It signifies a departure from traditional production-centric models and a more dynamic and knowledge-driven approach to fostering growth and competitiveness.
The Organization for Economic Co-operation and Development (OECD) further elaborates on the concept, describing knowledge-based economies as fundamentally reliant on producing, distributing, and utilizing knowledge and information (Gangi, 2017).This definition aligns with the notion that in a knowledge-based economy, knowledge serves as a primary driver of economic progress, employment generation, and wealth creation (Muhammad et al., 2015).
Moreover, the literature underscores that a knowledge-based economy is distinguished by its heavy reliance on intellectual capital in economic production (Husna, 2023).This shift towards intellectual capabilities over physical inputs or natural resources is a defining feature of such economies (Yokhaneh & Baghoumian, 2014).Additionally, the role of universities in fostering a knowledge-based economy is highlighted, emphasizing their importance as critical players in advancing modern economies (Salem, 2014).
The literature emphasizes the critical role of reflective teaching in enhancing instructional delivery within a knowledge-based economy, particularly in the context of education, particularly technical and vocational training (Oviawe, 2020).This underscores the importance of pedagogical approaches that align with the requirements of an economy driven by knowledge and innovation.
The knowledge economy represents a transformative stage in economic development where knowledge and information are paramount.It underscores the critical role of education, training, and intellectual assets in driving productivity, economic growth, and value creation.
Human resources in the knowledge economy
In the context of the knowledge economy, human resources play a pivotal role in driving sustainability and business success (Zubović et al., 2015).This shift towards a knowledge-based economy has highlighted the significance of human capital over traditional physical and financial resources (Veselinović et al., 2022).Companies operating in competitive markets rely heavily on the organization and management of human resources to maintain competitiveness (Zubović et al., 2015).The sustainable development of nations in a knowledge-based economy is intricately linked to the quality and development of human resources (Kojić et al., 2020).As the nature of work evolves in the contemporary economy, personal resources have emerged as a critical factor in enhancing job engagement for knowledge workers (Toth et al., 2019).
In the era of the digital economy, enterprise competitiveness is closely tied to effective human resource management practices (Liao & Zhang, 2022).The knowledge economy presents vast opportunities as human creativity and capacity for innovation are considered limitless resources (Eftimoski & Milenkovski, 2012).Organizations striving to achieve their goals require competent, knowledgeable, and productive human resources to drive performance and success (Nursiani et al., 2023).In a knowledge economy, knowledge is the primary resource, and the innovation capacity of employees serves as a critical competitive advantage (Drašković et al., 2020).This emphasis on intellectual capabilities over physical inputs characterizes the essence of a knowledge economy (Powell & Snellman, 2004).
Technological progress in a knowledge economy underscores the critical role played by human resources in driving innovation and economic growth (Csugány, 2018).The transition toward knowledge economies necessitates a shift toward intellectual capabilities as the primary driver of success (Abu-Shawish et al., 2021).The interplay between knowledge, human capital, and economic growth is evident in developing countries, where investments in R&D, human resources, and technology diffusion are crucial for progress (Poorfaraj & Keshavarz, 2011).Initiatives like the Human Resource Development Council aim to enhance labor productivity, technology transfer, and innovation through lifelong learning and skill development (Awang et al., 2010).
The concept of human capital has long been recognized as a fundamental element influencing innovation, technology adoption, and overall economic prosperity (Madariaga, 2022).The transformation towards a knowledge economy underscores the need for advanced skills, continuous learning, and a focus on intellectual capabilities to thrive in a competitive global landscape.In this paradigm, human resources are a support function and a strategic asset that drives organizational success and sustainability.The ability to attract, develop, and retain top talent becomes a critical differentiator for companies seeking to excel in the knowledge economy.
Modern management skills
In the realm of the knowledge economy, modern management skills have evolved to encompass a diverse array of elements crucial for organizational success.The interplay between strategic thinking and competitive advantage is a focal point in the research (Dixit et al., 2021).The study establishes a direct link between creativity, corporate culture, knowledge management, and strategic thinking, showcasing how these factors synergistically contribute to gaining a competitive edge in the market.This connection underscores the strategic importance of cultivating a conducive environment that nurtures creativity, values knowledge management, and fosters a culture that supports strategic thinking.Additionally, (Gross, 2017) explores the relationship between innovative behavior and strategic thinking, emphasizing strategic thinking as a dynamic capability that serves as a competitive tool.By understanding the factors influencing strategic thinking, organizations can harness this capability to drive innovation and maintain a competitive edge in dynamic market environments.
Emotional intelligence, another critical aspect of modern management skills, has garnered significant attention in the literature.Studies by (Lysytsia et al., 2020) and (Halder, 2023) delve into emotional intelligence's gender-specific and sectorspecific implications in HR management and managerial effectiveness, respectively.These studies highlight how emotional intelligence can influence decision-making, conflict resolution, and overall managerial effectiveness, underscoring its relevance in contemporary management practices.Furthermore, the research by (Anjum et al., 2015) emphasizes the cultural implications of emotional intelligence, showcasing how emotionally intelligent managers are more inclined towards engaging in innovative entrepreneurial activities.This cultural perspective underscores the universal relevance of emotional intelligence in driving managerial success and organizational innovation.
Emotional intelligence is crucial in shaping organizational dynamics and leadership effectiveness in educational leadership.Amelia (2021) explores the impact of emotional intelligence management on leadership quality, emphasizing how emotional intelligence skills can enhance leadership capabilities and contribute to organizational goal attainment.This underscores the multifaceted influence of emotional intelligence on leadership effectiveness across diverse organizational contexts.
The evolving landscape of the knowledge economy necessitates a nuanced understanding of modern management skills, encompassing elements such as emotional intelligence, strategic thinking, and data-driven decision-making.By integrating insights from reputable sources, it is evident that modern managers must possess a diverse skill set to navigate complex organizational challenges, foster innovation, and drive sustainable competitive advantage.Emotional intelligence and strategic thinking are pivotal to modern management skills, shaping managerial effectiveness, organizational performance, and leadership quality in diverse contexts.As organizations strive to thrive in the knowledge economy, cultivating these skills among managers becomes imperative for long-term success and strategic growth. .This model allows organizations to assess not only the immediate reactions of participants to the training but also the extent of knowledge and skills acquired, the application of these skills in the workplace, and the overall impact of the training on organizational outcomes.By incorporating these evaluation levels, organizations can understand the effectiveness of their training programs and make informed decisions for future training initiatives (Alsalamah & Callinan, 2021).
While the Kirkpatrick model has been widely used for training evaluation, Cahapay (2021) points out some limitations of its application in higher education evaluation.The historical context of the Kirkpatrick model was to aid managers in systematically accounting for outcomes among employees and organizational systems.However, in the context of higher education, where the goals and outcomes may differ, the model's applicability may be constrained.This highlights the importance of considering training programs' specific context and objectives when selecting an evaluation model (Cahapay, 2021).
In a practical application of training needs assessment, Jiyenze et al. ( 2023) conducted a study to identify the actual training needs in Tanzania related to health management, leadership, and governance capacities.Through thematic analysis and qualitative research, the study aimed to determine the expressed training needs of health managers, essential competencies for managerial roles, and topics crucial for management, leadership, and governance training.This approach underscores the importance of conducting thorough needs assessments to tailor training programs to the specific requirements of the target audience (Jiyenze et al., 2023).
Sahni (2020) delved into assessing managerial training effectiveness using the Kirkpatrick framework.By investigating the impact of managerial training through the lens of the Kirkpatrick model, the study aimed to evaluate the training program's effectiveness in enhancing managerial skills and knowledge.This research contributes to the body of knowledge on evaluating training outcomes and underscores the value of using established models like Kirkpatrick for assessing training effectiveness (Sahni, 2020).
In the context of developing countries like Ecuador, it highlighted the need to assess statistical knowledge and training needs among business professionals.The study emphasized the importance of equipping business managers with the necessary statistical skills to support decision-making processes effectively.This underscores the significance of identifying and addressing specific skill gaps through targeted training interventions to enhance managerial capabilities in diverse settings (Mosquera-Gutierres, 2024).
It furthermore explored kindergarten teachers' perceptions of management training issues and needs.The study revealed that teachers viewed management training as a critical factor for effectiveness and expressed a preference for a combination of introductory and periodic training organized by educational institutions and policy bodies.This underscores the importance of understanding the perspectives and preferences of training participants when designing and delivering management training programs (Παναγιωτόπουλος et al., 2019).Vishwakarma and Tyagi (2017) examined the post-reform training needs of frontline managers in Indian power distribution companies.By assessing managers' perceptions of training-related factors such as clarity, budget, scheduling, and resource availability, the study aimed to identify areas for improvement in training programs.This research underscores the importance of aligning training initiatives with frontline managers' specific needs and expectations to enhance their performance and effectiveness in their roles (Vishwakarma & Tyagi, 2017).
In a healthcare setting, Omondi (2020) investigated the influence of training programs on the performance of health workers at Kakamega County General Hospital.The study utilized purposive and stratified sampling methods to assess the impact of training on different categories of health workers.By evaluating the relationship between training programs and performance outcomes, the research aimed to provide insights into the effectiveness of training interventions in improving healthcare delivery.This highlights the critical role of training in enhancing the skills and performance of healthcare professionals (Omondi, 2020).
Overall, the synthesis of these studies underscores the importance of structured training models, comprehensive needs assessments, and rigorous evaluation mechanisms in designing practical management skills training and development programs.By incorporating elements such as training needs assessment, program design, implementation, and evaluation, organizations can ensure that their training initiatives are targeted, impactful, and aligned with the specific requirements of their workforce and organizational goals.Additionally, using established evaluation models like the Kirkpatrick model enables organizations to measure the effectiveness of training programs across different levels and make data-driven decisions to enhance training outcomes and organizational performance.
The literature review indicates that human resources in the knowledge economy need to be equipped with specialized knowledge and modern management skills.Continuous training and development are crucial to ensure that the workforce can meet the increasingly demanding requirements of the economy.Effective training models and the integration of technology with traditional training methods will enhance the quality of human resources, thereby contributing to the sustainable development of businesses and the economy.
Result evaluation
The demand for modern human resource management skills is rising in the current knowledge economy.This emphasis on digital skills is supported by (Amalia, 2024), who discusses how adopting digital-based HR technologies like HRIS and artificial intelligence can effectively manage and motivate the workforce, albeit facing challenges such as technology integration complexity and cultural changes.
Moreover, (Taha, 2024) underscores the significant correlation between human resource skills and organizational innovation, indicating that organizations aiming for innovation rely on these skills to enhance products.(Lin, 2024) further emphasizes the profound changes in human resource management due to digitalization, especially in the context of organizational management evolution.(Wang, 2024) delves into the impact of employees on enterprise development within innovative and entrepreneurial enterprises, stressing their critical role in driving organizational success.
Strategic human resource management is crucial in addressing business challenges and long-term objectives, as discussed (Alsaadat, 2019).It also highlights the increasing demand for digital skills in HR, aligning with the need for digitalization in the field.Additionally, Sutrisno (2024) points out the crucial role of human resources in the success of digital business strategy implementation, especially in SMEs.
The relationship between strategic thinking and strategic human resource management is evident in the works (Alomari, 2020) and (Bahrampour et al., 2021), emphasizing the importance of good HR management in enhancing company skills and values.Furthermore, (Parsehyan, 2020) discusses how HR management can contribute to organizational innovation through mechanisms that drive change.
The literature supports the notion that in the knowledge economy, modern human resource management skills, particularly digital skills, strategic thinking, and flexible leadership abilities, play a pivotal role in enhancing organizational productivity, efficiency, and innovation.
Meaning and practical applications
In today's rapidly evolving business landscape, training and developing modern management skills are crucial for businesses to maintain competitiveness and adapt to market changes (Alabdulaziz et al., 2022).Training programs should prioritize enhancing digital capabilities, data analysis skills, and project management abilities to align with the demands of the knowledge economy (Alabdulaziz et al., 2022).Educational institutions also play a vital role in preparing students for the future workforce by updating and innovating curricula to ensure students acquire the necessary skills (Ashraah & Yousef, 2020).The integration of theory and practice, coupled with the application of modern educational technologies, is essential for students to meet the requirements of the new economy (Ashraah & Yousef, 2020).
Research emphasizes the importance of incorporating knowledge economy skills into educational curricula to equip students with the competencies needed in the modern workforce (Belooshi & Ma'amari, 2020).It is recommended that students, especially at the master's level, receive training on knowledge economy skills and stay abreast of recent developments to enhance their preparedness for the evolving job market (Alabdulaziz et al., 2022).Additionally, the study on knowledge economy skills in Oman highlighted the significance of basic, life and professional, digital, interpersonal, and communication skills for future education (Belooshi & Ma'amari, 2020).
Furthermore, the study on entrepreneurial skills needed in the knowledge economy underscores the essential contribution of economics and business education to developing entrepreneurial skills among students (Bejinaru, 2018).This highlights the importance of educational institutions offering programs that foster entrepreneurial abilities to meet the demands of the evolving economy (Bejinaru, 2018).Moreover, the application of modern information systems in educational courses has been shown to enhance the productivity of the educational process and positively impact students' knowledge levels and the development of crucial competencies (Tukshumskaya et al., 2020).
Incorporating modern technology into teaching methods has become prevalent in colleges and educational institutions, indicating a shift towards leveraging technology to enhance the learning experience (Ghory & Ghafory, 2021).The utilization of information and communication technologies in innovative teaching methods has been recognized as a tool for socio-economic development, emphasizing the importance of technology in advancing educational practices (Nemchenko et al., 2021).Additionally, the study on modern communication technologies in professional education highlights how these technologies form the foundation for activating the educational process and improving graduates' competency levels (Smirnova et al., 2019).
The circular economy skills play a significant role in the regional dimension, emphasizing the importance of a comprehensive approach that integrates theoretical methods with empirical analysis (Nikitaeva, 2024).This holistic approach to developing skills aligns with the need for a well-rounded skill set encompassing various aspects of the modern economy (Nikitaeva, 2024).
Moreover, understanding the skill provision in the gig economy from a network perspective sheds light on the implications for gig economy workers and platforms, emphasizing the importance of adapting skills to the changing nature of work (Huang et al., 2019).
In conclusion, the evolving landscape of the knowledge economy necessitates a proactive approach from businesses and educational institutions to equip individuals with the skills required to thrive in a rapidly changing environment.By focusing on digital capabilities, data analysis skills, project management abilities, and entrepreneurial skills, businesses, and educational institutions can contribute to developing a highly skilled workforce ready to meet the challenges of the new economy.
Limitations of the study
Although the study produced many significant results, some limitations still exist.First, the scope of the study mainly focused on a few businesses and educational institutions in large cities, which may limit the ability to generalize the results to the entire country.Second, due to limited time and resources, the research mainly used qualitative methods, which did not fully demonstrate the quantitative aspects of the problem.To overcome these limitations, future studies should expand the scope of investigation to different regions and apply quantitative research methods to have a more comprehensive view.In addition, the use of advanced data analysis tools will also help improve the accuracy and objectivity of research results.
Recommended for the future
Based on the results and limitations of the current study, several recommendations for future research include: Expanding the scope of research to different regions to assess the feasibility and effectiveness of modern management skills training programs in more diverse contexts.
I am applying a mixed-methods research approach, combining qualitative and quantitative methods to provide a more comprehensive and in-depth understanding of the needs and effectiveness of management skills training in the knowledge economy.
It is conducting further studies on the influence of cultural, social, and technological factors on the development of modern management skills, thereby offering tailored training solutions that align with the specific characteristics of each region and industry.
These recommendations aim to enhance the current study and open new research avenues, contributing to the improvement of training quality and human resource development to meet the increasing demands of the knowledge economy.
CONCLUSION
This study elucidates the essential role of human resources in the knowledge economy, particularly in the training and development of modern management skills.The research findings affirm that in a globalized and digitized economy, possessing a high-quality workforce with modern management skills is crucial for enterprises to maintain and enhance their competitive position.
Firstly, the study identifies and clarifies the modern management skills necessary for human resources in the knowledge economy.These skills include leadership, strategic thinking, project management, and proficiency in using digital technologies.These skills not only improve the operational efficiency of enterprises but also foster innovation and creativity.
Secondly, the study demonstrates that investing in human resource training and development is a long-term strategy that brings sustainable benefits to enterprises.Training programs designed to align with the practical needs of businesses and market development trends will help employees enhance their capabilities and be ready to face new challenges.
Lastly, the study emphasizes applying advanced and flexible training methods.These methods include online learning, modular training, and continuous training programs.This not only saves costs but also optimizes the enterprise's time and resources.
These research findings suggest that training and developing modern human resources management skills is a decisive factor in the success of enterprises in the knowledge economy.To achieve this, enterprises must have a clear training strategy, invest reasonable resources, and apply effective training methods.
The findings from this study provide practical suggestions for businesses and managers in designing and implementing human resource training programs.Additionally, the study opens up new research directions on the factors influencing the effectiveness of training programs in the ever-developing knowledge economy.
2. 4 . 2 . 5 .
Human resource training and development Human resource training and development are crucial in enhancing an organization's competitiveness in the knowledge economy.The studies by (Chalise, 2020), (Nwali & Adekunle, 2021), and (Binh, 2021) underscore the critical role of human resource development in enhancing productivity, efficiency, and institutional governance.They emphasize that organizations must invest in training and development to stay abreast of industry best practices and remain competitive globally.Furthermore, research by Burrichter et al. (2022) highlights the importance of modern technology in human resource management for sustainable development.Leveraging technology in human resource management is essential for organizational success and growth in the current business landscape.The synthesis of these references underscores the vital role of human resource training and development in improving organizational competitiveness in the knowledge economy.Organizations can enhance productivity, efficiency, and overall performance by aligning training programs with business needs, leveraging technology, and investing in employee development, thereby gaining a competitive edge in the global market.Modern management skills training and development model In modern management skills training and development, following a structured model encompassing various vital stages is imperative.One of the widely recognized models in training and human resource development is the ADDIE model, which stands for Analysis, Design, Development, Implementation, and Evaluation.This model provides a systematic framework for designing and implementing effective training programs (Sayed & Agha, 2015).Moreover, Kirkpatrick and Kirkpatrick (2006) shed light on evaluating training effectiveness through different levels.They propose a four-level model for training evaluation, which includes Reaction, Learning, Behavioral, and Results Glomb et al. (2018) conducted a needs assessment for simulation training for prehospital providers in Botswana, focusing on improving assessment and clinical management skills in high-risk situations.The study demonstrated the effectiveness of simulation-based training in enhancing providers' skills, particularly in challenging scenarios.This highlights the value of utilizing innovative training methods like simulation to address specific skill requirements in specialized fields such as emergency medical services(Glomb et al., 2018). | 5,524.4 | 2024-07-18T00:00:00.000 | [
"Business",
"Economics",
"Education"
] |
The effect of surface temperature on dynamics of water droplet in minichannel with gas flow
The experiments have been carried out to study dynamics of liquid droplets, blown by the gas flow in a mini-channel. The mean velocity at which the droplet motion over the substrate starts was determined depending on the surface temperature at different droplet volumes. The shadow method was the main method of measurement. The advancing and receding contact angles were measured depending on the gas flow rate. The friction force was determined using the advancing and receding contact angles and droplet size. A motion of a droplet was also observed from the top. The local velocity and acceleration of droplet were calculated.
Introduction
The development of microelectronics is closely related to the problem of heat removal from the semiconductor chip.To date, the most affective cooling systems are the systems based on impact jets and sprays, two-phase flows in minichannels and fluid flows in microchannels.These systems enable removal of heat fluxes of up to 500 W from the chip size 10 to 10 mm [1,2,3,4].However, already now the electronic industry is ready to produce components where the heat flux density can reach values of 1 kW/cm 2 and higher [6].Even the most effective systems that use the two-phase flows are unable to cool such components, which is a technological barrier to the further development of microelectronic systems.One of the technical solutions to achieve significant intensification of heat transfer is a device, forming near-wall droplet flows in micro -and minichannels.The transition from a continuous film flow to the droplet flow with an increase in the length of the contact lines leads to the intensification of heat transfer at evaporation [7].
Above all the dynamics of a droplet in the gas flow is studied in the absence of evaporation.There are studies about dynamics of liquid droplet in turbulent flow in a sufficiently high channel comparing with height of droplet [8] and including vibration effect [9].Numerical work [10] reports the dynamic and interaction of two droplets in a channel with gas flow, where the key parameter is the distance between the droplets.This interaction between the droplets has to be considered for designing cooling systems with forming near-wall droplet flows.
Significantly, changing viscosity and surface tension in liquid droplet on heated substrate may have a significant impact on its dynamics.But there is lack of data in the literature devoted to this aspect.In the scope of present work the dynamics of single droplet has been explored depending temperature of substrate and size of droplet in mini-channel with gas flow.
The experimental setup
The scheme of the experimental setup is shown in figure 1.It includes a channel with height 6 mm, removable substrates, the substrate temperature controlling and the air supply systems.Optical access for visualisation provided through the windows from side walls and the top cover.Ultrapure water obtained with the use of Milli-Q system is used as working fluid.The droplet is placed on surface into the channel with a syringe.The experiments have been carried out at the initial droplet volume of 60-150 l.All experiments were realized on a substrate of stainless steel with polished surface.The Reynolds number of the gas flow was varied from 0 to 7000.The source gas is dry air from special air compressor.The temperature is maintained 25C using a thermoregulation system.The substrate was heated by Peltier elements, installed in contact with the substrate.The substrate temperature during the experiment was regulated with a precision of 0.2C using the PR-59 controller.To register the contour of a sessile droplet the shadow method was used.Its principle is based on the fact that the physical object is illuminated by a parallel light beam, and its shadow is recorded by the camera, as shown in Fig. 2. Optical equipment allows obtaining images with a resolution of 6 µm/pixel (see Fig. 3).The obtained images were processed by different methods with the help of software (the Drop Shape Analysis by KRÜSS).Before the experiments, the hysteresis of the contact angle on the substrate was measured.The advancing angle of wetting was around 80 degrees, and the receding angle was around 50 degrees.Images obtained above droplet were used for calculation of the local velocity and acceleration (see.Fig. 6).
Results
At an increased velocity of the gas flow, a droplet lost symmetry (see Fig. 3), and at some velocity values it started moving over the substrate (see Fig. 6).Flow velocity required for moving droplet decreasing with increasing of droplet volume that evidenced by the data which are presented in Figure 4.It should be noted that increasing temperature of the substrate surface, leads a decrease in the receding contact angle, in other words, the stack of droplet to the surface.One of the explanations that must be taken into consideration is that the viscosity and the surface tension of water significantly depend on temperature.Figure 5 shows the friction force affecting the contact line.It was determined by the formula 2r(cos(A)-cos(R)), where is the surface tension, and the angles A and R, r is radius as are shown in Fig. 2 b).The larger symbols correspond to the values at which the droplet began its motion.In the work [8] the dynamics is studied for different droplets on the silane substrates.The values of velocity required to move droplet and correspondent friction forces in [8] were very similar to those obtained in our work.In our work it was found that with increasing temperature the droplet motion starts at somewhat larger gas flows, and the friction force affecting the droplet on the heated surface, calculated by the above formula, is a few times larger than in the case without heating.A local velocity rate of water droplet was determined using analysis of images, which were recorded from top.After droplet was starting movement, its velocity reaches about 12 mm per second for 50 mm path, as shown in
Conclusion
In this paper, we have carried out a detailed study of the dynamics of a single droplet in the 6 mm channel with gas flow.New data on the dynamics of water drops on a heated surface have been obtained.It has been found that with increasing temperature, the droplet starts to stick to the substrate surface.
The authors express their gratitude to the Russian Science Foundation for the support of this work (project No. 14-19-01755). Flow
Fig. 5 .
Fig. 5. Friction force affecting the droplet contact line, depending on the air flow rate and substrate temperature.
Figure 7 . 4 EPJFig. 6 .Fig. 7 .
Fig. 6.Series of images of the movement of droplet on surface at temperature T = 27 C and T = 50 °C (top view).The arrow indicates the direction of the gas flow. | 1,547 | 2017-10-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Fuzzy Logic in Carbonate Reservoir Quality Assessment : A Case Study from Tarim Basin , China
Received: January 14, 2017 Revised: April 13, 2017 Accepted: May 15, 2017 Abstract: Introduction: To address reservoir quality assessment in highly complex and heterogeneous carbonate reservoirs, a methodology utilizing fuzzy logic is developed and presented in this paper. Based on carbonate reservoir characteristics, three parameters reflecting the macroscopic and microscopic of storage abundance, permeability, and median of pore throat radius were selected to establish the factor set and the evaluation criteria. After analysis of core and test data, a membership function is constructed by semi-drop trapezoid method and the weight formula is also determined by reservoir factor sub-index. The developed method then is used to evaluate a carbonate reservoir in the Tarim Basin in China. Based on the result of single well evaluation, the plane classification map of the carbonate reservoir quality is constructed. Results obtained from reservoir quality assessment in the K32 well show that Ilevel, II-level, and III-level reservoir qualities account for 58%, 37%, 5% of the reservoir, respectively. The results are consistent with the actual production data demonstrating reliability of the proposed method for reservoir quality assessment practices in usually very complex and heterogeneous carbonate reservoirs.
Introduction:
To address reservoir quality assessment in highly complex and heterogeneous carbonate reservoirs, a methodology utilizing fuzzy logic is developed and presented in this paper.Based on carbonate reservoir characteristics, three parameters reflecting the macroscopic and microscopic of storage abundance, permeability, and median of pore throat radius were selected to establish the factor set and the evaluation criteria.After analysis of core and test data, a membership function is constructed by semi-drop trapezoid method and the weight formula is also determined by reservoir factor sub-index.The developed method then is used to evaluate a carbonate reservoir in the Tarim Basin in China.Based on the result of single well evaluation, the plane classification map of the carbonate reservoir quality is constructed.Results obtained from reservoir quality assessment in the K32 well show that Ilevel, II-level, and III-level reservoir qualities account for 58%, 37%, 5% of the reservoir, respectively.The results are consistent with the actual production data demonstrating reliability of the proposed method for reservoir quality assessment practices in usually very complex and heterogeneous carbonate reservoirs.
Background:
Carbonate reservoirs are complex and heterogeneous and this makes their evaluation a difficult task.
Objective:
To overcome the uncertainties associated with evaluation of complex carbonate reservoirs a reliable method to accurately evaluate carbonate reservoirs is presented.
Methods:
Fuzzy logic is used to evaluate a carbonate reservoir from Tarim Basin in China.Based on carbonate reservoir characteristics, three parameters reflecting the macroscopic and microscopic of storage abundance, permeability, and median of pore throat radius are selected to establish the factor set and to evaluate the criteria of carbonate reservoir.After the analysis of core and test data, a membership function is reasonably constructed by semi-drop trapezoid method and the weight formula is also determined by reservoir factor sub-index.
Results:
An effective methodology for the evaluation of reservoir quality in carbonate reservoirs is established by using fuzzy logic.In addition, an example reservoir from China is used to demonstrate the applicability of the developed method.
Conclusion:
Based on the result of single well evaluation, the plane classification map of the carbonate reservoir is constructed.Favorable zones in the reservoir are also delineated.Evaluation results are consistent with the actual production data of gas and oil which proves that
INTRODUCTION
Due to the complexity and heterogeneity of carbonate reservoirs, adequate reservoir evaluation is always a difficult problem.At present, carbonate reservoir evaluation is mainly a single factor evaluation method or multi factor scoring method, but there are limitations [1,2].The single factor evaluation method cannot accurately represent the characteristics of carbonate reservoirs.Multi factor scoring method is also prone to conflict and evaluation result is not very clear [3,4].To overcome these issues in way of adequate reservoir quality assessment mainly caused by uncertainties associated with the reservoir geology fuzzy logic is used to develop an effective method for reservoir quality assessment in carbonate reservoirs.Fuzzy mathematics is not binary logic, but fuzzy logic for analysis and reasoning.It is not the absolute "zero" or the absolute "one", but the interval between "zero" and "one".The process of knowing the geological problem is similar.In accordance with this logic, the principle of merit can be used to make judgments.Fuzzy mathematics in reservoir evaluation can accurately evaluate the corresponding reservoirs, and objectively reveal the original distribution of characteristics of carbonate reservoirs.It also provides a new method for the carbonate reservoir evaluation.
The Middle Ordovician Carbonate Reservoir is the main target stratum of oil and gas exploration and development in the new area of the Middle Tarim Basin in China.It is comprised of shallow marine sediments mainly composed of limestone, dolomite, dolomitic limestone, and dolomite.The carbonate reservoir is very heterogeneous because of tidal effects and also diagenesis.At present, the type of distribution of reservoir parameters is not clear affecting the optimum development of the reservoir [5,6].Due to the combined effect of sedimentation, diagenesis, and tectonics, the reservoir space type, the micro and pore structure and the physical properties vary a lot which leads to huge uncertainties in reservoir quality assessment.
Effectively integration of these factors and objective evaluation of the reservoir are the key elements in determining favorable zones (high quality) in the reservoir.In this paper, fuzzy logic is used to quantitatively evaluate quality of a heterogeneous and complex carbonate reservoir from China.Parameters such as storage abundance, permeability, and median of pore-throat-radius are selected which reflect the microscopic and macroscopic characteristics of the reservoir.This can help to improve the development and management strategy of the reservoir.Well data were used in evaluation of theI-level and II-level reservoirs and the results are compared to the production data.Optimal development of carbonate reservoirs is demonstrated via evaluation of a carbonate reservoir from Tarim Basin in China using the proposed method.
Fuzzy Logic in Reservoir Evaluation
Fuzzy logic only takes 0 or 1 of the specific function in the ordinary set, promotes the (0, 1) interval to obtain the membership function.The membership function is a generalization of the indicator function in a fuzzy set.In fuzzy logic, it represents the degree of truth as an extension of valuation.Degrees of truth are often confused with probabilities, although they are conceptually distinct, because fuzzy truth represents membership in vaguely defined sets, no likelihood of some event or condition.It is the absolute belonging or not belonging to the gradual expansion of flexible relationship [7 -9].Thus, the mathematical methods to deal with the fuzzy concept of the middle transition are very convenient.Because of the uncertainty of fuzzy mathematics, it can determine the occurrence of the event or not.Therefore, it has the advantage of comprehensive evaluation in complex carbonate reservoirs.Carbonate reservoirs are complex and heterogeneous.Fuzzy mathematics can overcome many uncertain factors in the process of reservoir evaluation.As a result, the fuzzy method of reservoir evaluation can be more objective method for evaluation of complex and heterogeneous carbonate reservoirs and finding adequate distribution of reservoir parameters.
According to the principle of fuzzy mathematics, the evaluation model for carbonate reservoirs is established which is composed of three element sets including factors set, evaluation set, and weight set.
Factors set U:
(1) Where u m is the m-th evaluation factor.
Evaluation set V:
(2) Where v m is the m-th judging level of evaluation factor.
Weight set A: Where a m is the m-th Weight of the evaluation factor.
To make sure all the factors play a role in reservoir evaluation, the relationship between the factor set U and the evaluation set V is established.Definition fuzzy mapping R from the factor set U to the evaluation set V is a transformation matrix of the reservoir evaluation.The transformation matrix R is as follows: Where r ij is the membership degree of the i-th factor of the j-th evaluation level.Then, the weight set matrix A by the role of various factors in comprehensive reservoir evaluation and the transformation matrix R of the comprehensive reservoir evaluation are synthesized by fuzzy transformation and the comprehensive reservoir evaluation matrix is obtained: Where B is the evaluation grade of V on rank fuzzy subsets, "○" is the fuzzy operator of two fuzzy matrixes [15], b m is rank v m comprehensive evaluation of the reservoir level of the resulting fuzzy subset B of a membership.On the basis of the maximum membership principle, the level vi of the maximum bi is chosen as the result of the comprehensive reservoir evaluation.Here, the choice evaluation parameter, the structure membership function, and the determination weight are the key factors in carbonate reservoir evaluation with fuzzy mathematics.
Parameters Evaluation and Selection
Because of complex nature of carbonate reservoirs, the factors that affect the productivity of oil and gas wells in the study area are analyzed comprehensively.The parameters which are the most closely related to the carbonate reservoir are selected as the index of comprehensive reservoir evaluation.According to the analysis of carbonate reservoirs in Middle Ordovician of Middle Tarim Basin, three parameters reflecting the macroscopic and microscopic characteristics of storage abundance, permeability, and median pore throat radius were selected.Storage abundance reflects the reserves capacity of the carbonate reservoirs.Permeability reflects the carbonate reservoir's ability to conduct fluids.Median pore throat radius reflects the microscopic pore structure and pore throat size of the carbonate reservoir.These three parameters constitute a set of factors which can provide an accurate and proper understanding of characteristics of the carbonate reservoirs in the study area.
Because of the scattered distribution of the data there is high variance in the database.If the original data is directly used, the role of the orders of magnitude larger of indicators will be highlighted, and the role of the orders of magnitude smaller of highly sensitive indicators will be weakened.Therefore, it is necessary to normalize the original data [10 -12], which can be normalized to the same range.The normalization formula is as follows:
Where
is the data after normalization, x ij is the j-th parameter of the i-th sample in the original data, n is the total number of samples.
Structure Membership Function
How to construct the expression of the membership function is a theoretical problem that has not been solved by the fuzzy mathematics yet.After statistical analysis and preprocessing of the reservoir evaluation parameters of the study area, semi-drop trapezoid method is adopted to construct membership function of the three selected evaluation indexes to establish three evaluation grades for reservoirs: I-level reservoir, II-level reservoir, and III-level reservoir.The formula is as follows: Where t is the actual value of the evaluation parameter.r I , r II and r III are the standard values of the comprehensive reservoir evaluation of I-level reservoir, II-level reservoir, and III-level reservoir.Thus, the membership function values of the evaluation factors are obtained.
Determination of the Weight of Fuzzy Sets
The weight reflects the role of the evaluation factors in the comprehensive reservoir evaluation.At present, in practical application, the weight uses the scoring method [13 -15].To avoid the influence of subjective factors based on the statistics of the actual data in the study area reservoir factor sub-indexes are used to quantitatively determine the weights of fuzzy sets.The determined weights of the fuzzy sets are then used to weight the correlation coefficient between one factor and other factors.The calculation formula is as follows: Where A i is the i-th weight value of the reservoir evaluation factor, C i is the i-th actual value of the reservoir evaluation factor, S i is the i-th weighted average value of three standard levels of reservoir evaluation factor, A is the weight set of the reservoir evaluation factor.After each individual weight value standard, the weight value is limited to (0, 1).Thus, 3 evaluation indexes are obtained, which are composed of 1×3 order weight matrix A.
RESERVOIR EVALUATION RESULTS
Based on the above ideas and fuzzy logic method of evaluation of carbonate reservoirs, the Middle Ordovician carbonate reservoir in Middle Tarim Basin is used as an example.
Establishment of Factor Set and Evaluation Set
The factor set of the evaluation object is established after stepwise regression analysis: Where E sa is the storage abundance, K is the permeability, R mpt is the median pore throat radius.
Taking the K32 well as an example, the actual value of the reservoir parameters shows that the reservoir energy abundance is 0.57 m 2 , the permeability is 8.6×10 -3 μm 2 , and the median pore throat radius is 0.53 μm.The factor set is constructed: U={Esa, K, Rmpt} UK32 ={0.59, 8.7, 0.51} According to the characteristics of the carbonate reservoir in Middle Ordovician of middle Tarim area, the corresponding evaluation set is established: Carbonate reservoir evaluation standards of the three parameters for the studied reservoir are presented in Table (1).Based on the analysis of core and test data of the carbonate reservoir, the carbonate reservoir is divided into three levels in terms of reservoir quality: I-level reservoir as good quality reservoir, II-level reservoir as medium quality reservoir, and III-level as poor quality reservoir.
Formation of Fuzzy Relational Matrix
Based on the evaluation standards presented in Table (1), the actual parameter values used to constitute the factor set U K32 are substituted in equations ( 7), ( 8) and ( 9) to determine the evaluation set V. The membership function value of comprehensive reservoir evaluation for each single factor is then obtained.From this, the transformation matrix R K32 of K32 well is derived: In the same way, the transformation matrix of the other wells can be calculated.
Weight Set Determination
The weight should reflect the difference in material property and the difference between people's understanding weight or importance of different parameters.The weight value is calculated using equations (10) and (11) and the weight set is determined by the weight of each factor.Weight values for the K32 well are presented in Table (2).Thereby, the weight set of K32 wells is determined as the following:
Reservoir Quality Evaluation
After the calculation of R and A, the results are obtained by the formula (5).The level of quality of the carbonate reservoir is evaluated according to the principle of maximum membership degree.The transformation matrix R by the single factor membership function values obtained and the weight set A by weight value of each single factor obtained for K32 well and then substituted into the formula (5) to get the fuzzy matrix.Using the fuzzy synthetic operator comprehensive evaluation of the carbonate reservoir quality in K32 is conducted: Results obtained from reservoir quality assessment in the K32 well shows that I-level, II-level, and III-level reservoir qualities account for 58%, 37%, 5%, respectively.Depending on the principle of the maximum membership, the evaluation result of K32 well is I-level reservoir distribution area.In a same way, the results of reservoir
DISCUSSION
Fuzzy logic reservoir evaluation can effectively integrate various reservoir parameters to avoid the uniqueness and inconsistency of single index evaluation method.The results of quantitative evaluation of carbonate reservoirs compare with the actual reservoir capacity in a Middle Ordovician Carbonate Reservoir in the middle Tarim Basin, in China.Representative production data is presented in Fig. (4).As one can see, production from wells K67 and K32 is higher than other wells by over 100 m 3 per day.Wells K22, K53, and K46 produce 30-70 m 3 /day and production from the well K17 is poor.It can be concluded that the methodology developed and the zones delineated are consistent with the production data.
Taking the carbonate reservoir of the Middle Ordovician in the Middle Tarim Basin as a case study, the complex carbonate reservoir is evaluated using the fuzzy logic analysis.The storage abundance, permeability, and median of pore throat radius are selected as the factors controlling reservoir characteristics.The carbonate reservoir is evaluated by constructing a membership function and determining weight of the fuzzy sets.The distribution of favorable reservoir is plotted by a single well evaluation result which reveals the distribution law of the reservoir.The results of fuzzy quantitative evaluation of the complex carbonate reservoir are consistent with the reservoir capacity.The classification of the reservoir into different zones in terms of reservoir quality can be used as the basis of optimal and effective reservoir development and management.
A
={0.43, 0.22, 0.35} B = A ○ R ={0.58 0.37 0.05} comprehensive evaluation in other wells can be obtained.The results obtained from comprehensive reservoir evaluation of representative wells of K17, K22, K32, K46, K53, and K67 in Middle Ordovician of middle Tarim area are presented in Table(3).The evaluation results are used to prepare a contour map for each level and different levels of regions are delineated.Figs.(1 to 3) show the I-level, IIlevel, and III-level reservoir qualities in the study area, respectively.
Fig. ( 1
Fig. (1).I-level reservoir distribution of reservoir evaluation in Middle Ordovician, contour for I-level reservoir, filling graphical range for I-level reservoir.
Fig. ( 2
Fig. (2).II-level reservoir distribution of reservoir evaluation in Middle Ordovician, contour for II-level reservoir, filling graphical range for II-level reservoir.
Fig. ( 3 )
Fig. (3).III-level reservoir distribution of reservoir evaluation in Middle Ordovician, contour for III-level reservoir, filling graphical range for III-level reservoir. | 3,959.2 | 2017-10-12T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Synchrotron radiation response characterization of coplanar grid CZT detectors
Commercial 15times15times7.5 mm3 coplanar grid CdZnTe detectors were studied on the micron-scale using a collimated high-energy X-ray beam provided by Brookhaven's National Synchrotron Light Source. This powerful tool enables simultaneous studies of detector response uniformity, electronic properties of the material, and effects related to the device's contact pattern and electric field distribution. The availability of a front-end Application Specific Integrated Circuit, developed at Brookhaven's Instrumentation Division, providing low noise amplification of grids and cathode signals, corresponding timing signal and adjustable relative gain, allowed to correlate performance mapping and fluctuations in collected charge. We observed the effect of the strip contacts comprising the coplanar grids on the energy resolution of the coplanar-grid device
I. INTRODUCTION 1 HE coplanar grid sensing technique [1] has shown a considerable enhancement in the spectral performance of large volume CdZnTe detectors removing the limitations due to poor hole collection and providing an adequate correction for electron trapping. This technique, combined with recent advances in CdZnTe manufacturing, yields large-volume high-resolution room temperature gamma-ray sensors for a wide range of applications such as nuclear material monitoring, radioisotope identification, gamma-ray astronomy, and medical diagnostics [2,3,4,5]. Coplanar grid technique can potentially provide an energy resolution of less than 1% FWHM at 662 keV for a cubic cm device [6]. However, the actually measured resolution, typically more than 2%, is still far from the statistical limit calculated based on the Fano factor [7]. In general several factors can limit the energy resolution of these devices: material non-uniformity, device geometry, surface effects, electronic noise, electron trapping, edge effects, etc. Many of these deleterious effects are not fully understood. In this work, we performed a micron-scale characterization of several commercial coplanar-grid devices with a goal to investigate the effect of the electrodes configuration on the device response uniformity.
II. EXPERIMENTAL SETUP
A low-noise low-power application specific integrated circuit (ASIC) developed at Brookhaven's Instrumentation Division in collaboration with Los Alamos National Laboratory was employed to read out the signals from the coplanar grid devices [8]. The commercial detectors acquired from eV Products were first evaluated by using a 137 Cs (662 keV) source to determine the optimal operating biases required on the device electrodes ( Fig. 1). Relative gain compensation method [9] has been employed to achieve the best energy resolution. Electronic noise contribution was evaluated yielding, with the detector connected, an equivalent noise charge (ENC) of 730 e -(7.9 keV). A typical interanode grid capacitance and an anode-cathode plus anodeground capacitance on the order of 15pF and 4pF were measured respectively. Then, the detectors were studied at the X12A beam-line.
During the scan, we varied the bias applied on the cathode and use different grids/cathode voltage ratios. The lowest cathode bias was 600 V with a corresponding bias on the non-collecting grid of 30 V. The highest applied biases were 1000 and 75 V respectively.
A schematic of the experimental set up is shown in Fig. 2. The beam-line could be configured as a monochromatic beam with photon energies up to 50 keV or as a white beam with photon energies up to 100 keV. We used a pseudomonochromatic beam produced by attenuating the white beam with a lead filter. The corresponding energies of the photons had a Gaussian-like distribution centered around 80 keV with ~7 keV FWHM. The data acquisition system included a multi-channel analyzer (MCA) to accumulate pulse-height spectra, a digital oscilloscope to store waveforms, and standard NIM electronics. To calibrate the spectroscopy electronics we used a standard 241 Am source. A SPEC [10] macro (a UNIX based software package for instrument control and data acquisition developed for X-ray diffraction) controls a X-Y stage and the data acquisition.
The detector, mounted inside a test-box, was placed on X-Y translation stage, with the cathode oriented perpen- Several raster scans, with typically less than 100 µm step size in both directions, were performed. For each point, a pulse-height spectrum was collected during a 3 seconds time interval. Due to the high brightness of the beam it was possible to accumulate spectra with good statistics in such short period of time.
III. RESULTS AND DISCUSSION
Gaussian fitting was applied to evaluate the peak position, FWHM, and total number of counts for each pulseheight spectrum generated during the scan (Fig. 3). The peak position is directly related to the total collected charge produced by the incident photons. Usually the nonuniformity of the device response is attributed to the nonuniform distribution of the traps inside a CdZnTe crystal. In this work, we investigate other effects that may also contribute to the response non-uniformity of the device. Our primary goal was to understand the role of the strips comprising the coplanar grids. 4 shows the variations in the collected charge which precisely correlate to the location of the coplanargrid contacts. The signal is higher when the X-ray beam is pointed over the collecting electrodes and lower when the beam is over the non-collecting ones. Similar behavior was observed with all detectors used in these measurements. position in the contact pattern. We found that the typical peak-to-peak difference averaged over a 3x3 mm 2 area is around 1.6%. This value slightly changes when the cathode voltage was increased from 600 to 1000 V. For example, the detector (for which we achieved with best resolution at the cathode and grid biases of 1000 V, 75 V respectively and relative gain G=0.86) a peak-to-peak difference was found to be 1.75%. When the biases were reduced to 800 and 60 V the peak-to-peak difference was found to be 1.78%. with the same relative gain value. Fig. 6. Electric field fine distribution near the strips. We assume that potential changes as linear function in the gap between the strips.
It should be mentioned, that similar variations in the device response were also observed with pixel [11], driftfield [12], coplanar-grid [13], and other devices that employ steering electrodes.
Two effects can be responsible for the observed variations. The first is related to the different length of the passes traveled by the electron clouds from the points of interactions to the collecting grid. Indeed, most of the photons interact close to the cathode. However due to the charge steering effect by the non-collecting electrode (see Fig. 6), the electron clouds have different travel lengths. The longer travel length the greater fraction of the electrons is lost due to trapping. This results in non-uniformity of collected charge. Similar explanations were also considered in Ref. 13 to explain the response variations measured with 1 cm 3 coplanar-grid device.
It is clear, however, that the above explanation cannot entirely explain the observed variations especially in the case of thick detectors where the difference in the path length (for X-rays interacting close to the cathode) becomes very small. Indeed, for a 7 mm thick crystal and the contact pattern shown in Fig. 6, the calculations predict the charge loss less than 1% for the paths originating at the cathode above the middle of the collecting and non-collecting strips (1000 V is on the cathode and 75 V is on the non-collecting grid). Moreover, the electron diffusion and electron cloud broadening makes this difference even smaller. The second effect that can cause the observed non-uniformity response is the charge loss at the surface between the strips. As shown in Fig. 6, even at high differential bias between the grids the field lines originated at the cathode intersect the surface between the strips. Hence, the electrons can reach the surface which has different electronic properties than CdZnTe bulk. The electron mobility at the surface is less while the concentration of the traps is high. As a result, some fraction of the charge is lost in the gaps between the strips, which gives variations in the device response.
Other non-uniformities of the contacts itself are exhibited in Fig. 7 that shows a scan made along the two strips of a detector. Such a behavior is probably related to the properties (resistivity, trapping levels, etc.) of the surface areas separating the strip contacts. In fact CdZnTe is very sensitive to surface effects and surface properties strongly influence detector performances [14]. Similar effect was reported by F. Zhang et al. [15].
IV. CONCLUSIONS
We performed the X-ray scans of the commercial coplanar grid detectors with micro-scale resolution. We found strong variations of the detector responses that correlate directly with the contact patterns of the devices. The amplitudes of the output signals diminish when the X-ray beam is pointed above the areas of the non-collecting strips. These reductions in amplitudes, which fluctuate over the detector area, affect the energy resolution of the device. The magnitude of the fluctuation was evaluated to be ~1.6%, which might explain the energy resolution limit typically measured with the commercial coplanar-grid devices 2.3% FWHM at 662 keV. New measurements with a smaller beam size (down to 10x10 µm 2 ), and waveform analysis of the collected signals are planned to better understand the cause of the variations of the collected charge and whether it may be an intrinsic effect that limits the energy resolution of coplanar-grid devices.
V. ACKNOWLEDGMENT | 2,126.6 | 2004-10-16T00:00:00.000 | [
"Physics"
] |
Unexpectedly High Capacitance of the Metal Nanoparticle/Water Interface: Molecular‐Level Insights into the Electrical Double Layer
Abstract The electrical double‐layer plays a key role in important interfacial electrochemical processes from catalysis to energy storage and corrosion. Therefore, understanding its structure is crucial for the progress of sustainable technologies. We extract new physico‐chemical information on the capacitance and structure of the electrical double‐layer of platinum and gold nanoparticles at the molecular level, employing single nanoparticle electrochemistry. The charge storage ability of the solid/liquid interface is larger by one order‐of‐magnitude than predicted by the traditional mean‐field models of the double‐layer such as the Gouy–Chapman–Stern model. Performing molecular dynamics simulations, we investigate the possible relationship between the measured high capacitance and adsorption strength of the water adlayer formed at the metal surface. These insights may launch the active tuning of solid–solvent and solvent–solvent interactions as an innovative design strategy to transform energy technologies towards superior performance and sustainability.
An overview of electrical double-layer components
When two conducting phases, such as an electrode and an electrolyte, come into contact, their fermi levels equilibrate and a potential is set up across the interface. Depending on this potential and the composition of the solution, one phase becomes negatively charged and the other one becomes positively charged and an electrical double-layer is developed at the electrode-solution interface. In the first model proposed by Helmholtz for the structure of the electrical double-layer, it is considered that counter-charges in the solution locate at a molecular order distance from excess charges on the electrode surface. 1 Such a structure resembles a parallel-plate capacitor, whose capacitance is described by: where is the dielectric constant of the medium, 0 is the permittivity of free space, is the surface area of the electrode, and is the distance between the two sheets of charges. As apparent by this equation, the Helmholtz model predicts a constant capacitance for the double-layer, however, experimental results show that it changes with potential and concentration. Accordingly, either or or both should depend on potential and concentration. Gouy 2 and Chapman 3 by considering the fact that in contrast to charges on the electrode, the ions in the solution are not confined at a specific location close to the electrode, suggested a diffuse layer of ions in the solution. The behavior of this layer is described by the Poisson-Boltzmann equation and its thickness is essentially determined by an interplay between the tendency of the charges on the electrode to attract or repel the ions in the solution, according to their polarity, and the tendency of thermal motions to scatter them. Adjacent to the electrode, where electrostatic forces are usually able to overcome the thermal processes, the greatest concentration of counter-ions would be found, and at greater distances as the electrostatic forces become weaker, progressively less concentrations would exist. Thus, an average distance of charge separation replaces in the capacitance expression. This average distance depends on potential and electrolyte concentration. At higher electrode potentials and/or at higher electrolyte concentrations the diffuse layer becomes more compact and, hence, the double-layer capacitance rises. 4 This model predicts a U-shaped capacitance-potential function which resembles the observed behavior at low concentrations of nonadsorbing electrolytes and at potentials not too far from the potential of zero charge. However, experiments show a flattening at high potentials, and at high electrolyte concentrations. This discrepancy is due to considering the ions as point charges in the Gouy-Chapman model, allowing them to approach the surface arbitrarily closely. Therefore, at high potentials, the effective separation distance between the charge carriers at the electrode and in the solution decreases continuously toward zero. In reality, however, ions have a finite size and cannot approach the surface any closer than their ionic radius and if they remain solvated, the thickness of their primary solvent shell would also be added to that radius. Accordingly, Stern 5 considered a plane of closest approach for the centers of the ions at a distance from the electrode equal to their hydrated radius, named outer Helmholtz plane (OHP), and suggested that the Poisson-Boltzmann equation is only held at distances larger than this plane location. Therefore, the double-layer capacitance is made up of two components in series: one corresponds to the capacitance of the charges held at the OHP ( OHP), and the other is the capacitance of the charges at the diffuse layer ( diffuse ), as shown in Figure S1 (a) and Figure S2 (a), and its capacitance is described by: Thus, the smaller of the two components governs the double-layer behavior. At high potentials or in highly concentrated electrolytes, the ions in solution become tightly compressed against the OHP, and the whole system resembles the Helmholtz model. Whereas, at low potentials or at low electrolyte concentrations, the double-layer structure approaches that of the Gouy-Chapman model. Relying on this concept, Graham 6 determined the concentration independent capacitance of the Helmholtz layer as a function of potential on a mercury electrode in a highly concentrated NaF solution as an electrolyte which does not significantly adsorb on the electrode. By knowing the constant Helmholtz capacitance ( OHP ) and measuring the double-layer capacitance at lower electrolyte concentrations, he calculated the potential and concentration dependent diffuse layer capacitance value ( diffuse ), This agreed with electrocapillary measurements, confirming the validity of the Gouy-Chapman-Stern (GCS) model. However, the prerequisite for Graham's experiments was the use of non-adsorbing electrolytes which could not always be hold. In the GCS model, only long-range electrostatic effects are included as the basis for accumulating the counter-ions in the solution phase and it is considered that the charge density at any point from the electrode surface to the OHP is zero and, hence, potential profile in this layer is linear and its capacitance is independent of potential. However, in real systems this is not always true. Esin and Markov, 7 Grahame, 8 and Devanathan 9 took into account that some ions (especially large anions) can lose their hydration shell and specifically adsorb on the electrode (physical adsorption or chemisorption), forming a layer between the electrode surface and the OHP. The locus of centers of these unhydrated ions strongly adsorbed to the electrode is called the inner Helmholtz Plane (IHP), shown in Figure S2. Therefore, as derived by Devanathan, the double-layer capacitance would be described by: where IHP and OHP are the capacities of the space between the electrode and the IHP and between the IHP and OHP, diffuse is the capacitance of the diffuse layer, and ⁄ represents the rate of change of the specifically adsorbed charge with charge on the metal, respectively. In the models discussed above, the structure of the double-layer was described only based on the interfacial charge characteristics of the electrode and of the ionic species in the electrolyte. However, polar solvents like water also contribute to the potential drop across the electrode/electrolyte interface. Hence both, solvent molecules and electrode materials affect the double-layer structure and capacitance, in line with the observations made in our present study. Accordingly, Bockris, Devanathan, and Muller suggested a model for the structure of the double-layer in which there is a strongly held and oriented layer of water molecules attached to the electrode due to the strong interaction between the charged electrode and water dipoles. 10 The complete orientation of water molecules leads to a lower dielectric constant for this layer. However, besides the full polarization of the dielectric under the strong interfacial field in the inner layer, theoretical simulations show that the strong interaction between the electrode and adsorbed water molecules disturbs their hydrogen bonding with the next water layer. 11 This in turn can lead to ion accumulation in close vicinity of the electrode, beyond the classically considered electrostatic and thermal forces mentioned above, as shown in Figure S1 (b) and Figure S2 (b). Along with the static double-layer capacitance discussed above, faradaic pseudocapacitance can also form by a very fast reversible faradaic electron transfer between an adsorbate and electrode, such as hydrogen underpotential deposition (H-UPD) and surface oxide formation/reduction. The double-layer capacitance and pseudocapacitance behave like two capacitors in parallel, 12 thus, the overall interfacial capacitance would be the sum of these two: interface = EDL + pseudo Figure S1. Schematic representation of the electrode/electrolyte interface at negative applied potentials, the potential profile over distance from the electrode, and the equivalent circuit, based on (a) the traditional mean-field models, (b) the modern considerations supported by our experimental findings including strongly adsorbed water molecules at the electrode surface (in red-white) which lead to higher accumulation of ions in close vicinity of the electrode, thus, a shorter Debye length. The yellow ball represents electroactive adsorbates forming the pseudocapacitance. Counter ions accumulated at the OHP are shown in dark orange and ions in the diffuse layer in light orange. Figure S2. Schematic representation of the electrode/electrolyte interface at positive applied potentials, the potential profile over distance from the electrode, and the equivalent circuit, based on (a) the traditional mean-field models, (b) the modern considerations supported by our results including strongly adsorbed water molecules at the electrode surface (in red-white) which lead to higher accumulation of ions in close vicinity of the electrode, thus, a shorter Debye length, and chemisorbed water molecules (in green-white). The blue ball represent electroactive adsorbates forming the pseudocapacitance. Specifically adsorbed desolvated anions are shown in dark purple, counter ions accumulated at the OHP in medium purple, and ions in the diffuse layer in light purple.
In the present study, suspended nanoparticles in the electrolyte colliding with a potentiostated electrode, establish an electrical connection and equilibrate their potential with it. The nanoparticle/electrolyte interface structure and composition before impacting is different with that after the collision due to the shift of the nanoparticle potential (potential difference between the phases). The interface includes specifically adsorbed capping ligands on some of the surface sites of the nanoparticle, strongly adsorbed water mole cules and counter ions in the inner layer, counter ions accumulated between the adlayer-water and the next layer of water molecules, and ions in the diffuse layer. Also, at potentials positive of PZC chemisorbed water molecules contribute to the interface composition, and desorb at negative potentials leading to an increase of the metal (nanoparticle) work function and consequently to electron transfer for maintaining the electrode potential. Therefore, the charge transfer during water chemisorption/desorption is different from pseudocapacitive behavior, where one electron per charge unit is transferred between the adsorbate and electrode.
The charge measured in capacitive nano-impact experiments is the transferred charge due to the rearrangement of the interface upon the particle electrode collision, i.e., the difference of charge stored at the interface before and after impacting. In the applied potential range, this rearrangement could include ion accumulation/depletion, water chemisorption/desorption, andin principlealso pseudocapacitive phenomena such as: (i) desorption of the capping ligands, (ii) H-UPD, and (iii) surface-oxide reduction. We will reason below why we can exclude any of these three pseudocapacitive contributions as the cause of the detected high capacitance. (i) Upon particle collision at the electrode, depending on the applied potential, capping ligands may be desorbed. However, no significant difference is observed in the charge-potential behavior of citrate capped Pt nanoparticles and 3-mercaptopropionic acid capped Pt nanocubes (see main text Fig. 4). Thus, we conclude that the nature of capping ligand does not influence the quantity of transferred charges in this study. Moreover, no deflection was observed in the charge-potential linear plot in the negative potential range, where desorption of the capping ligands could be expected. Hence, it can be assumed that either no desorption of the capping agent occurs or if desorption of the capping species takes place, it occurs across the full potential range tested. In the latter case, ligand desorption would add a constant contribution to each of the measured charges at all applied potentials and, hence, would not affect the slope (that is the obtained capacitance), but would only shift the charge-potential line along the y-axis. (ii) H-UPD might take place on Pt at potentials of 0.35 V vs RHE and below, that is below −0.05 V vs Ag/AgCl, 3 M KCl RE at pH 3.2. 13 Provided that the slope of the impact charge vs potential plot showed no deflection from linear behavior in the full potential range tested (+0.05 V to −0.3 V vs Ag/AgCl, 3 M KCl RE, see Figs. 3 and 4 in main text), it can exclude H-UPD as the origin of the high measured capacitance. (We can only speculate on the reason of this absence of H-UPD, which might be attributed to the presence of the capping ligands, in line with the fact that we did also not detect H-UPD at ensembles of surface immobilized Pt nanoparticles, see Figure S3). (iii) As inferred from the potential of suspended nanoparticles in the solution, determined by the extrapolation of the charge-potential linear plots to the potential at which no charge transfer occurs upon impacts, and as confirmed by XPS, surface of the used PtNPs is partially covered by surface oxides. These oxides can be reduced upon impact of the nanoparticle at the electrode held at a reductive potential. Since the potential window of our nano-impact analyses is more negative than the potential required for platinum oxide reduction (ca. 0.8. V vs RHE), 14 the contribution of transferred charge due to this process would be constant for each measured impact charge transferred at the different potentials; therefore, it does not affect the slope of the charge-potential plot and the capacitance derived from this.
Accordingly, the slope obtained by linear-fitting of the measured charge vs applied potential represents the static double-layer capacitance of impacting nanoparticles, raising from two contributions: ion accumulation at the interface and desorption of the chemisorbed water at negative potentials.
By looking at the density fluctuations at the metal-water interface in the absence of ions, classical MD simulations predict the way these fluctuations can promote ion adsorption through decreasing the free energy cost to form a cavity that can accommodate ions at the interface. The process of solvating an ion (or any other solute) may be decomposed in two steps: (1) creating a cavity of the right size to accommodate the ion in the liquid, accompanied by a free energy cost of cavity formation that depends on the water structure and not on the nature of the ion; (2) filling the cavity with an ion, accompanied by a free energy gain (otherwise the ion would not be soluble) due to the attractive interactions the ion makes with water that depend on the ion nature. Using this framework, it has been shown that the water structure in the adlayer can promote ion accumulation by lowering the free energy cost of step (1) compared to that in the bulk water, since creating cavities is easier at the interface than in the bulk. From constant-potential classical MD simulations theoretical differential capacitance values are calculated from the fluctuations of the total charge on the electrode surface, using the fluctuation-dissipation relation introduced in Refs. 15 Such capacitance values incorporate the effect of ion adsorption at the interface as well as of the interfacial water network. A recent work has for example shown that specific ion adsorption at gold-water interfaces can induce very large differential capacitance values, on the order of 100 μF/cm 2 . 16 However, common classical MD simulations do not include effects like water chemisorption, which can also contribute to large differential capacitance values, as shown by recent ab-initio MD simulations of Pt/water interfaces 15 and supported by our experimental findings.
Electrochemical measurements
All electrochemical measurements were performed in a three-electrode configuration comprising a Ag/AgCl, 3 M KCl reference electrode and a platinum counter electrode, placed inside a double Faraday cage to minimize electronic noise. The reference electrode was equipped with a double junction filled with the used electrolyte solution to prevent chloride contamination of the electrolyte. The solution in the double junction was refreshed after each set of experiments. In this work, 5 mM sodium citrate buffer with pH 3.2 (prepared by mixing tri-sodium citrate dehydrate (99.5%) and citric acid monohydrate (100%) supplied by VWR Chemicals Co., USA) was used as the electrolyte. All solutions were prepared with Millipore water (Thermo Scientific Barnstead Gen-Pure xCAD Plus, 0.055 µS cm −1 at 25 ºC). Prior to each set of experiments the electrolyte was deaerated by purging Ar for 20 min. Nano-impact studies were performed by chronoamperometry in the presence of citrate-capped, raspberry-like PtNPs (Nanoxact, 0.05 mg ml -1 platinum in 2 mM sodium citrate, purchased from NanoComposix Inc., USA) of ca. 30 (30 ± 3) nm and 50 (46 ± 5) nm in diameter, 3-mercaptopropionic acid-capped cubic PtNPs of ca. 18±1 nm edge length, and ascorbate-capped, spherical AuNPs of ca. 46±5 nm in dimeter. For nano-impact experiments, as a working electrode either a carbon fiber ultramicroelectrode (Ø = 7 μm), a platinum ultramicroelectrode (Ø = 10 μm), or a gold ultramicroelectrode (Ø = 12.5 μm) was employed. Before use, they were freshly polished with 1.0 µm, 0.3 μm and 0.05 μm Al2O3 slurry and thoroughly rinsed. Chronoamperometric nano-impact measurements were done by a potentiostat (VA-10X, NPI Electronics GmbH), equipped with a threeelectrode pre-amplifier and connected to a personal computer through high-speed DA/AD data acquisition cards (DA card: USB-3101FS, AD card: USB1608FS-Plus, Measurement Computing Corp). To filter the electronic noise without altering the spike charge, a 1 kHz eight-pole low-pass Bessel filter was applied. 16 Electrochemical measurements were sampled at a data acquisition rate of 10 kHz. Multistep chronoamperograms were obtained by increasing the potential sequentially in 50 mV steps and holding for 20 s at each value. These showed well-separated transient current features, herein referred to as "spikes", in the presence of nanoparticles. The individual charge associated to each of those spikes was determined using SignalCounter software (provided by Dr D. Omanovic, Ruder Boscovic Institute Zagreb, Croatia). The mean charge and standard deviation were determined by considering at least 50 impacts for each applied potential. The total measurement time was limited to less than 10 minutes for each set of experiments. During this time period, no agglomeration of particles was detected in the used electrolyte, as confirmed by Dynamic Light Scattering measurements conducted by a Wyatt DynaPro NanoStar (Wyatt Technology, USA) instrument.
To determine the suitable potential window for PtNPs nano-impacts, 5 μl of 30 nm PtNPs was dropcast on a freshly cleaned glassy carbon electrode, GCE, (Ø =25 mm) polished with 1.0 µm, 0.3 μm and 0.05 μm Al2O3 slurry and thoroughly rinsed with ultrapure water, and dried under a mild Ar stream. This was used as the working electrode to perform cyclic voltammetry. The experiment was done in 5 mM deaerated sodium citrate buffer of pH 3.2 as the electrolyte. Cyclic voltammetry was performed with a PalmSens 3 (PalmSens BV, Netherlands) potentiostat from +0.60 V to −0.30 V and from +0.60 V to −0.40 V vs. Ag/AgCl (+1.00 V to +0.10 V and +1.00 V to 0.00 V vs. RHE) with a scan rate of 0.025 Vs −1 .
Synthesis of PtNCs
Materials. All chemicals were used as purchased unless otherwise specified. Cyclohexanone (99 %), dodecylamine (99 %), 3mercaptopropionic acid (99 %), octylamine (99.5 %), oleic acid (99 %), platinum acetylacetonate (99.98 %), sulfuric acid (95 %), tetramethylammomium hydroxide pentahydrate (95 %) were purchased from Merck. Ethanol (90 %) and diphenyl ether (99 %) were commercially available from Alfa Aesar and used as received. All heating steps described in the following were performed using silicon oil baths on IKA C-MAG HS 7 magnetic stirred/hotplates. All centrifugation was performed in 15 mL centrifuge tubes (unless otherwise stated) using a rotor with a radius of 11 cm. General Strategy for PtNC Synthesis. The synthesis of single crystalline Pt-nanoparticles with cubic crystal habit was based on the GRAILS (gas reducing agent in liquid solution) method of Yang and coworkers, with minor modifications. 19 Details of the method used herein are included below.
Preparation of Hot Injection stock solution.
Pt(acac)2 (0.200 g, 5.09 x 10 -4 moles), dodecylamine (10.0 mL, 8.06 g, 4.34 x 10 -2 moles), and oleic acid (0.5 mL, 0.448 g, 1.58 x 10 -3 moles) were added into a 20 mL scintillation vial equipped with a 10 mm Teflon coated stir bar in an N2 filled glovebox. The vial was sealed with a septum-screwcap, and vortex mixed until homogeneous (2 minutes). Subsequently, the vial was heated to 80 o C in a thermostated oil bath at 200 RPM for 30 minutes under vacuum to remove any adventitious moisture/oxygen, before being backfilled with Ar using standard Schlenk line technique. Before injection to the reaction solution, the hot injection stock was heated to 150 o C, to control the size of Pt-nanocubes by modification of nucleation rate as a function of reaction temperature.
Preparation of Carbon Monoxide Gas.
Carbon monoxide gas was produced on demand by the dropwise addition of formic acid to sulfuric acid at 50 o C. This method was preferred over the use of a carbon monoxide gas cylinder, as the gas-phase reducing agent could be produced on demand in small quantities, with rate tunable by formic acid addition rate and sulfuric acid temperature.
We emphasize the importance of performing all CO preparations in a well-ventilated hood in the presence of a carbon monoxide detector. Additionally, before CO production is initiated, one should ensure both the CO production vessel and the reaction vessel (to which CO will be transferred by cannula) are under positive Ar-flow using standard Schlenk line technique. In such a manner, all unreacted CO gas is directed into the Schlenk line and subsequently vented into a well-ventilated chemical hood.
For this preparation, concentrated sulfuric acid (10 mL) was added to a 50 mL three neck round bottom flask equipped with a reflux condenser, ½" Teflon coated stir bar, and rubber septa. The contents of the flask were then heated to 50 o C in a thermostated oil bath at 300 RPM under Ar flow using standard Schlenk line technique. Before CO gas production, the flask was equipped with a vent needle connected to a drying tube filled with activated 3 Å molecular sieves, to remove any adventitious moisture produced in the formic acid dehydration reaction. To initiate the production of CO gas, concentrated formic acid was added dropwise to the sulfuric acid solution at a rate of 50 μL/min, which resulted in immediate mild bubbling in the reaction flask, indicative of production of CO. General synthesis of PtNCs. Dodecylamine (5.0 mL, 4.03 g, 2.17 x 10 -2 moles) and diphenyl ether (0.500 mL, 0.484 g, 2.84 x 10 -3 moles) were added to a 15 mL three neck round bottom flask equipped with a reflux condenser, ½" Teflon coated stir bar, and rubber septum. The contents of the vial were heated to 80 o C under vacuum for 30 minutes to remove any adventitious moisture/oxygen, followed by backfilling with Ar using standard Schlenk line technique. The contents of the vial were then heated to 210 o C in a thermostated oil bath at 300 RPM. Once at 210 o C, the reaction solution was bubbled with CO gas for 15 minutes before the hot injection stock solution (5.00 mL) at 150 o C was injected via a glass syringe to the reaction flask at 210 o C. The reaction was allowed to proceed for 30 minutes after injection under continual CO-flow before being quenched by removal from the oil bath. At this point, CO-production was ceased by stopping the formic acid addition and removing the CO-production flask from the oil bath. Post synthesis. A single workup cycle was employed to remove excess diphenylether, dodecylamine, oleic acid, and Pt-precursor from the synthesized nanoparticles. For this, the crude reaction mixture was diluted with toluene (5.0 mL) and added to a 15 mL centrifuge tube, followed by addition of ethanol (5.0 mL) to precipitate the nanocrystals. Subsequently, the dispersion was centrifuged at 1250 RPM for 12 minutes to result in a clear/colorless supernatant and a light grey and distributed pellet consisting of the PtNCs. The supernatant was discarded, and the pellet was immediately carried through to ligand exchange as described below.
Ligand Exchange of PtNCs. The method used herein was developed based on previously reported FePt ligand exchanges, which take advantage of the sulfophilic nature of Pt-surfaces to enable exchange of native carboxylate and amine ligands for waterdispersable alkylthiols. 20 Cyclohexanone was chosen as the solvent for the ligand exchange as it both dissolves 3-MPA and disperses the as synthesized PtNPs. First, a solution of octylamine (0.5 mL, 0.391 g, 3.03 x 10 -3 moles) in n-hexane (9.50 mL) was prepared (0.303 M octylamine), and the purified nanocrystal pellet was dispersed in this hexane/octylamine stock solution (2.5 mL) via sonication (10 minutes) and subsequent vortex mixing. The octylamine was added to prevent aggregation of the PtNPs upon re-dispersion. Next, in a 4 dram scintillation vial equipped with a 10 mm Teflon coated stir bar, 3-MPA (0.500 mL, 0.609 g, 5.72 x 10 -2 moles) was added followed by cyclohexanone (0.5 mL), resulting in a 5.72 M 3-MPA ligand exchange stock solution. Next, a portion of the PtNP in hexane/octylamine was pipetted on top of the MPA/cyclohexanone layer (0.25 mL), resulting in a biphasic system. The dispersions were allowed to stir at 50 RPM at room temperature for 30 minutes, by which point a homogeneous grey dispersion was observed to have formed. The dispersion was then transferred to a 15 mL centrifuge tube, followed by addition of ethanol (1.0 mL) to precipitate the PtNPs. Subsequently, the dispersion was centrifuged at 1250 RPM for 12 minutes, resulting in a clear/colorless supernatant and a light grey and distributed pellet. Next, to remove excess MPA from the PtNPs, the pellets were washed with ethanol (2.0 mL). Brief sonication resulted in a slightly cloudy grey dispersion, which was then centrifuged at 1250 RPM for 12 minutes to result in a clear/colorless supernatant and a light grey pellet. The supernatant was then discarded. Subsequently, a solution of TMAH•5H2O (0.500 g, 2.76 x 10 -3 moles) in ethanol (10 mL) was prepared (275 mM TMAH•5H2O), and the PtNPs were dispersed in the TMAH stock solution (2.0 mL) via sonication (10 minutes). The samples were then precipitated via centrifugation at 2500 RPM for 12 minutes, resulting in a grey/brown pellet and a clear/colorless supernatant. The supernatants were discarded, and the pellets were dried in vacuum for 10 minutes to remove residual ethanol, yielding ~ 2-4 mg of ligand exchanged Pt nanocubes. Next, DI-H2O (1.0 mL) was added to the pellets, resulting in immediately a clear deep brown dispersion free of any particulates. The dispersion was then passed through a 0.25 microneter syringe filter, and used as an aqueous solution for nano-impact experiments.
Synthesis of AuNPs
Materials. All chemicals were used as purchased unless otherwise specified. Tetrachloroauric acid (99.99 % metal basis) and silver nitrate (99.9995 % metal basis) were purchased from Alfa Aesar. The solutions of the metal salts were prepared freshly before the synthesis. Sodium citrate dihydrate 99.5 % (VWR Chemicals) and L(+)-ascorbic acid ≥ 99 % (ROTH) solution were stored at −25 ºC. The heating steps were carried out using IKA RH basic 2 magnetic stirred/hotplates equipped with custom adaptors for round bottom flask. Centrifugation was performed in 50 mL centrifuge tubes on a 5810 R, Eppendorf centrifuge. Ultra-pure water (Thermo Scientific Barnstead Gen-Pure xCAD Plus, 0.055 µS cm -1 at 25 ºC) was used to prepare all solutions and in each of the reaction steps. All the glassware was cleaned overnight in a base bath (0.1 M potassium hydroxide in isopropanol), in an acid bath (0.1 M hydrochloric acid in water) and rinsed with ultra-pure water before use. Teflon coated stirring bars were cleaned in aqua regia and rinsed with ultrapure water. General Strategy for AuNPs Synthesis. 46 ± 5 nm AuNPs were synthetized by using a modified version of the seed mediated approach optimised by Murphy and coworkers. 21 Traces of silver nitrate were added to favour the formation of spherical nanoparticles instead of nanorods. The 25 ± 4 nm Au seeds used in the procedure were synthetized by following the citrate based method originally proposed by Turkevic. 22 The detailed procedures followed in this work are described below. Preparation of Au seeds. 250 mL of ultra-pure water were added to a 500 mL three neck round bottom flask, equipped with a reflux condenser and a 30 mm Teflon coated stir bar, and heated to 100 ºC. 500 µL of a 50 mM tetrachloroauric acid trihydrate solution (2.50×10 -5 moles) and 1.25 mL of a 60 mM trisodium citrate dihydrate solution (7.50 x 10 -5 moles) were added. A red suspension was formed during the first minutes after reagents addition. The suspension was heated for one additional hour before being allowed to cool down at room temperature and adding 2.5 mL of the 60 mM trisodium citrate dihydrate solution (1.5 x 10 -4 moles). 5 mL of the final suspension was centrifuged at 15000 RCF for 15 minutes, 4 mL of supernatant was removed to obtain 1 mL of the concentrated seeds. Preparation of AuNPs. 30 mL of ultra-pure water, 1 mL of the concentrated Au seeds, 85 µL of a 50 mM tetrachloroauric acid trihydrate solution (4.25 x 10 -6 moles) and 8.76 µL of a 10 mM silver nitrate solution (8.76 x 10 -8 moles) were added to a 100 mL round bottom flask, equipped with an equilibrated dropping funnel and a 20 mm Teflon coated stirr bar, and stirred at room temperature. 86 µL of a 100 mM L(+)-ascorbic acid solution (8.6 x 10 -6 moles) and 20 mL of ultra-pure water were transferred to a dropping funnel and added to the reaction mixture, dropwise (25 minutes for completing the addition). The pale red reaction mixture darkened slowly. Once terminated the reductant addition, the suspension was stirred for one hour. Post synthesis. The AuNPs suspension was centrifuged at 15000 RCF for 15 minutes. Most of the supernatant was removed and 500 µL of a concentrated AuNPs suspension was obtained. Figure S8 depicts TEM images of the Au seeds and AuNPs.
Molecular Dynamics (MD) Simulations
A classical MD simulation of a liquid slab composed by 3481 water molecules between two planar Au(100) surfaces (each electrode made of 5 layers of 162 gold atoms) has been performed at a fixed potential difference of 0 V using the MetalWalls code. 23 2D periodic boundary conditions were employed, with no periodicity on the z-direction (perpendicular to the gold surfaces), and box dimensions along x and y directions of Lx=Ly=36.63 Å. The SPC/E model 24 was chosen for water, while Lennard-Jones parameters introduced by Heinz et al. 25 for Au(100) and Lorentz-Berthelot mixing rules were used to model the interactions between all atoms. Electrostatic interactions were computed using a 2D Ewald summation method, with a cut-off of 12 Å for the short-range part of the Coulomb interactions and a cut-off of 15 Å for the intermolecular ones. The simulation box was equilibrated at constant atmospheric pressure by applying a constant pressure force to the electrodes. The electrodes separation was then fixed to the equilibrium value of 78.6 Å (for which the water density in the middle of box corresponds to the bulk value, for the rest of the equilibration (5 ns, NVT ensemble, = 298K) and for the production (100 ns, NVT, T= 298K). The equation of motions were solved using a timestep of 1 fs during equilibration and 2 fs during production. The free ener ( )gy (shown in Figure 7 (b)) is obtained as the difference between the free energy cost to form a cavity at a distance from the adlayer formed in contact with the metal surface, ∆ ( ), and in the bulk. It is calculated from the MD simulation by monitoring the probability ( (0, )) to find zero water oxygen centers in the probing volume, v: 26 , with being the Boltzmann constant and the temperature ( =298K). A spherical probing volume of 3 Å radius has been adopted. The result for the water/Au(100) system is compared in Figure 7 (b) of the main text to the ones previously obtained by Limmer et al. 26 for Pt(100) and Pt(111) systems, employing a similar computational setup as in the present simulation.
Results and Discussion
As shown in Figure S3, no hydrogen under potential deposition (H-UPD) is realized at the nanoparticles during cyclic voltammetry, although this could be expected for Pt electrodes. This may be attributable to the strong specific adsorption of citrate anions on the PtNPs as has been reported by Attard et al. 27 Therefore, H-UPD cannot interfere with the capacitance measurements. In order to figure out the origin of the capacitive spikes being either charging of the impacting nanoparticle or perturbatio n of the electrode double-layer by the nanoparticle, step potential chronoamperometry was run using different ultramicroelectrodes (carbon, Pt, Au) as the working electrode ( Figure S4). During these experiments changes of the transferred charge were investigated as a function of the applied potential. Spikes from −0.30 V to +0.05 V vs. Ag/AgCl (+0.10 V to +0.45 V vs. RHE) were included in data analysis to avoid any parallel faradaic reaction and to be able to precisely resolve spikes from background noise. Observed background currents are in the typical range for microelectrodes and may originate from oxygen reduction caused by very small amounts of oxygen entering the deoxygenated electrolyte upon NP injection into the measurement cell, although the NP suspension was deoxygenated by Ar purging prior to injection, as well. The current associated to this is at least 100 times smaller than expected for air saturated solution and can, hence, be concluded not to significantly alter the capacitance measurements.
Extracting information from the slope of charge-potential plots
Upon colliding with the working electrode, the nanoparticle changes its potential (E NP,S ) to the applied potential at the working electrode (E). 28 Therefore, its surface charge (q) also changes depending on its potential change (ΔE) and its capacitance (C NP ). As a result, the charge measured during a nano-impact reflects the alteration of the nanoparticle's surface charge (Δq) due to changing its potential.
Δq = C NP • ΔE
If we assume that capacitance of the nanoparticle C NP is constant over the considered potential range, we get: Therefore, a constant slope of charge-potential plots represents a difference quotient, which is a reasonable estimate for the differential capacitance of the impacting nanoparticles in the considered potential range when potential steps are infinitesimally small.
In this work two sizes of commercial citrate capped-PtNPs with nominal diameters of 30 nm (30 ± 3 nm) and 50 nm (46 ± 5 nm) were used. As shown by TEM, these particles are raspberry-like clusters formed from the aggregation of smaller particles with sizes in the range of 3-5 nm.
Calculation of nanoparticles' surface area
TEM images show that 30 nm and 50 nm PtNPs are clusters of smaller, spherical nanoparticles, described previously as "mesoporous" nanoparticles. 29 Measuring the accessible surface of these particles based on the TEM images is not directly possible. Therefore, we estimated the area of the raspberry-like particles as the sum of the area of their constituting small particles. For a close-packing-type arrangement of these spheres in the aggregate, the fractional filling efficiency is 0.74. 30 Thus, the number of small particles in the aggregate nanoparticle would be as follows: small NP × small NP = 0.74 cluster here small NP is the volume of constituting small particles and cluster is the volume of clusters. where is the radius of the sphere. Therefore: For the 30 ± 3 nm clusters the radius of the constituting spheres small NP is about 1.5 nm and for the 46 ± 5 nm clusters small NP is about 2.3 nm. Therefore, each of the raspberry-type Pt nanoparticles constitutes of about 740 spheres.
small NP = 740 The maximum surface area of the aggregate ( cluster ) is the sum of small NPs surface area ( small NP ): small NP = 4π small NP 2 cluster = 740 small NP Therefor their surface area can be estimated as 2.1 × 10 -10 cm 2 and 4.9 × 10 -10 cm 2 , respectively.
The surface area of the used Pt nanocubes with 18 ± 1 nm edge length ( ) is 1.9 × 10 -11 cm 2 calculated as follows: The surface area of the used Au nanoparticles with 46 ± 5 nm diameter ( ) is 6.6 × 10 -11 cm 2 calculated as follows: To elucidate the origin of high specific capacitance measured for nominally 30 nm cluster PtNPs, which can be due to porosity or curvature effects of nanoclusters or divergence from classical Gouy-Chapman-Stern (GCS) model for the EDL, we studied particles of different sizes, morphologies and materials. This was done for nominally 50 nm raspberry-like PtNPs, 18 nm PtNCs and 46 nm AuNPs using a platinum working electrode, related chronoamprograms can be seen in Figure S8. Figure S10. Average density profile of water molecules (blue) and Au atoms (orange) as a function of the direction normal to the Au(100) surface (z). A representative snapshot of the simulation box is also reported. The water density in the middle of the box corresponds to the bulk water density (dashed black line), ensuring that the system has been properly equilibrated.
Dynamic light scattering (DLS) measurements were conducted on 30 nm PtNPs in ultrapure water, 5 mM and 10 mM sodium citrate buffer, to ensure the stability of PtNPs in the electrolyte solution used for nano-impact studies. By comparing the 10 mM and 5 mM sodium citrate buffer curves with ultrapure water curve in Figure S11, it is concluded that during the time span of the nano-impact experiments (maximum 10 minutes), agglomeration of the particles is happening in 10 mM sodium citrate buffer, but in 5 mM solution the particles are stable against agglomeration. Accordingly, a 5 mM solution was used for the single particle capacitance measurements. Figure S11. DLS measurement on 30 nm PtNPs performed in 5 mM and 10 mM sodium citrate buffer in the time span of 10 minutes and also in ultrapure water. | 8,895.6 | 2021-11-18T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
In vitro analysis of dental ceramics: evaluation of the radiopacity and chemical composition by Raman spectroscopy
Objetivo: Este estudo comparou a radiopacidade de diferentes sistemas cerâmicos por meio de radiografias digitais e avaliou a composição química das amostras por espectroscopia Raman. A hipótese testada foi que haveria diferença na radiopacidade entre os materiais testados. Material e Métodos: Os espécimes foram preparados para cada cerâmica testada: FLD VM7 (VITA Zahnfabrik), LD Impressora IPS Empress e.max (IPS Empress), AL Em Ceram Alumina (VITA Zahnfabrik), ALYZ In Ceram Zirconia (VITA Zahnfabrik), YZ Lava All Ceram (3M / ESPE) e MYZ Zirconzahn (Talladium Brasil). Os espécimes foram radiografados e submetidos a leituras de densidade radiográfica utilizando uma ferramenta de histograma. O espectrômetro acoplado a um microscópio petrográfico foi utilizado para medidas de espectroscopia Raman. Análise de variância (ANOVA) e teste post-hoc de Tukey foram usados para comparar a radiopacidade dos diferentes materiais. Resultados: Para todos os materiais testados, a radiopacidade apresentou diferenças estatisticamente significantes, exceto YZ e MYZ. Lava All Ceram e ZirkonZahn apresentaram altos valores de radiopacidade e o VM7 e o IPS Empress e.max Press apresentaram menor radiopacidade do que as estruturas dentais humanas. Conclusão: Foi possível concluir que a radiopacidade está intimamente ligada à composição química da cerâmica. ABsTRACT
INTRoDuCTIoN
D ental ceramics are the materials of choice for various esthetic restorations due to their characteristics, such as high compressive strength and abrasion resistance, chemical stability, favorable esthetic features, translucency, biocompatibility, fluorescence and thermal expansion coefficient similar to that of the dental structure [1,2].
One of the most desirable characteristics of any restorative material is a radiopacity that allows it to be distinguished from dental structures, helps with detection of secondary carious lesions, marginal defects, restoration contours, adaptation of restorations to cavity walls, contact points between adjacent teeth, cement excess, and interfacial gaps [3,4].
Clinically, adaptations to cervical and proximal margins are difficult to evaluate, especially when finishing line under gingival level is performed.In this case, radiopacity may help in the radiographic follow-up evaluation [5,6].
In order to improve radiographic images, digital imaging has been proven to be an easy and fast resource [7].Features such as immediate image capture, patients being exposed to low levels of radiation, easy manipulation, low cost, acquisition of accurate radiodensity evaluation, and no need for fill processing, as in the case of traditional images, are highlighted [8].Another advantage is that in the evaluation of digital images, the radiographic density is easily observable, since the software can determine image pixels and have grayscale values [7,8].
The radiopacity of a material is primarily defined by the chemical elements present in its composition, with zinc, strontium, zirconium, barium, ytterbium, and lanthanum being radiopacifying elements with a high atomic number, present in the constitution of various dental materials, such as contemporary ceramic systems [3].Raman spectroscopy allows the analysis and characterization of the vibration spectra, not only of the minerals of which the sample is composed.In this technique, more complex systems, such as dental materials, may also be evaluated by analysis of light diffusion caused by monochromatic laser excitation.They show the different chemical and behavioral patterns of each analyzed ceramic system.This spectroscopy presents several advantages, such as simplicity of sample preparation, facility of analyzing bands, and linear response about mineral and chemical element concentrations [9].
Information on the radiopacity of different ceramics used in restorative dentistry is limited.Consequently, the aim of this study was to compare the radiopacity of different ceramic systems, aluminum step wedge and molar slice (enamel and dentin) by digital radiography and to evaluate the chemical composition of samples using Raman spectroscopy.The study tested the hypothesis that there was a difference in radiopacity between the different tested materials.
mATeRIAls AND meThoDs
This study was approved by Human Research Ethics Committee of the Federal University of Juiz de Fora (protocol number: 177/2010).
The specimens of VM7 was started by making a refractory model, duplicated from the acrylic resin pattern model.The liquid was mixed to the powder, forming a paste.This was deposited on the surface of the refractory model until it reached the desired thickness.The whole was taken to the oven for burning.
To produce the specimens of In Ceram Alumina and In Ceram Zirconia, it was started by making a refractory model, duplicated from the acrylic resin pattern model.This material is composed of an aqueous suspension of aluminum oxide and/or zirconia particles and a stabilizing agent.A modelable mass was deposited on the surface of this refractory model until it reached the desired thickness.The assembly was baked for initial firing, resulting in a solid phase structure of slightly porous metal oxide particles.On this porous infrastructure was applied the lanthanum glass, which, carried to the second firing, impregnated in the open pores during the first firing.
The ceramic specimens were placed on a refractory base in a Vacumat oven (VITA Zahnfabrik) following firing cycles as recommended by the respective manufacturers.
The specimens were finished with a double face diamond disc (KG Sorensen) and a rubber disc with Supermax diamond paste (Edenta AG Dental, Haupstrasse, Switzerland) [11] IPS Empress e.max Press specimens were fabricated using a metal matrix measuring 1 x 3 x 3 mm.The metal matrix (previously lubricated with vaseline petroleum jelly) was placed on a 10 mm-thick glass plate, also lubricated, in order to facilitate removal of the wax models afterwards.The GEO-Classic wax (RenfertGmbH, Hilzingen, Germany) was liquified and poured onto the glass plate until the entire steel matrix cavity surface was covered.After the wax hardened, a straight sharp instrument was used to flatten it and remove excess wax.In the next step, a refractory impression was made by inverting the wax pattern in refractory material.To do this, a wax pattern was connected parallel to a sprue.This connection was made at the edge of the pattern to enable the glass-ceramic material to penetrate into the pattern.The refractory cylinder was put into an electric ring oven EDG 3000 (EDG, São Paulo, Brazil) pre-heated to 700 °C in accordance with the manufacturer's instructions.The next step was the injection process.A glass ceramic ingot of IPS Empress e.max Press was introduced into the central canal of the refractory mold, and then an aluminum cursor (also pre-heated) was inserted into the refractory mold.After this, the refractory mold was transferred to the hot injection chamber.At the end of this process, the refractory cylinder was removed from the oven and left on the laboratory bench to cool.When it was cold, the body-sprue was divested.The specimens were finished with a double face diamond disc (KG Sorensen) and a rubber disc with Supermax diamond paste (Edenta AG Dental, Haupstrasse, Switzerland) [11].
To make the specimens of Lava All Ceram and Zirkonzahn, pre-sintered blocks of zirconia were used, which were sliced with a diamond disc (KG Sorensen).The blocks underwent an additional sintering process in the oven recommended by the manufacturers at 1500 oC for 4h [12].Shrinkage occurred in these blocks, after which they measured 1 x 3 x 3 mm.The specimens were finished with a double face diamond disc (KG Sorensen) and a rubber disc with Supermax diamond paste (Edenta AG Dental, Haupstrasse, Switzerland) [11] The final specimen thickness was measured with a digital caliper (Digimatic Caliper, Mitutoyo, Aurora, USA) confirming the final thickness of 1 mm.
The radiopacity of the ceramics was compared with that of dental structures (dentin and enamel).For this, an recently extracted inferior first molar of a 25-year-old male was sliced using Labcut 1010 (Excet Corp, Enfield, USA) with a diamond disc.Longitudinal slices 1 mm thick in the more central area of tooth were used.
Radiopacity analysis
To take the radiographs, a periapical X-ray appliance Gendex Expert DC® (Gendex, Des Plaines, USA) was used, operating at 7 mA, 65 kVp, and exposure time of 0.1 s.The objectsensor distance was kept the same, with the use of a standardized device that provided incidence of the radiation beam perpendicular to the plane in which the sensor and radiographed objects were placed.Radiographic images were obtained using a direct digital radiography apparatus Visualix eHD (Gendex).The following items were put on the sensor: a molar slice, an aluminum step wedge ranging from 1 mm to 11 mm in thickness in steps of 1 mm each, and one specimen of each tested ceramic.Three images were obtained of each radiographed set.They were obtained at a resolution of 1200 dpi, in TIFF format.No changes in brightness and/or contrast were made.
The radiographic density of the digital images was evaluated using the histogram tool of the Adobe Photoshop® 8.0 software (Adobe, New York, USA).With this software, mean gray values of all steps of the aluminum scale, of the studied specimens, and of the enameldentin of the sliced tooth were obtained.The radiopacity of the tested ceramic as well as dental structures were expressed in aluminumequivalent millimeters (mm Al), allowing comparison among them.The comparison of the radiopacity of the different materials was done by analysis of variance (ANOVA) and a Tukey post-hoc test, with a level of significance of 5%, using BioStat software (Version 5.0, AnalystSoft, Vancouver, Canada).
Raman spectroscopy analysis
The Raman spectra were obtained in a Horiba Jobin-Yvon LabRam HR spectrometer, coupled with a full petrographic microscope using 10x, 50x, or 100x magnification objectives, and a Peltier-cooled (-70°C) CCD detector.A 10 mW HeNe laser with 632.8 nm wave-length was used, and neutral density filters were selected to adjust the laser power in order to avoid damage and/or transformation of the samples.Laser power over the samples was used at less than 1 mW, to avoid any thermal damage, and each spectrum was obtained at least twice to guarantee wavenumber precision and intensity reproducibility.
ResulTs AND DIsCussIoN
In order to compare the radiopacity of ceramic systems with that of human dental structure, the radiopacity of the tested materials, enamel, and dentin were presented in aluminum-equivalent thickness (mm Al).The radiopacity found for 1 mm of aluminum corresponded to the radiopacity of dentin in the same thickness, and for 2 mm corresponded to the radiopacity the enamel.The radiopacity of all the tested materials presented statistically significant differences (p < 0.01) except between YZ (Lava) and MYZ (Zirkonzahn) (Table 2).Ceramic systems should have a radiopacity similar to or higher than the aluminum-equivalent thickness in order to enable detection of marginal infiltrations, and cementation failures [3,9].Aluminum is a comparison reference material because it has a radiopacity similar to that of dentin [4], corroborating the results found in this study (Figure 1).The International Standards Organization (ISO 4049) established that the radiopacity of restorative materials should be equal to or greater than that of 1,100 aluminum alloy with the same thickness [13].It is undesirable for the radiopacity of restorative materials to be lower than that of the replaced hard dental tissues.Therefore, it is important that dental ceramics, which are used to replace enamel tissue, are more radiopaque than human dental enamel [1,7].Based on the aluminum scale, an aluminum equivalent of 2 mm corresponds to enamel radiographic density, and 1 mm to dentin radiographic density.The feldspathic ceramic VM7 and IPS e-max Press presented lower density than dentin.The In Ceram Allumina showed the same radiopacity than enamel and all other tested ceramics presented higher density than enamel.
The results support the hypothesis that the radiopacity measurements of ceramic materials differ from one another.Ceramic material radiopacity is directly linked to its chemical composition.This finding was noted in this study, as there was statistical difference in radiopacity between ceramics with different chemical compositions.Only YZ (Lava All Ceram) and MYZ (Zirkonzhan) presented no statistical difference between them, confirmed by Raman spectra vibration bands that did show similar intensity and wave number values.The Raman spectra are represented in Figure 2.They show the different chemical and behavioral patterns of each analyzed ceramic system.This fact confirms that the chemical composition of the two ceramics is very similar.The superior radiopacity of Y-TZP ceramics is probably a result of the high atomic number and molecular weight of yttrium (Y, atomic number 39), zirconium (Zr, atomic number 40), and hafnium (Hf, atomic number 72) [14].Also in relation to Raman spectra, some vibrational modes characteristic of each system were found.For example, it was possible to observe a band at 1100 cm -1 in FLD and LD, which refers to the symmetrical stretching mode for the Si-O bond, while a band at 580 cm -1 corresponds to the symmetrical stretching mode for of Si-O group coupled to the deformation mode of the Si-O-Si structure 15 .For samples AL and ALYZ, the appearance of vibrational bands characteristic of ceramic materials was found to be inhibited by the emission of fluorescence resulting from the high concentration of Al 2 O 3 , aspreviously described by GOUADEC, et al. [16].Bands at 148 cm -1 and 266 cm -1 were found in groups YZ and MYZ, which are vibrations characteristic of the tetragonal phase of ZrO 2 [17].
The International Standards Organization (ISO 6872) [17] recommends that a spectroscopy test should be used to study the screening of specimen adulteration.This procedure was performed by Raman spectroscopy, which did not detect the presence of any impurity in the chemical composition of each tested ceramic.Furthermore, this test is an excellent tool for the characterization of the various chemical systems present in the composition of each sample, in addition to being a nondestructive analytical technique.
Although manufacturers do not show the quantity of each radiopacifying agent, it was possible to observe statistically significant differences in radiopacity values between materials and their relationships with the atomic number of components.If the atomic number is high, the material is more radiopaque [3,18].Corroborating the findings of OZKURT, et al. [18] and PEKKAN, et al. [19], this study showed that zirconia has the highest radiopacity level.These higher levels are useful to monitor marginal adaptation by radiographic examination [6,20].But these materials, when used in the clinical practice, may mask cement dissolutions and caries, which could occur under the crown [20].
The difference found in the radiopacity values is of clinical significance in radiographic evaluation.Zirconia-based materials are more radiopaque and are easier to detect in a radiographic examination.On the other hand, less radiopaque materials, such as silica-based vitreous ceramics, may be mistaken for dental structures, carious lesions, restoration failures and marginal defects [3].The maximum radiopacity value has not yet been established.However, extremely radiopaque materials may be difficult to identify at the tooth/restoration interface.Moderate radiopacity may be more favorable and could facilitate the detection of tooth/restoration interface [3,4,21].
The adequate radiopacity must be accepted as a factor when evaluating the clinical success of ceramic restorations.Furthermore, studies are needed to compare the radiopacity of the adhesive cements used with the different ceramics systems.
CoNClusIoN
Radiopacity is closely linked to the chemical composition of each ceramic system.The groups containing zirconia demonstrated more radiopacity.The feldspathic ceramic VM7 and the vitroceramic of dissilicate of litium IPS Empress e.max Press presented lower radiopacity than that of human dental structures.
Table 1 -
Manufacturer name and chemical composition of ceramic system according to groups
Table 2 -
Mean of radiopacity and thickness equivalent in aluminum for tested ceramic *Followed values for distinct letters present statistical differences | 3,483.2 | 2018-04-19T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Theoretical study of solar light reflectance from vertical snow surfaces
The influence of horizontal and vertical inhomogeneity of snow surfaces on solar light reflectance is studied using the radiative transfer theory (RTT). We compared 1-D RTT and 2-D RTT and found that large errors are produced if the 1-D RTT is used for the calculation of the snow reflection function (and, therefore, also in the retrievals of the snow grain radii) in 2-D measurement geometries. Such 2-D geometries are common in the procedures for the determination of the effective snow grain radii using near-infrared photography and spectroscopy of vertical snow walls. In particular, we have considered three cases for the numerical calculations: (1) the case with no black film; (2) the case with a black film at the pit’s bottom; (3) the case with a black film at the pit’s bottom and also at one of the vertical snow walls.
Introduction
Optical measurements are commonly used to derive snow microphysical parameters from plane-parallel snow layers (Kokhanovsky et al., 2011).In particular, snow grain size is obtained from near-infrared (NIR) measurements (in the spectral range 865-1240 nm) of intensity of solar light reflected from flat snow layers.The corresponding retrieval algorithms are based upon the physical phenomenon of the enhancement of light absorption by larger ice grains (and as a consequence, a smaller light reflectance for snow layers with larger grains).The main problem with such a method is that only upper snow layers can be observed.The information on the snow microphysical parameters and snow pollution in deeper layers cannot be retrieved because of high absorption of NIR radiation by snow grains.As a matter of fact, NIR radiation does not penetrate deep into a snowpack and, therefore, does not contain information on the properties of snow from the depths above 1-5cm depending on the size of particles and the wavelength (Kokhanovsky and Rozanov, 2012).To avoid this problem, recently measurements along vertical snow walls have become popular (see, e.g.Fig. 1 in Matzl and Schneebeli, 2006;and Fig. 2 in Painter et al., 2007).Also measurements along the length of cylindrical holes in snow are used (Barker and Korolev, 2010;Arnaud et al., 2011).
In most of cases (see e.g.Kokhanovsky et al., 2011) the 1-D transfer theory valid for plane-parallel slabs is used for the interpretation of optical measurements and determination of snow grain sizes.Although there could be some influences of 3-D effects (e.g.shadowing from the snow walls, enhancement of brightness, etc.) on corresponding measurements.For the measurements involving 2-D and 3-D geometries (e.g.along snow walls), the approach based on the correlation of the reflectance and the snow grain size or the snow specific surface area (see e.g.Matzl and Schneebeli, 2006) is used.This is because more quantitative approaches based on the solution of radiative transfer equation in 2-D and 3-D geometries have not been developed in applications relevant to optics of vertical snow walls.
The aim of this work is twofold.Firstly, we develop software, which can be used for studies of 3-D effects in snow and, secondly, we study the corresponding effects using numerical simulations.Only optical snow parameters (extinction coefficient, single scattering albedo, and phase function) are used in the analysis.This makes it possible to perform a study with greater generality.The link to the snow grain size and concentration of pollutants is obvious (Kokhanovsky et al., 2011).The paper is structured as follows.In the next section we introduce the radiative transfer equation and boundary conditions relevant to the studies of light propagation in snow.The numerical algorithm developed for the solution of the corresponding integro-differential radiative transfer equation in the 2-D geometry is described in Sect.3. The results of numerical experiments are reported in Sect. 4. The present study can be used to design and interpret real-world experiments relying on the spectrometry of vertical snow walls.
Radiative transfer equation and boundary conditions
It is assumed that the surface of the snow is flat (no sastrugi, no microstructures on the snow surface).Let us assume that there is a pit with the width D, the length L and the depth H in the snowpack with the width 2X and the height H , see Fig. 1.The pit is covered by a sheer film to convert the direct solar light to diffuse one.Reflected radiation is registered along the line AB on the vertical wall of the pit and along the line CS at the top of the pit.
To find radiation intensity in this region, we introduce the coordinate system with the origin at the centre point O at the pit's bottom, see Fig. 1.We assume that the length L (10-15 m) is larger than the typical width D (about 1-3 m).Therefore, the radiation intensity near the central plane y = 0 of the pit depends only on the spatial coordinates x and z.The dependence on the third coordinate can be neglected.So the problem can be considered in the 2-D framework in the plane y =0.This reduces calculations as compared to the 3-D modelling.Moreover, the region is symmetrical with respect to the plane x = 0, see Fig. 1, hence only the half-region The radiative transfer equation (RTE) for the monochromatic radiation intensity I takes the following form in the case under consideration: The function I (x, z, θ, ϕ) depends on the spatial coordinates x, z and angles θ, φ, defining direction of the radiation transfer, see Fig. 2. The first term in Eq. ( 1) gives the change of intensity in the direction and the second term represents the extinction of radiation by the medium.The integral describes the re-radiation of scattered light, here incident light has the direction (θ , ϕ ) and re-radiated light has the direction (θ, ϕ).
The pit [0, R] × [0, H ], R = D/2, is filled by air, whereas the medium out of the pit is snow.Then it follows that for the extinction coefficient: for the single scattering albedo: for the scattering phase function: Note, the single scattering albedo ω snow 0 (z) is considered as a piecewise function of depth, which describes a layered snowpack; the special case ω snow 0 (z) = const corresponds to a homogeneous snow layer.
One needs to define the boundary conditions for Eq. ( 1).The intensity of the radiation entering the region is defined on each boundary at the corresponding angular intervals, see ( , , , , , ) with albedo A(x).Therefore, it follows that: Such a surface reflects incident radiation uniformly into all possible directions.If the bottom is covered by a black film absorbing all incident radiation, the albedo A(x) is made to be equal to zero and the bottom boundary is called "black".
It is assumed that the radiation does not enter the region via the right boundary x = X: Reflecting condition with albedo A s is defined on the left boundary x = 0: = A s I (0, z, θ, π − ϕ).
When A s = 1 Eq.( 7) gives the condition of symmetry of the whole region [−X, X] × [0, H ] with respect to the plane x = 0. Actually, the solution of the RTE (Eq. 1) in the whole region [−X, X] × [0, H ] is symmetrical: I (x, z, θ, ϕ) = I (−x, z, θ, π −ϕ).Therefore, one can consider the half-region [0, X] × [0, H ] under the boundary condition (Eq.7).When A s = 0, the boundary x = 0 is black.It means that this boundary is covered by a black film absorbing all incident radiation.
The diffuse source on the top boundary is imposed.Therefore, it follows that: where S 0 is the incident light irradiance.
We will study three problems depending on albedo A s at the left snow wall and the albedo A(x) at the bottom.First, under A s = 1, A(x) = A snow > 0 one has the pit with no black film.Second, under A s = 1, A(x) = 0 as x ≤ R, A snow else, the black film lies only at the pit's bottom.Third, under A s = 0 and A(x) = 0 as x ≤ R, A snow else, the black film covers both the bottom and the vertical plane x = 0.
Relative intensities of reflected radiation at the snow surface in the directions * and * * : The function Î (x) can be used to retrieve the optical properties of the upper layer of the snow (up to 5cm in depth).This function can be approximated by the piece-constant function: where the values I snow (H, * ) and I air (H, * ) are obtained via the two 1-D radiative transfer models as shown here (Chandrasekhar, 1950): (11) The boundary conditions are ( see the definition of angles in Fig. 3) Equation ( 10) is used to find the solution over areas, where snow is present.Otherwise, Eq. ( 11 ( , , , , , ) Eq. ( 10) is solved along the line CO passing through the centre of the pit, whereas the problem formulated in Eq. ( 11) is defined along the line TV passing through the centre of the snowpack, see Fig. 1.
The function Ĩ (z) is found by measuring light reflectance from snow walls and can be used to retrieve the optical and microphysical properties of the snow layers at any depth.This is not possible if, for example, the measurements of the light reflectance from the snow top (Kokhanovsky et al., 2011) are analysed.This is due to weak dependence of the snow reflectance in the UV and, also in the visible, on the size of particles and small penetration depths of IR radiation (sensitive to the snow microstructure) into snowpack.The retrieval algorithms are based upon the assumption that the registered radiation intensity is a constant function of the spatial coordinates in each homogeneous sub-region (layer) and this constant value does not depend on neighboring sub-regions (layers).It is actually believed that each sub-region (layer) of the snowpack can be considered separately from others.The constant value Ĩ (z) in each homogeneous layer is often believed equal to the value I snow (H, * ) for the homogeneous snowpack under the same optical properties.Such an approach is "the horizontal 1-D transfer model".The difference of the 2-D solution and the 1D solutions is termed as "2-D effects".We will check the accuracy of the 1D radiative transfer models using exact solutions of the 2-D problem (see Eqs. 1-8).We do not use the term "3-D effects" because the 2-D problem is under consideration.
Numerical algorithm
Below follows an outline of the numerical method used by us for the solution of the above radiative transfer problem.We introduce a quadrature with nodes m {θ m , φ m }, see Fig. 4a, and weights m , m = 1, . . .M. For this purpose we use the mesh over angle θ : and the mesh over angle ϕ for each interval θ −1/2 , θ +1 / /2 of the mesh over θ: Points θ +1/2 and ϕ n+1/2, have been chosen so that squares of all cells ϕ n−1/2, , ϕ n+1/2, × θ −1/2 , θ +1/2 are identical.We define the node {ϕ n , θ } in each cell, see Fig. 4b, and renumber all nodes with a single index m.Further we define the weight m as a square of the corresponding cell m .Then we approximate the continuous function I (x, z, θ, φ) with functions I m (x, z) = I (x, z, θ m , φ m ) and replace the scattering integral in Eq. ( 1) with the quadrature sum: where The coefficients ρ nm (x, z) correspond to the light scattering event from the direction n {θ n , φ n } to the direction m {θ m , φ m }.Therefore, they are the integrals of a complicated forward-peaked phase function ρ(x, z, θ m , ϕ m , θ , ϕ ) over the cell n under fixed values of the angles {θ m , ϕ m }.To find the integral of a forward-peaked phase function one introduces the additional quadrature in the cell n by the nodes j, n {θ j,n , ϕ j, n } and the weights j, n , j = 1, . . ., L n , here L n is the number of the additional nodes.This quadrature is refined in the subregions of the cell n , where the integrand function has a great gradient, see Fig. 5, where the additional nodes j, n {θ j, n , ϕ j, n } are designated by the black circles.The following equality is always kept: Then the coefficient ρ nm (x, z) can be found by a quadrature sum: So Eq. ( 1) for the functions I m (x, z) takes the form (with account for Eq.10): where the derivative ∂I ∂ m is written as: The values are projections of the unit vector m onto coordinates axes x and z, see Fig. 4a.To solve the system of differential equations (Eq.16) for the functions I m (x, z), we introduce a regular mesh over spatial variables x, z: is considered as a homogeneous one.Integrating Eq. ( 16) over this cell, one obtains the exact algebraic relation: Here the values σ k,j , ω 0, k, j , ρ nm, k, j correspond to the cell The values I m, k, j , I m, k±1/2, j , I m, k, j ±1/2 are averaged light intensities defined by the following integrals: The boundary conditions for Eq. ( 19) follow from Eqs. (5-8): To close Eqs.( 19) and (23-25) one needs additional relations.They are taken as where the function s(ξ ) = sgn(ξ ) = 1 as ξ > 0, −1 as ξ < 0 and the parameters v m, k, j , u m, k, j are defined on the interval [0, 1].Therefore, the piece-linear approximation to the solution is sought in the spatial cell, see Fig. 6.Here the parameters v m, k, j , u m, k, j define the variation of the solution in the cell (Carlson, 1972).
Let the solution in the nodes n , n = 1, . . ., m-1 and n = m + 1, . . .M be known.Then the solution of the system (Eqs.21 and 25-29) for the node m can be found by the so-called sweep procedure in a following way.
If the angles θ m , ϕ m are from other intervals, then indices are to be sorted in yet another order.Generally, the index k increases as s(ξ m ) > 0 and decreases as s(ξ m ) < 0, the index j increases as s(γ m ) > 0 and decreases as s(γ m ) < 0.
To solve the system (Eqs.21 and 25-29) for all nodes m , the iterative Seidel's method (Saad, 2000) is used.In this method the already obtained values I n, k, j are used to calculate the right side of Eq. ( 21) and further the values I m, k, j at other nodes.x The previous version of the presented algorithm was outlined by Sokoletsky et al. (2009), where it was applied to the calculation of solar light reflectance by natural sea waters.There the scattering phase functions were defined by their values in nodes of a very refined mesh over the interval [−1, 1] and approximated by piecewise linear functions.
Here the scattering phase functions are given by their Legendre coefficients.Furthermore, we apply the adaptive method of choosing additional meshes j, m to calculation of integrals (Eq.15).
Results of numerical experiments
All computations were done by the code RADUGA-6 (Nikolaeva et al., 2005;Sokoletsky et al., 2009) on the hybrid cluster k100 (http://www.kiam.ru/MVS/resourses/k100.html) assuming the following parameters: 6. snow albedo A snow = 0.8; 7. the diffuse source, when both a snowpack and a pit are covered by a sheer film.
We have selected a typical snow phase function as suggested by Kokhanovsky et al. (2011).The phase function does not depend strongly either on the wavelength (in the optical range) or on the size of ice grains.The extinction coefficient 1 mm −1 and the values of snow grain albedos in the range 0.98-1.0are typical for snow.
Both homogeneous and heterogeneous snowpack were under consideration.A homogeneous snowpack is defined by the constant single scattering albedo ω snow 0 .A heterogeneous snowpack contains a polluted layer, see Fig. 8.It was assumed that: Here parameter t is thickness of a polluted layer, ω0 is single scattering albedo of clean snow.
We define three experimental conditions (see Fig. 1): 1. no black film: 2. a black film is only on the pit's bottom EB: 3. a black film is on the pit's bottom EB and left boundary If the snow surface is covered by the black film, radiation is not reflected at this surface and does not influence the registered radiation intensity on the line AB.So the registered radiation intensity depends only on properties of the snow on the vertical wall AB.If there is no black film, one registers the radiation reflected by both the bottom and two walls of the pit.Then the registered radiation depends on optical properties of the snow at all walls of the pit.
The following parameters are used in the numerical calculations.
1. N = 800 is the number of the Legendre polynomials to represent both phase functions; 2. M = 360 is the number of nodes of the quadrature; one needs a dense quadrature to approximate the strongly anisotropic solution I (x, z, θ, φ); 3. K = 468, J = 1610 are numbers of cells of the spatial meshes.The mesh over z is refined in the vicinity of the top boundary z = H , where the intensity I (x, z, θ, φ) has a large gradient.The mesh over x is refined near snowpack wall AB, see Fig. 2, for the same reason.
Let us consider relative radiation intensity Ĩ (z) given by Eq. ( 9) on the vertical wall AB of a snowpack, see Fig. 2, in the direction * * , which is perpendicular to the wall AB and at the top boundary CS of the system in the zenith direction * .
The relative intensity at the horizontal line CS in the zenith direction * for homogeneous snowpack is given in Fig. 9.One can see that the intensity of reflected radiation has extrema near the air/snow boundary; similar effects are observed in clouds illuminated by direct solar light (Nikolaeva et al., 2005).In the problem under study a maximum of radiation intensity in the snow near the air/snow boundary is formed by radiation penetrating in the snowpack and only weakly absorbed near this boundary; the maximum enhances as snow absorption enhances.In a similar way, a minimum of radiation intensity arises outside of the snow near the air/snow boundary due to absorption of radiation by the snow.Thereby the extrema in the radiation intensity in Fig. 9 arise due to the neighbourhood of two different media (snow and air).Note the 1-D vertical transfer model leads to the This deviation shows whether it is possible to consider the intensity Ĩ (z) as a constant function far from the upper and lower boundaries of the pit.In other words it shows whether the 1-D model is applicable to process measurement data along vertical walls of snowpack.
It follows from Fig. 11 that the 1-D transfer model is not applicable for experiments under the black film because in this case the deviation r(z) is less than 10 % only near the central point z = H /2. Actually, the size of the sub-region, where the deviation r(z) is less than 10 %, is equal to 7 cm -if both bottom and opposite sides are covered by the black film -and about 17 cm -if only the bottom is covered the black film.The size of this sub-region for the case without black film is about 55 cm.
The results for the pit without black film are shown in Figs.12-15.It should be stressed that in this case the radiation registered on the wall AB is reflected by the bottom and the opposite walls of the pit and depends on the optical properties of the whole surface of the pit.
Let us consider homogeneous snowpack.Here the deviation r(z) decreases as absorption decreases (see Fig. 12) and the width D of the pit increases (see Fig. 14).The function r(z) only weakly depends on the depth H (see Fig. 13).
At small values of the probability of photon absorption β = 1 − ω 0 and in broad pit, the deviation r(z) is less than the threshold value 10 % far from bottom and upper edges of the pit; here the 1-D model can be used.At the same time this deviation is large near the bottom and upper edges (boundary effects), where the 1-D model is not applicable.
The influence of heterogeneity of a snowpack on relative radiation intensity is presented in Figs.12-15.The thin polluted layer in the centre of the pure snowpack, see Fig. 8, leads to a minimum in reflected radiation intensity in the vicinity of the layer (the shadow of the minimum is spread over the whole wall, if absorption in snow is weak enough).Let us define the width of the spread of the optical influence of the polluted layer as the size of the sub-region, where the relative intensity of the polluted layer differs more than by threshold value b% from the relative intensity of the homo- geneous snowpack: Here the point z = H /2 is the central point of the whole snowpack and the polluted layer and the function p(z) is defined by the relation: The function Ĩ0 (z) is the relative intensity in the homogeneous snowpack, Ĩ (z) is the relative intensity in the snowpack with the polluted layer.The widths of the spread of polluted layer optical influences t * are presented in Table 1 for the different single scattering albedo ω snow 0 outside of the polluted layer, the width of the polluted layer t and threshold value b.One can see that t * is always larger than the real width of the polluted layer t, especially when the outer snow is clean.The error in width of the polluted layer (when defined via the value t * ) can reach 200-400 % (see Table 1), especially if the polluted layer is thin.At the same time the value of the minimum of the relative intensity in the polluted layer depends on the width of this layer and the single scattering albedo outside of this layer, see Fig. 15 and Table 1.Note that the minimum decreases as the width of the layer decreases and the albedo of surrounding medium increases.
Conclusions
We have presented the 2-D radiative transfer problem related to the reflection of solar light by a rectangular wide pit in a thick snow layer.Simulation (by the parallel code RADUGA-6) is based upon the mesh technique of the discrete ordinate method when peaked scattering phase functions of snow are exactly taken into account.A diffuse radiation source, produced by a sheer film covering a snowpack, is assumed.Such source models are close to those for real ground measurements.
We have checked whether the 1-D model, when the reflected radiation intensity is considered as constant function of the spatial coordinate in each homogeneous subregion of a snowpack, is applicable to describe the real measurements.We found that the 2-D effects (brightening and shadowing) on the top boundary of a snowpack near the vertical wall of the pit are significant in spite of a diffuse radiation source.
The 2-D effects are significant on the vertical wall of the pit in a homogeneous snowpack, especially near the upper boundary.At the same time, 2-D effects are less evident at large values of the pit's width far from its bottom and top boundaries, when snow is almost clean.
Additional 2-D effects arise in layered snowpack.Although minimum in intensity on a vertical wall of a pit is localized near a polluted layer, intensity out of minimum can be influenced by this polluted layer.
One can conclude that 1-D models can lead to large errors in the simulation of the measured radiation intensity on vertical walls of snow pits.The retrieval algorithms should, therefore, be based upon the 2-D and 3-D radiative transfer models.
, H, * /S 0 (9) are of interest.Here the function Î (x) defines the radiation intensity exiting from the top boundary in the zenith direction * .The function Ĩ (z) corresponds to radiation intensity reflected by the vertical wall AB of the snowpack (see Fig. 2) in the direction * * , perpendicular to the wall AB.
Fig. 6 .
Fig. 6.The piece-linear approximation to the RTE solution over the spatial variable x under ξ m > 0.
1. the region height H = 0.5 m, 0.6 m, 0.7 m, the region semi-width X = 5 m, seeFig.air (aerosol) scattering phase function ρ air is obtained via Mie theory, the snow phase function ρ snow (see Eq. 4) is found by geometrical optics theory as described byKokhanovsky et al. (2011), see Fig.7.
Fig. 7 .
Fig. 7.The scattering phase functions.The molecular scattering is ignored and the air phase function is assumed to be equal to that of atmospheric aerosol.
Fig. 8 .
Fig. 8.The 3-D geometry of the region with the central polluted layer.
Fig. 9 .
Fig.9 air snow Fig. 9. Relative intensity Î (x) in the zenith direction * at the top boundary CS of the homogeneous snowpack.Width D = 1 m, depth H = 0.7 m and no black film, for the different single scattering albedos ω snow 0 .
Fig. 11 .
Fig. 11.Relative intensity Ĩ (z) in the direction * * (a) and the deviation r(z) (b) at the vertical wall AB of the homogeneous snowpack.The snow single scattering albedo ω snow 0 = 0.98, the width D = 1 m, and depth H = 0.7 m , with and without black film. Fig.12
Fig. 12 .
Fig. 12. Relative intensity Ĩ (z) in the direction * * (a) and the deviation r(z) (b) on the vertical wall AB of the heterogeneous snowpack.Single scattering albedo ω0 = 1 out of the inserted layer, width D = 1 m, depth H = 0.7 m, and no black film, for the different values of the thickness t of the inserted polluted layer.30
Fig. 13 .
Fig. 13.Relative intensity Ĩ (z) in the direction * * (a) and the deviation r(z) (b) on the vertical wall AB of the homogeneous snowpack.The snow single scattering albedo ω snow 0 = 0.98, the width D = 1 m, and no black film, for different depths H . Fig.13
Fig. 14 .
Fig. 14.Relative intensity Ĩ (z) in the direction * * (a) and the deviation r(z); (b) on the vertical wall AB of the homogeneous snowpack.The snow single scattering albedo ω snow 0 = 0.98, the depth H = 0.7 m for, no black film, different widths D. Fig.15
Fig. 15 .
Fig. 15.Relative intensity Ĩ (z) in the direction * * on the vertical wall AB of the heterogeneous snowpack.Width D = 1 m, depth H = 0.7 m, thickness of the inserted polluted layer t = 5 cm, and no black film, for different values of the single scattering albedo ωsnow 0
Table 1 .
The value of the minimum of the relative intensity Ĩmin and the width of the optical influence spread of the polluted layer t * (cm) for different values of the single scattering albedo ω snow 0 outside of the polluted layer at different widths of the polluted layer t. | 6,434.4 | 2012-10-01T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
The discovery of two new benchmark brown dwarfs with precise dynamical masses at the stellar-substellar boundary
Aims. Measuring dynamical masses of substellar companions is a powerful tool for testing models of mass-luminosity-age relations as well as for determining observational features that constrain the boundary between stellar and substellar companions. In order to dynamically constrain the mass of such companions, we use multiple exoplanet measurement techniques to remove degeneracies in the orbital fits of these objects and place tight constraints on their model-independent masses. Methods. We combined long-period radial velocity data from the CORALIE survey with relative astrometry from direct imaging with VLT/SPHERE as well as with astrometric accelerations from H IPPARCOS - Gaia eDR3 to perform a combined orbital fit and measure precise dynamical masses of two newly discovered benchmark brown dwarfs. Results. We report the discovery of HD 112863 B and HD 206505 B, which are two new benchmark likely brown dwarfs that sit at the substellar-stellar boundary, with precise dynamical masses. We performed an orbital fit that yielded the dynamical masses of 77 . 1 + 2 . 9 − 2 . 8 M Jup and 79 . 8 ± 1 . 8 M Jup for HD 112863 B and HD 206505 B, respectively. We determined the orbital period of HD 112863 B to be 21 . 59 ± 0 . 05 yr and the orbital period of HD 206505 B to be 50 . 9 + 1 . 7 − 1 . 5 yr. From the H and K band photometry from IRDIS data taken with VLT/SPHERE, we estimate the spectral types of both HD 112863 B and HD 206505 B to be early-mid L-types.
Introduction
Companions that have precise model-independent masses and determined ages, known as benchmark brown dwarfs, are fundamental in testing substellar evolutionary models.Such objects are key in placing constraints on the mass-luminosity-age relations of brown dwarfs that are otherwise plagued by a lack of observational constraints (Bildsten et al. 1997;Marley et al. 2007;Marleau & Cumming 2014).
In order to measure precise dynamical masses of benchmark brown dwarfs, a combination of radial velocity, relative astrometry, and absolute astrometry data can be used to constrain the orbital parameters of such objects and therefore reveal their model-independent masses.Radial velocity measurements provide the minimum mass (m sin i) of an unseen companion around a host star with an unknown orbital inclination (i).Proper mo-⋆ Based on observations collected with SPHERE mounted on the VLT at Paranal Observatory (ESO, Chile) under program 0103.C-0199(A) (PI: Rickman), and 105.20SZ.001(PI: Rickman) as well as observations collected with the CORALIE spectrograph mounted on the 1.2 m Swiss telescope at La Silla Observatory.⋆⋆ The radial velocity measurements, reduced images, and additional data products discussed in this paper are available on the DACE web platform at https://dace.unige.ch/.and the links to individual targets are listed in Appendix A.
tions measured from the combination of Gaia (Gaia Collaboration et al. 2016a) and Hipparcos (Perryman et al. 1997) break the degeneracy of the unknown orbital inclination, giving the dynamical mass of the companion.Furthermore, in cases where direct imaging of such companions is possible, we gain additional astrometry relative to their host star that can tightly constrain the orbital parameters and therefore precise mass measurements.Relative astrometry provides not only additional constraints on the orbit of the system and the dynamical mass of the companion, but it also provides photometry, revealing an estimate of the spectral type of a detected companion.
Directly detecting such substellar companions, however, does not come without its challenges.Previously, many direct imaging searches have adopted a "blind" survey approach, but the detection rate for such an approach has been low (e.g., Bowler & Nielsen 2018).To unveil the substellar companion population, an approach of target selection using precursor measurements is fundamental in increasing the efficiency of direct imaging of substellar companions.
In this work, we assess the feasibility of directly imaging companions that show indirect detection with radial velocities and absolute astrometry by performing an orbital fit.The predicted relative separation and estimate of the dynamical mass are assessed against the expected contrast for VLT/SPHERE, as demonstrated in Rickman et al. (2022), and we take coronagraphic imaging observations to confirm the detection of these objects directly.
As a result of adopting this methodology, we present in this paper the direct detection of two new benchmark brown dwarfs, HD 112863 B and HD 206505 B, and we give their dynamical masses.These targets join the short but increasing list of substellar objects with known dynamical masses (e.g.Cheetham et al. 2018b;Bowler et al. 2018;Brandt et al. 2019;Maire et al. 2020;Rickman et al. 2020;Brandt et al. 2021a;Bonavita et al. 2022;Franson et al. 2022Franson et al. , 2023)).The two brown dwarfs that we present in this paper sit right at the boundary of the hydrogenburning limit of ∼ 75 − 80 M Jup (Burrows et al. 2001;Saumon & Marley 2008;Baraffe et al. 2015;Dupuy & Liu 2017;Fernandes et al. 2019), making them key objects to study the boundary between what is considered a brown dwarf and what is considered a very low-mass star.Additionally, these objects can be used to empirically validate mass-luminosity-age relations of substellar objects, a crucial step in understanding evolutionary models of brown dwarfs more broadly.
The paper is organized as follows.The properties of the host stars are outlined in Section 2. In Section 3, we present the radial velocities, astrometry, and direct imaging observations and data reduction.In Sections 4, we present the detections and orbital solutions of HD 112863 B and HD 206505 B. A brief discussion on the implications of our findings and the conclusions of this paper are presented in Section 6.
Characteristics of stellar hosts
The spectral types and the color indices of the primary stars were obtained from the Hipparcos catalog (Perryman et al. 1997).The V T band magnitudes were taken from the Tycho-2 catalog (Høg et al. 2000).The luminosities L and effective temperatures (T eff ) for the two host stars are from the Gaia data release 2 (DR2; Gaia Collaboration et al. 2018), while the astrometric parallaxes (π) are from the Gaia early data release 3 (eDR3; Gaia Collaboration et al. 2021).
The v sin(i) of HD 112863 A and HD 206505 A were calculated through the calibration of the width of the cross-correlation function (CCF) of the CORALIE spectrograph as described in Santos et al. (2001) and Marmier (2014).The stellar surface gravities (log g) and metallicity ([Fe/H]) values are taken from Mata Sánchez et al. (2014).
The ages and masses of the two primary stars were determined using the Geneva stellar isochrones (Ekström et al. 2012;Georgy et al. 2013), which utilizes a Markov chain Monte Carlo (MCMC) approach. 1We ran the MCMC with a chain length of 100,000 in both cases and with the Gaussian priors input for the [Fe/H], T eff , and V-band magnitude as listed in Table 1.The resulting values for the mass, radius, and age of the primary stars are shown in Table 1.
Radial velocities
We used radial velocity measurements taken from the CORALIE survey (Queloz et al. 2000;Udry et al. 2000), which is an ongoing radial velocity survey with a wealth of data taken in the Notes. (a) Parameters taken from Houk & Cowley (1975) and Houk & Swift (1999). (b) Parameters taken from the Tycho-2 catalog (Høg et al. 2000). (c) Parameters taken from the Hipparcos catalog (Perryman et al. 1997). (d) Parameters taken from Gaia Collaboration (2020).The CORALIE spectrograph underwent two major upgrades, one in June 2007 (Ségransan et al. 2010) and the second in November 2014, to improve its overall performance.These upgrades introduced small offsets in the measured radial velocities.Due to this, we treated radial velocity data from the CORALIE spectrograph as three separate instruments, referring to the original CORALIE spectrograph, the 2007 upgrade, and the 2014 upgrade as CORALIE-98 (C98), CORALIE-07 (C07), and CORALIE-14 (C14), respectively.All the data products presented in this paper are available at the Data and Analysis Center for Exoplanets (DACE).2The radial velocity data were reduced using the CORALIE automated pipeline (Weber et al. 2000).This pipeline measures the CCF, the full width at half maximum (FWHM), the bisector, and the H α chromospheric activity indicator.We used these indicators to ensure that any observed periodic signals are not due to any stellar activity of the host star, which could mimic the expected radial velocity signal of an unseen companion.This is described in more detail in Appendix E. We also checked for the presence of any additional planetary signals at shorter orbital periods in the radial velocity data, and we did not find evidence of any additional companions in either system, as shown in Apch/radialVelocities/?pattern=HD112863 and https://dace.unige.ch/radialVelocities/?pattern=HD206505, respectively.pendix E. From the over 20-year baseline of CORALIE radial velocity data, we selected candidates that show signs of hosting long-period companion candidates through either linear or quadratic trends, as shown previously in Rickman et al. (2019), that could potentially be directly detected with high-contrast imaging.
Absolute astrometry
In order to utilize astrometric acceleration information from Hipparcos and Gaia, we used the Hipparcos-Gaia catalog of accelerations (HGCA; Brandt 2021).The HGCA is a cross-calibration of Hipparcos (ESA 1997;van Leeuwen 2007) and Gaia eDR3 (Gaia Collaboration et al. 2016b, 2021;Lindegren et al. 2021) that places both on a common reference frame with calibrated uncertainties.Each star in the HGCA has three proper motions: a Hipparcos proper motion near 1991.25, a Gaia proper motion near 2016.0,and an average proper motion between both epochs, calculated as the positional difference between Hipparcos and Gaia scaled by the time baseline.The long baseline between these missions enabled us to find astrometric accelerators that could be hosting companion candidates and that are able to be directly detected, much like in the case of long baseline radial velocity detections.
Direct imaging
We elected to observe targets that were predicted to be directly detectable from their radial velocity and absolute astrometric measurements.Incorporating the astrometric accelerations from the HGCA into the radial velocity information enables a more comprehensive characterization of the orbital parameters of the stellar companion.The sensitivity of the combination of Hipparcos and Gaia proper motions to orbital periods has been demonstrated for orbital periods extending to several hundreds of years (Brandt 2018(Brandt , 2021)).This means that the expected position of the companion relative to the host star can be well predicted before the direct detection itself, as described in Appendix D.
Knowing the predicted position and therefore the relative angular separation of a companion relative to its host star allows the feasibility of the direct detection to be assessed against the known coronagraphic inner working angle (IWA) as well as the expected contrast ratio against measured contrast curves, as demonstrated in (Rickman et al. 2022).Based on these criteria, we observed HD 112863 and HD 206505 with VLT/SPHERE (Beuzit et al. 2019) via the extreme adaptive optics system at the VLT under programs 0103.C-0199(A) (PI: Rickman) and 105.20SZ.001(PI: Rickman).
The data were reduced using the Geneva Reduction and Analysis Pipeline for High-contrast Imaging of planetary Companions (GRAPHIC; Hagelberg et al. 2016).GRAPHIC performs sky subtraction, flat fielding, bad pixel cleaning, and anamorphic distortion correction (Maire et al. 2016b).We then used principal component analysis (PCA; Soummer et al. 2012;Amara & Quanz 2012) and angular differential imaging (ADI; Marois et al. 2006) on the reduced data to remove point spread function (PSF) residuals.The detection images of both companions for both bands in both epochs are shown in Fig. 1.
The relative astrometry and photometry of the companions were calculated using the negative fake planet injection technique as used in Bonnefoy et al. (2011), and in particular, we followed the same procedure as used in Rickman et al. (2020).The forward models of the PSFs were generated using observations of the target stars through a neutral density filter while not behind the coronagraph and with shorter exposure times than the standard science observations.We therefore scaled the stellar PSFs to correct for these differences in exposure time and filter transmission prior to insertion into the science images. 3Given the large amount of observations of HD 206505 and the brightness of HD 206505 B, the reduced frames of the HD 206505 were cropped and binned prior to PSF insertion in order to reduce computation time while still obtaining precise relative astrometry and photometry.
Since HD 112863 B lies within the 150 mas IWA of SPHERE's H23 and K12 dual-band imaging modes 4 in both epochs of observations (see projected angular separation values (ρ) in Table 2), its flux is attenuated by the coronagraph.As the transmissivity of the coronagraph changes on scales smaller than a PSF, the native companion PSF is therefore distorted, and both astrometric and photometric measurements of the companion are biased.To correct for this effect, we used the SPHERE H23 and K12 coronagraphic transmission profiles (Vigan 2023, private communication) to create radial coronagraphic transmission images and then divided the reduced frames of HD 112863 by these radial coronagraphic transmission images.We then performed the same process as described above of injecting scaled stellar PSF images into the coronagraphic transmission-corrected reduced frames.The effect of this was much less acute for the second HD 112863 epoch in the K12 band, as the separation is greater than the first epoch in the H23 band.Our coronagraph transmission-correction approach is discussed in more detail in Ceva et al. (in prep).
The separation and position angle detector positions were then converted into on-sky separation and position angles by accounting for the plate scale of each band, the anamorphic distortion, the true north offset, and the pupil offset using the values found in Maire et al. (2016b).Additionally, a systematic uncertainty of ±3 mas for the positions of the target stars was folded into the positional errors of the companions (Vigan et al. 2016).The resulting astrometry and photometry of both companions for both bands in both epochs are shown in Table 2.We note that the uncertainty on the separations for nearly all of the measurements is similar (∼ 3 mas).This is because the errors of the separation values prior to conversion to on-sky values are on the scale of ∼ 0.1 − 0.5 mas such that the ±3 mas uncertainty of the positions of the target stars become the dominant term in the final on-sky separation error of the companions.
Orbital solutions
To obtain orbital fits of our observed systems, we used the orbitfitting code orvara (Brandt et al. 2021b), which has the capability of combining radial velocity data with relative astrometry from direct imaging and absolute astrometry from Hipparcos and Gaia using a comprehensive MCMC approach. 5In this section, we describe the fitted orbital solutions to HD 112863 and HD 206505.The output of the orbital parameters determined by fitting for the radial velocities, the astrometric accelerations from HGCA, and the relative astrometry from VLT/SPHERE imaging are shown in Table 3.
HD 112863 (HIP 63419)
The star HD 112863 was monitored with the CORALIE radial velocity survey between March 1999 and June 2023, covering 24 years of observations with 132 radial velocity measurements in total and providing a significant orbital phase coverage of 3 The corrections for the Neutral Density (ND) filter transmission make use of the filter curves available at https: //www.eso.org/sci/facilities/paranal/instruments/sphere/inst/filters.html. 4From the VLT SPHERE User Manual, 16th release 5 orvara can be accessed via GitHub here: https://github.com/t-brandt/orvara.
the radial velocities, as shown in Fig. 2. The RVs and astrometric acceleration joint analysis were previously presented in Barbato et al. (2023).Here, we present the first direct detection of HD 112863 B and updated orbital parameters that incorporate new relative astrometry from these observations along with the RVs and HGCA data as presented in Barbato et al. (2023).
We directly imaged HD 112863 B with VLT/SPHERE on 2021-04-07 with IRDIS in H2 and H3 bands as part of program 105.20SZ.001(PI: Rickman), as shown in Fig. 1, with a total integration time of 4096 seconds.The detection of this object is right at the limit of the IWA of the coronagraph of SPHERE.We measured a projected angular separation of 105.3 ± 3.2 mas and 106.6 ± 3.1 mas for the H2 and H3 bands, respectively, as outlined in Table 2, which to date is the smallest separation companion directly imaged with SPHERE/IRDIS coronagraphy.We incorporated the coronagraphic transmission correction (see Section 3.3) when calculating the astrometry in order to ensure that our values are not biased by any PSF distortion due to the coronagraph.We conducted further follow-up imaging of HD 112863 B on 2022-01-30 in the K1 and K2 bands as part of program 105.20SZ.001(PI: Rickman), also shown in Fig. 1, with a total integration time of 4864 seconds.This provided additional relative astrometry that could further constrain the orbital parameters and therefore the dynamical mass of the brown dwarf companion as well as extend the photometric baseline, which improves the determination of the spectral type.This is discussed in Section 5.
We performed an orbit fit that combined absolute astrometry from Gaia (Gaia Collaboration et al. 2016b) and Hipparcos (ESA 1997) that made use of the HGCA as described in Section 3.2 along with the CORALIE radial velocities (Barbato et al. 2023) and the relative astrometry measured through direct imaging, as shown in Table 2.For the fit, we used the orvara code, employing a parallel-tempered MCMC with 15 temperatures.For each temperature, we used 100 walkers with 40,000 steps per walker thinned by a factor of 50.We used a log-flat prior on the host star mass in order to also measure the mass dynamically.
As the orbit phase is well sampled by the radial velocity measurements, we were able to constrain an orbital period of 21.59 ± 0.05 years, which is in agreement with the orbital period of 21.61 ± 0.04 years from Barbato et al. (2023).We measured the dynamical mass of the primary to be M host = 0.89 +0.05 −0.04 M ⊙ , which is in agreement with the from Barbato et al. (2023) of 0.81 ± 0.05 M ⊙ , which was derived using the stellar spectral energy distribution (SED).The dynamical mass of the primary star is also in agreement with the isochronal mass measured in this paper of 0.85 ± 0.02 M ⊙ .The dynamical mass measurement of the BD companion HD 112863 B is M comp = 77.1 +2.9 −2.8 M Jup , which is also in agreement with the RV and astrometrically derived mass from Barbato et al. (2023) of 73.10 ± 3.20 M Jup .Unlike the analysis in this paper, Barbato et al. (2023) imposed a Gaussian prior on the primary stellar mass for the orbital fit.Despite this, the additional relative astrometric data from imaging adds constraints to the orbital fit that give a dynamical mass of the companion at a marginally higher level of precision than reported in Barbato et al. (2023) without relying on a constrained prior of the primary star, meaning that we calculated the dynamical mass of the primary as well.
The resulting orbital fits are shown in Fig. 2, and the full orbital parameters are listed in Table 3, Semimajor axis mas 206.5 +3.6 −3.5 317.5 +3.9 −3.6 Mass ratio q M comp /M host 0.083 +0.0015The RVs and astrometric acceleration joint analysis were previously presented in Barbato et al. (2023).Here, we present the first direct detection of HD 206505 B and updated orbital parameters that incorporate new relative astrometry from these observations along with the RVs and HGCA data as presented in Barbato et al. (2023).
We directly imaged HD 206505 B with VLT/SPHERE in the H2 and H3 bands on 2019-08-06 as part of program 0103.C-0199(A) (PI: Rickman) and with a total integration time of 8192 seconds.Additional follow-up imaging was performed on 2021-07-01 in the K1 and K2 bands with VLT/SPHERE as part of program 105.20SZ.001(PI: Rickman) and with a total integration time of 6144 seconds.The resulting images are shown in Fig. 1.
Even though the orbital phase is not fully covered by the radial velocity measurements of HD 206505, there are ample measurements from when HD 206505 B passed through periastron, where the radial velocity is at a maximum (see Fig. 3), which provides a strong constraint on the eccentricity and a relatively good constraint on the orbital period.Using the radial velocities as well as the astrometry from the HGCA and the relative astrometry from imaging, we performed an orbit fit using orvara as described in Section 4.1.As for the case of HD 112863, we used a parallel-tempered MCMC with 15 temperatures; for each temperature, we used 100 walkers with 40,000 steps per walker thinned by a factor of 50.We used a log-flat prior on the host star mass in order to also measure the mass dynamically.
From this orbital fit, we determined an orbital period of 50.9 +1.7 −1.5 years which is in agreement with the orbital period of 51.61 ± 0.03 years from Barbato et al. (2023).We measured the dynamical mass of the primary to be M host = 0.97 ± 0.03 M ⊙ , which is in agreement with Barbato et al. (2023) of 0.88 ± 0.06 M ⊙ derived using the stellar SED.The dynamical mass is also in agreement with the isochronal mass measured in this paper of 0.93 ± 0.02 M ⊙ .We measured the dynamical mass of the companion to be M comp = 79.8 ± 1.8 M Jup , which is also in agreement with the RV and the astrometrically derived mass from Barbato et al. (2023) of 75.60 ± 3.30 M Jup .As mentioned in the previous section, we did not impose a Gaussian prior on the primary star mass when performing the joint orbital analysis.Despite this fact, the additional relative astrometric information from direct imaging yielded a more precise dynamical mass on the companion than reported in Barbato et al. (2023).For HD 206505, the gain in precision in the dynamical mass is greater than for HD 112863, as there is less orbital phase coverage of HD 206505 from the RVs alone, and therefore the relative astrometry provides more of a constraint.
The resulting orbital fits are shown in Fig. 3, and the full orbital parameters are listed in Table 3 We also include some notable substellar companions (star symbols).The field brown dwarfs are color-coded by spectral classification.
Discussion
We calculated the absolute flux of the companions by integrating a BT-NextGen model spectra (Allard et al. 2012) of the target stars based on the stellar parameters given in Table 1 through the filters and then multiplying the contrast values in Table 2 while also accounting for the SPHERE filter transmission curves as described in Section 3.3.These model spectra were scaled by the distances and radii of the target stars (Table 1).
The absolute flux values were used to generate the colormagnitude diagram (CMD) shown in Fig. 4. In the figure, we also show a selection of field brown dwarfs and highlight some notable substellar companions that have previously been imaged with SPHERE.The field brown dwarfs shown in Fig. 4 are from the Brown Dwarf Spectral Survey (McLean et al. 2003(McLean et al. , 2007)), the L & T Dwarf Archive (Golimowski et al. 2004;Knapp et al. 2004;Chiu et al. 2006), and the IRTF Spectral Library (Cushing et al. 2005;Rayner et al. 2009), with updated distances.We included only those objects with parallactic distance measurements.Distances are from the Gaia data releases eDR3 or DR2 (Gaia Collaboration et al. 2016b, 2018, 2021) when available and from brown dwarf parallax studies otherwise (Dupuy & Liu 2012;Faherty et al. 2012;Smart et al. 2013;Tinney et al. 2014;Liu et al. 2016;Dupuy & Liu 2017;Smart et al. 2018;Best et al. 2020).Photometry for the highlighted substellar companions of interest is taken from Maire et al. (2016a); Chauvin et al. (2017); Cheetham et al. (2018a); Maire et al. (2020); Rickman et al. (2020); Bohn et al. (2020a,b).
The companions HD 206505 B and HD 112863 B are "twin" objects, and both objects are consistent with those of early-L field brown dwarfs, which are well above the L-T transition.Both companions are fainter than any of the M-type objects plotted in Fig. 4, hinting that they are likely to be of substellar nature.Both brown dwarfs have very similar K band absolute magnitudes (Fig. 4, see also Table 2); this is consistent with the fact that both companions have very similar dynamical masses, ages, and host star metallicities.HD 206505 B is toward the blue end of the distribution of field brown dwarfs, which agrees with predictions for a relatively massive, relatively old brown dwarf that has undergone significant cooling and contraction since formation.HD 112863 B is somewhat redder than expected for a fieldage brown dwarf, which could potentially be due to the object having a lower surface gravity or hosting a dusty circumplanetary disk.Further investigations into the spectroscopic properties of these objects are being explored in detail in a follow-up paper (Ceva et al. in prep.) that will include verifying the red nature of HD 112863 B with wider wavelength coverage and higher resolution from SPHERE/IFS data.
Summary
We report the direct detection of two new benchmark brown dwarfs respectively orbiting HD 112863 and HD 206505.Both companions have dynamical masses close to the stellarsubstellar boundary.The dynamical masses of HD 112863 B and HD 206505 B are 77.1 +2.9 −2.8 M Jup and 79.8±1.8M Jup , respectively.These masses were calculated through orbit fitting with orvara by combining the relative astrometry determined from direct imaging with VLT/SPHERE with radial velocity measurements from CORALIE as well as with astrometry from Hipparcos and Gaia.We measured the precise model-independent masses of these brown dwarf companions, which are vital benchmark objects to probe the stellar-substellar boundary.These objects can be used to empirically validate mass-luminosity-age relations of substellar objects that are degenerate in nature and contain a number of underlying assumptions, and they join a small but growing list of known benchmark substellar companions (e.g., Cheetham et al. 2018b;Peretti et al. 2019;Rickman et al. 2020;Maire et al. 2020;Bonavita et al. 2022;Franson et al. 2022Franson et al. , 2023) ) that serve as key calibrators of brown dwarf evolutionary models.Furthermore, the result of these direct detections validates the strategy for direct imaging of exoplanets and brown dwarfs by using precursor information such as radial velocities and/or astrometry to select candidates based on the potential for direct detectability, increasing the detection efficiency of such objects, as demonstrated in Fig. D.1.
The dynamical masses of the brown dwarf companions are both in agreement with the recently published values from Barbato et al. (2023) that were calculated from the radial velocity measurements and proper motions.The relative astrometry derived from the direct detections in this paper provides further constraints on the orbital parameters both in terms of measuring a dynamical mass on each of the primary stars and determining a higher level of precision on the companion dynamical masses.
As we did not use any informed priors on the host star masses for the orbital fits, we were able to measure the dynamical masses of the host stars.This approach is unlike that of Bar-bato et al. (2023), who imposed a Gaussian prior on the host stars in order to perform the orbital fit.The host star dynamical masses we report are in agreement with the masses determined by Barbato et al. (2023) using the stellar SEDs as well as the stellar masses measured in this paper using stellar isochrones, as described in Section 2 and shown in Table 1.We were also able to measure the dynamical masses of the companions to a higher level of precision, with an improvement from a ∼ 4.4% error reported in Barbato et al. (2023) for both HD 112863 B and HD 206505 B to errors of 3.6% and 2.2% for HD 112863 B and HD 206505 B, respectively.Using the H and K band photometry from VLT/SPHERE, we determine that both HD 112863 B and HD 206505 B are early-mid L-types, as shown in Fig. 4.More extensive followup of the spectroscopic properties of these two new benchmark brown dwarfs will be presented in Ceva et al. (in prep.) in order to explore the nature of these objects in more detail and to test against brown dwarf evolutionary models.
(e) Parameters taken fromMurgas et al. (2013).(f)Parameters taken from Holmberg et al. (2009). (g) Parameters taken from Gaia early data release 3 (Gaia Collaboration et al. 2021). (h) Parameters taken from Gaia data release 2 (Gaia Collaboration et al. 2018). (i) Parameters taken from Mata Sánchez et al. (2014). (j) Parameters derived using CORALIE CCF.southern hemisphere since June 1998.The survey utilizes the CORALIE spectrograph on the Swiss/Euler 1.2 m telescope at La Silla Observatory in Chile.The radial velocity survey includes a sample of 1647 main-sequence stars within 50 pc of the Sun.
Fig. 1 :
Fig. 1: High-contrast images of HD 112863 B and HD 206505 B taken with VLT/SPHERE IRDIS coronagraphy.The date of each image is shown in each sub-caption in the format YYYY-MM-DD.The filter used is shown in each image.The primary star in each image is masked behind the white circle where the coronagraph is.Top: VLT/SPHERE images of HD 112863 B. Bottom: VLT/SPHERE images of HD 206505 B.
Fig. 2 :
Fig. 2: Orbit fits of HD 112863 using the orbit-fitting code orvara.Top left: Radial velocity orbit induced by HD 112863 B over a full orbital period.Shown are the radial velocity data of COR-98 (blue points), COR-07 (yellow points), and COR-14 (green points).The thick line shows the highest likelihood fit; the thin colored lines show 500 orbits drawn randomly from the posterior distribution.Top right: Relative astrometric orbit of HD 112863 B relative to its host star in right ascension (∆α * = ∆α cos δ) and declination (∆δ).The thick black line represents the highest likelihood orbit; the thin colored lines represent 500 orbits drawn randomly from the posterior distribution.Dark purple corresponds to a low companion mass, and light yellow corresponds to a high companion mass.The dotted black line shows the , and the arrow at the periastron passage shows the direction of the orbit.The dashed line indicates the line of nodes.Predicted past and future relative astrometric points are shown by black circles with their respective years, while the observed relative astrometric point from VLT/SPHERE data is shown by the blue-filled data point, where the measurement error is smaller than the plotted symbol.Bottom: Acceleration induced by the companion on the host star as measured from absolute astrometry from Hipparcos and Gaia.The thick black line represents the highest likelihood orbit; the thin colored lines are 500 orbits drawn randomly from the posterior distribution.The residuals of the proper motions are shown in the bottom panels.
with the posteriors from the MCMC shown in Fig. B.1.The relative astrometry in terms of projected angular separation and position angle are shown in Fig. C.1.
The star HD 206505 was monitored with the CORALIE survey between October 2001 and July 2023, covering 22 years of observations with 91 radial velocity measurements in total, as shown in Fig.3.Using the radial velocity measurements and the astrometric information from the HGCA, we were able to predict the relative astrometry of the companion as shown inFig.D.1.
Fig. 4 :
Fig. 4: Color-magnitude diagram showing HD 112863 B and HD 206505 B (black squares) in comparison to the population of field brown dwarfs (circle symbols).We also include some notable substellar companions (star symbols).The field brown dwarfs are color-coded by spectral classification.
Fig. B.1: Marginalized 1D and 2D posterior distributions for selected orbital parameters of HD 112863 B corresponding to the fit of the RV, relative astrometry from direct imaging observations, and absolute astrometry from Hipparcos and Gaia with the use of orvara.Confidence intervals at 15.85%, 50.0%, 84.15% are overplotted on the 1D posterior distributions; the median ±1σ values are given at the top of each 1D distribution.The 1, 2, and 3σ contour levels are overplotted on the 2D posterior distribution.
Fig
Fig. D.1: Predicted relative astrometric positions for HD 112863 B (left) and HD 206505 B (right) from orbital fits using the radial velocity and HGCA data relative to their host stars in right ascension (∆α * = ∆α cos δ) and declination (∆δ).The blue star represents the primary star, and the gray star shows the position of the detected companion relative to the host star in the epoch of the first direct detection of each companion (2021.3 and 2019.6 for HD 112863 B and HD 206505 B respectively.)The contour lines represent the positions to 1, 2, and 3 σ predicted positions of each companion calculated for the time of the first direct observations of each target.
Fig. E. 1 :
Fig. E.1: Spectroscopic analysis of HD 112863 A using the CCF bisector.Top: Measured CCF bisector of the host star HD 112863 as a function of time.Bottom: Corresponding Lomb-Scargle periodogram.The two black horizontal lines show the 1% (bottom) and the 0.1% (top) FAPs and demonstrate that there are no significant periodic signals in the CCF bisector indicator.
Table 1 :
Observed and inferred stellar parameters for host stars HD 112863 A and HD 206505 A.
Table 2 :
Relative astrometry and photometry of HD 112863 B and HD 206505 B. We note that the values listed for HD 112863 B were obtained after correcting for the attenuation by the coronagraph that occurs within the IWA of 150 mas.
Table 3 :
(Brandt et al. 2021barlo orbital posteriors for the orbital fits of each system using orvara(Brandt et al. 2021b). | 7,733.6 | 2024-01-18T00:00:00.000 | [
"Physics"
] |
Pre-Lightning Strikes and Aircraft Electrostatics
An electric storm is a source of electrostatic charge that can induce high current and electric potential on a surface of an aircraft through direct effects. It can also be a source of radiated electromagnetic pulses on an aircraft in flight through indirect effects. Both direct and indirect effects can have adverse effects on flight safety. Thus, it is vital to gain good understanding of the pre-lightning strike and the electrical characteristics of a thunderstorm in order to quantify lightning threats to aircraft. Since lightning parameters are not easily measurable, predictive modeling can be applied to model the pre-lightning strike and aircraft electrostatics. In this paper, we applied the 3D dipole model in predicting the electrostatics build up along an aircraft extremities as it approaches an ambient electric field of a charged cloud. The results give a quantitative evaluation of the threats during pre-lightning strikes and electrostatics buildup on the aircraft. This is vital in designing and coordinating shielding measures to mitigate the threats and to harden the protection systems for aircraft.
Introduction
As an aircraft becomes part of a natural lightning discharge process, the direct and indirect effects due to lightning strikes are recognized as a threat to flight safety.The severity of the threat is heightened further for modern aircrafts made up of composite materials and equipped with the latest digital state of the art technologies.Such modern aircraft design is a doubleedged sword.On one side is the advantage of composite materials that provide cost, weight, and safety advantages, and the ease of flight control through the state of the art of technologies in control, communications, and command systems.While on the other side are the issues of dissipating the electric charges and or current induced away from a non-conductive surface and the susceptibility to electromagnetic interferences (EMI) induced through indirect effects of lightning radiated electromagnetic pulses (LEMPs).
In this paper, we apply a novel and innovative computational tool using the 3-dimensional (3D) dipole analysis to determine the voltage, charges, and electric field induced by lightning on an A380 airbus.We have chosen the airbus to expound on the previous work highlighted in [1].
Thundercloud and aircraft electrostatics
Cloud electrification process begins with a charge build up and the separation of the charges of opposite polarities within the cloud [2].The cloud electrification is simply a result of electrostatic charges of different polarities that buildup within the cloud.There are four major stages of the lightning stroke which are the pre-breakdown, the leader, the attachment process as it reaches an object on the ground, and the return stroke [3].
When an aircraft enters an ambient electric of a thundercloud during the pre-breakdown stages, it will modify the electric field [4].The entry of an aircraft into an ambient electric field can be regarded as a sudden introduction of a conductor into an electric field which intensifies the local electric fields [4].This enhances the local electric field buildup.The electric field enhancement will reach maxima along the aircraft extremities that are oriented towards the ambient fields.Typically for an ambient electric field of 100 kV/m, the fields at the extremities such as the radome and the tip of the stabilizers, and rudder could be enhanced to 1 MV/m [5].The charging of the aircraft produces a potential gradient between it and its surroundings.The potential gradient builds up to a sufficient level that corona discharge results.The corona discharges occur at the extremities of the aircraft and initiate a bi-directional leader that connects the cloud charge electrically to ground.There are two distinct phases to lightning-aircraft interaction.First is the development of streamers and leader sets develop at the field enhanced parts of the aircraft.The second phase is the high currents produced by first and subsequent return strokes.The second phase therefore induces the high energy transient current pulse, subsequent re-strikes and the long duration of the slow currents.
3D modelling of airbus A380 aircraft
As the potential, the charges, and the electric field along the aircraft surface are all unknown, a novel approach based on Eq.( 1) to Eq.( 8) was derived in order to analyse the pre-breakdown parameters of aircraft voltage, charges, and the electric field at various altitudes.The aircraft is mapped into dipoles indicated with small dots as shown in Figure 1 and Figure 2. The potential and the charges are computed using the postulation that an object placed in an electric field will assume the potential at that point.Thus, as an aircraft enters a charged thundercloud, it will assume the potential at that point and that all points along the surface will be at equipontential point.This postulation makes the analysis easier to compute knowing the potential coefficients, the cloud charges, and the cloud voltage.A set of equations were derived to compute the charges and the potentials of the dipoles on the aircraft surface.
The dipole placements as illustrated in Figure 1 are for an Airbus A380 near ground at a height of 900m.The charged cloud center is assumed to be at a height of 1000 m above ground while the aircraft is almost within the cloud.Note that Figure 1 is not drawn to scale and only shows the two dimensional dipoles D0 to D5 where D0 is the cloud dipole.The 3D arrangement showing the wings, stabilizers, and engines are shown in Figure 2.For convenience and ease of calculation, the aircraft is assumed to be directly below the thundercloud with the mid fuselage taken as the reference point.The dipoles are equally spaced along the fuselage with distances to the right taken as positive while that to the left of the reference point are taken as negative.The aircraft top surface is at 900 m high while the bottom surface is 890.82 m indicating a separation distance of the dipoles along the fuselage to be 9.18 m high.The rudder is protruding another 13.17 m from the top surface.Figure 2 shows the 3D arrangement of the dipoles.The 3D dipole model is used to calculate the aircraft voltage, the charge along the aircraft, and the electric field induced by these charges using the equations given in Eq. (1) through Eq.(8).The aircraft voltage is given as where k is a constant, q AD is the aircraft dipole charge, and V A is the aircraft voltage.The terms r + and r -are the distance from the positive and negative mono poles.and their images to a selected point on the aircraft surface.In Eq. (1) V A and q AD are unknown.The only known terms in the equation are the distances from the dipole to a selected coordinate or point on the surface of the aircraft and the separation distances of the mono poles which is determined from the aircraft geometry and the altitude of the aircraft.As the aircraft is at an equipotential surface, V A is the same at all points which makes the analysis easier to solve using the substitution method.
The charge calculation requires the distances of the dipoles and their images and the selected point on the surface of the aircraft.Since the aircraft geometry is in 3D, a three-dimensional distances (x, y, z) are determined as defined in the Eq. ( 2) from the aircraft geometry and its altitude.That is, for a particular point, say p1, on an aircraft surface, where k=1 to 2 where 1 is the positive mono pole and 2 is the negative mono pole that make up the dipole, the distance from the center of the dipole to a point p 1 on the surface of the aircraft is given by Eq. ( 2).The angle between the mono poles and the point p1 is given by Eq. ( 3).The same equation is also used in the calculation for the images dipoles, however with different variable used for the aircraft dipoles and the dipole images.
Thus, for an aircraft dipole and its image, the general term for the coefficient of potential for the dipole charge is given in Eq. ( 4): where q ADcoeff is the coefficient of the charge due to the dipole k on the surface of the aircraft and its image l within the earth.Thus, the voltage V A at point p 1 on an aircraft surface due to n number of dipole charges is given by the equation
A similar procedure is applied for the next point p 2 on the surface of the aircraft to obtain the voltage equation as shown in Eq. (7).
H S (7)
The process of solving a set of linear equations by substitution method is applied here by equating for a charge Q n in the equation for a point p n in terms of the other Q variables and substituting that value in the next equation for point p n+1 .The procedure is repeated for (n + 1) points thus eliminating the charges (Q) terms.The final equation is simply a single equation comprising of the charge coefficients and the aircraft voltage V A which is the only unknown term.Thus, from the computed aircraft voltage, the charges and the electric fields can be easily determined.The electric field is computed using Eq. ( 8).The computational process is quite tedious as it requires n+1 sets of equations n+1 points for n number of mono poles.
Simulation of pre-breakdown electric fields
The application of the computational method in calculating the aircraft voltage and electric fields and the surface charges for an A380 airbus at various altitudes and distances away from a -50 MV charged clouds are shown in Table 1 through Table 3. Table 1 is for an airbus A380 aircraft at an altitude of 900 m just below a negatively charged cloud center of 1000 m altitude.The aircraft voltage computed by the substitution method is -42.6227x 10 6 Volts.This is high as expected as the aircraft is almost within the charged cloud center of a potential of -50 MV.The dipole electric fields calculated along the A380 aircraft shows high electric fields at the rudder tip of 1.394 x 10 6 V/m and the left horizontal stabilizer of 1.137 x 10 6 V/m.Both these fields are within the specified breakdown electric field of 3 x 10 6 V/m as defined in [2] for dry air at sea level.These two extremities with the highest electric fields are most likely to initiate a bidirectional leaders into the charged cloud center to trigger a lightning flash connecting through the other extremities to ground.With a high electric field at the rudder tip, the most likely swept area would be along the fuselage connecting the radome to ground.That is, the rudder tips becomes the entry point while the radome becomes the exit point as defined in the normative zoning standards [7].
Similarly, the results shown in Table 2 is that for an airbus A380 at an altitude of 900 m but 1.5 km away from the charged cloud center.The aircraft potential is -1.4247 x 10 6 Volts.Comparing the dipole electric fields calculated, it is high for the rudder tip of 2.566 x 10 5 V/m and the left and right horizontal stabilizers of 1.338 x 10 5 V/m and 1.186 x 10 5 V/m respectively.The electric fields are below the breakdown level however, as the aircraft Left horizontal stabilizer (D12) 2.07244 x 10 -5 1.032 x 10 3 Right horizontal stabilizer (D13) 5.45472 x 10 -5 1.166 10 3
Conclusion
The application of the 3D dipole computational analysis in predicting lightning aircraft pre-breakdown electrostatics has been observed.Among the significant observations in this research, are the high electric field buildup at the aircraft extremities as an aircraft approaches the charged cloud center.The results obtained shows the electric fields at the aircraft extremities reach the breakdown electric field of 3 x 10 6 V/m.This highlights an important quantitative understanding of the phenomena of pre-lightning strike and aircraft electrostatics that can be applied in protection and shielding coordination to mitigate the effects of lightning strike on commercial and military aircraft.Modern aircraft industries are employing non-metallic structures and highly digital and computerized control technologies in aircraft command and control systems that are susceptible to failures and damages if no extra protection against severity of lightning are in place.Thus, aircraft industries need an improved definition of the threats that lightning poses in order to continue to drive the protection standards that can render safety of aircraft.
Figure 1
Figure 1Dipole placements along the aircraft surface[6]
Figure 2 A
Figure 2 A 3D arrangement of the aircraft dipoles showing the engines and the wings [6] | 2,889.6 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Physics",
"Environmental Science"
] |
Student-focused virtual histology education: Do new scenarios and digital technology matter?
This article was migrated. The article was marked as recommended. Innovative changes have become a critical part of teaching when resources are limited. In this study, we examined whether the student-oriented teaching method, when powered by virtual microscopy, improves histology learning compared to traditional microscope-based studies. Anonymous and voluntary post-course surveys were administered to students and essays were processed for content analysis. Google Analytics was used to obtain accurate Internet usage monitoring for WEBMICROSCOPE®. Using SPSS statistics, the examination scores for 2016 were compared to those of previous year, when the course was taught with a traditional-microscope-based model. The results demonstrated that the new teaching scenario was an effective tool, based on the mean examination scores in 2016 compared to the identical groups in 2015. The survey analysis showed that the students benefited more from using WEBMICROSCOPE® and that they frequently gained access to the Web server when they were not in class. The new scenario helped clarify the concept of histology for most of the students and was generally appreciated during teamwork-based histology classes. Students perceived that the use of the digital technology significantly influenced their confidence in learning the fundamentals of histology. In addition, changing to the new teaching scenario powered by WEBMICROSCOPE® improved the students’ motivation to participate in discussions and better understand the concept of Histology between the 2015 and 2016 academic years. Finally, these changes all had a positive impact on the students’ attention and satisfaction.
Introduction
Together with biochemistry and physiology, anatomy is one of the basic sciences taught in the medical curriculum (Bergman and Goeij 2010).For many clinical specialties, a long-lasting familiarity with macroscopic and microscopic anatomy is indispensable to guarantee safe and efficient everyday clinical work (Fasel et al. 2016).Millennium imaging technologies have become an important part of teaching the concepts of human histology and pathology in many medical schools.The emergence of new digital technologies has started to replace traditional information communication in basic science education because the current generation of graduates has been immersed in technology since their early school years and thus have high expectations regarding digital resources.Educators should especially support their proficiency in problem solving in technology-rich environments (Breslauer et al. 2006;Moreno-Walton et al. 2009;Van Nuland et al. 2017;Sharma and Kamal 2006).The traditional microscope-based, teacher-focused program in histology education is very much dependent on having adequate numbers and equal qualities of human histological slides and requires a number of qualified educators who can provide simultaneous close supervision at individual microscope workstations.Currently, the lack of these factors has driven many medical and dental faculties towards Web-based histology education.In the process, Web-based education is an effective way of interactively teaching younger generations.Moreover, by using virtual slides and Internet-based software, students can better understand the concept of histology, especially when using a high-quality digital slide of the tissue of interest.
The internationally highly recognized tradition of providing classical and didactic teaching impetuses, instruction and guidance regarding medical/dental education started at the Kuopio campus in 1972, when our first anatomy educators decided to provide a stimulating intellectual environment for undergraduate students.As one of the flagship institutions in Finland, the Institute of Biomedicine`s mission for education is still to emphasize programs with a strong correlation between structure and function based on Anatomy, Biochemistry, Histology and Physiology.However, there is now a strong tendency for traditional classroom lessons to share a smaller proportion of contact hours between our medical and dental curricula.
It was a significant challenge for the educators at the Institute of Biomedicine to adopt a new more effective and thoughtprovoking teaching program, particularly in histology, for medical and dental students during the 2016 academic year.Several Web-based microscope applications are currently available, and some of the advanced ones are listed here: (a) NYU Virtual Microscope (NYUVM) from New York University, USA (https://virtualmicroscope.iime.cloud/),(b) VSlides (Pathorama) from Basel University, Switzerland (http://pathorama.ch/vslides/),(c) vMic from Basel University, Switzerland (https://histodb11.usz.ch/index.html),(d) WEBMICROSCOPE ® (Fimmic, Helsinki, Finland (http://demo.WR.net/) and (e) 3DHISTECH, Budapest, Hungary (http://www.3dhistech.com/).During the selection process, we looked at technologies from the perspectives of both students and educators and considered the overall effectiveness of their use in terms of practice.Finally, WEBMICROSCOPE ® combined with a student-focused histology program was adopted for the new histology curriculum at UEF (Figure 1).
The main aim of these changes was to generate active interaction to exchange knowledge about the subject during histology classes in an intellectual environment supervised by teachers that could encourage a sense of inquiry.However, our hidden aim was that students would continue their web-based self-studies during small team discussions outside class hours using the collection of our digitalized histological specimens.
There are plenty of data available that show that students' interest in anatomy/histology/pathology can be evoked by incorporating digital imaging and that online access to histological specimens might improve their understanding of the cellular arrangement of human tissues/structures and their complex relationships during health and disease (Pantanowitz et al. 2012;Foster 2010;Kish et al. 2013;Gu and Ogilvie 2005).However, to the best of our knowledge, there is no such data analysis available on the effects of the changes obtained during the shift from traditional microscope-individual slide based and teacher-focused education to student-focused digital histology education.
Using deidentified data from final written histology exam grades, network profile data and an anonymous-voluntary survey given to students, we report the results of our analysis on the effectiveness of the new histology teaching methods.We also present solid evidence that the overall feedback from the students on student-focused digital histology was highly positive.Considering all of our results and the limitations of this study, we are able to recommend that web-based and student-focused teaching methods be included in histology curricula in the field of health sciences.
Analysis of academic performance
We analyzed the core histology written examination performance of 320 medical and 71 dental students in both scenarios (traditional teacher-focused microscope-based/traditional and student-focused WEBMICROSCOPE ® -based/digital) separately and together.The total population and gender balance in the medical and dental programs for the two years analyzed are shown in Figure 2. The assessment of histological knowledge was based on the scales and grades on the final subject examinations through the Institute's data system.Throughout the analysis, deidentified data were used.Only gender information was merged with the individual scores, so no student privacy issues were considered.Therefore, the study protocol did not require submission to the University Ethics Committee.Data are presented as the mean AE standard error of the mean (SEM) when comparing grades and as percentages when comparing proportions of the highest and/or lowest grades.Data were statistically analyzed using the SPSS software version 23 (IBM Corporation, New York, USA).The data were subjected to Student's t-test (grade averages) and Pearson Chi-Square analysis (grade proportions).p values < 0.05 were considered statistically significant.
Moreover, Google Analytics provided accurate Internet usage monitoring of the WEBMICROSCOPE ® server to understand how the students behaved in "digital scenarios" after they opened the website to drive better performance during histology classes and/or outside the UEF EDUROAM network.Finally, surveys were administered to the 137 students regarding the merits and impact of using WEBMICROSCOPE ® and the reformed histology teaching experiences on student's own preparedness.The descriptive, voluntary, anonymous, post-course survey was conducted in early spring 2017 and powered by a Web-based Kahoot application.The questions in the survey addressed the methods of teaching, quality of teaching, teaching tools (WEBMICROSCOPE ® ) and mode of assessment of students in relation to the histology curriculum.In the survey, students were mainly asked to agree, disagree, or give definitive opinions about the questions raised.Profiles of students who participated in the survey are summarized in table 1.
Learning environments
Professionals participating in histology education during 2015 and 2016, particularly those delivering histology classroom lectures, were the same in both semesters.The major topics for lectures and classrooms were similar and delivered by the same individuals.The number, length and schedules of histology classrooms were identical in both years.All information is available on the UEF website.
Real (2015) histological slides and the establishment of the virtual slide collection (2016) Until 2015, at the Institute of Biotechnology, light microscopes and sets of real slides were used to view sections of interest and learn the concept of histology in the classroom.The quality of slides was appropriate, but there was some discrepancy between different sets of slide boxes from the nature of histological slide preparations.
In early 2016, when the system was changed to virtual Histology, the most representative digital set of teaching slides was created as follows: first, several microscope slides of each tissue of interest were evaluated by professionals, and only the highest-quality and most representative copy of each tissue of interest was selected for digitizing.After this time-and work-consuming selection procedure, slides were sent to Fimmic (Fimmic Oy, Helsinki, Finland) for digitalization.The digital images were transferred to the server, and a Fimmic-powered WEBMICROSCOPE was used in the histology classroom for the academic year of 2016.After the slides were digitalized and transferred to the server, students were able to access these histological sections at any time.
Learning scenario differences between academic semesters 2015 and 2016
During teacher-focused histology education in 2015, when students worked under the guidance of a professor using their own box of slides, no real interactions were initiated by students to better understand the concept of histology.In 2015, a teacher explained and showed the histological structures of the tissue; thereafter, students tried to find the same structures in their own slides by using traditional light microscopes in histology classroom while following the syllabus.The interactions among the students and between the students and the teacher was rather limited when teacher-focused histology education was utilized.
In the student-oriented classrooms used in 2016, students were able to receive the professor's help if it was needed; however, they worked freely and in an individual or collective way throughout the classes.In these student-oriented classrooms, students studied histological samples in groups (team-based learning) using WEBMICROSCOPE ® on big touchscreens (Figure 1) and following the same syllabus used in 2015.In these classrooms, the level of interaction was greatly elevated, which probably enhanced the students' motivation to study histology.
In both years, the syllabus was not a minimum-level summary; instead, its content expected medical and dental graduates to better understand the concept of particular tissues of histology.Furthermore, it has some relevant and immediate applications to their future careers as clinicians and can evoke students' interest in the importance of knowing the principles of histology.
Academic performance monitoring
Learning was measured using an objective written test (exam) to assess the acquisition of histology topics in each of the groups studied.The quality, quantity and type of the questions on the final exam were comparable in these two years analyzed.Both final written tests contained a combination of simple, multiple-choice, and "bell ringer" identificationtype questions.The highest possible score in both years was 50 points, and the grading system was identical.Moreover, these examinations were developed by the same professors.
Results
A shift from traditional and teacher-focused scenarios to digital and student-focused scenarios improved academic performance.Based on the significantly better performance seen in traditional and digital scenarios (Figure 3), we decided to analyze further the academic performance of students by comparing the distribution frequencies of the highest (4-5) and lowest (1-2) grades between scenarios (Figure 4A).The proportion of the highest grades increased significantly in the total population between the traditional and digital scenarios.Moreover, the proportion of the lowest grades decreased significantly, in favor of the digital scenario.Subclass analysis revealed a significant decrease in the proportion of the lowest grades among medical students.However, the proportion of the highest grades did not differ significantly between the scenarios.The proportion of the highest and lowest grades among dental students was not significant, although the trend was similar compared to both the total population and medical students, with a 14.0 percentage point increase in the highest grades and a 16.9 percentage point decrease in the lowest grades, due to the low number of students in each group studied.
Gender differences
The benefit of a digital scenario was significant among female medical students, and there was a similar trend in the female dental student population (Figure 5).The distribution frequencies of the highest and lowest grades were significantly improved in female medical students (Figure 4A).Male students did not benefit from the digital scenario, although male dental students performed better in digital scenarios in terms of the average grades and distribution frequencies.However, the effect was not statistically significant.Overall, females seemed to benefit from the digital scenario more than male students did.
Digital and student-oriented scenarios improved students' academic performance significantly.The next step was to analyze WEBMICROSCOPE ® usage by using Google Analytics and a SEBPQ survey to study the phenomena and reveal possible correlations and explanations.The analysis of the representative survey indicated that 35% of the students with higher grades (4, 5) used the WEBMICROSCOPE ® for up to 4 hours daily while preparing for final written examinations.Further, 10% of this group of students used WEBMICROSCOPE ® daily for more than 4 hours before the histology written exam to increase their knowledge and enhance their self-confidence on histological structure recognition.On the contrary, 42% of those students who passed the exams with the lowest grade used WEBMICRO-SCOPE ® only up to a maximum of 2 hours daily.According to the survey, it is also clear that the overall time spent on WEBMICROSCOPE ® server sites resulted in more accurate performance on the final written histology examination.These data indicate that students who spend more time in "digital scenarios" are more successful on final written exams.These anonymous, voluntary and objective data given by students are nicely in line with the Google Analytics readouts, which show that just before the final written examination, the "Website visiting activity / page views" peaked around 12 -16 May 2016 (Figure 6); further, the total page views during this "high season" numbered well over 1100.
Next, using the survey, we decided to look beyond the volume-based activity data obtained during the course because we wanted to obtain data about the effectiveness of our renewed histology teaching method: how did students work together during teacher-supervised, student-focused histology classes using their tablets/computers and/or classroom touchscreens running WEBMICROSCOPE ® ?Were they interested in generating interactions to study at UEF EDUROAM or in public networks?To examine the utilization of WEBMICROSCOPE ® , we first used Google Analytics and surveys to answer the questions raised above.Google Analytics provided exact activity reports and information regarding where the students had utilized the WEBMICROSCOPE ® (IP addresses) and indicating whether the students preferred to access the histology slides on or off the UEF EDUROAM network.Furthermore, we obtained valuable answers in the survey concerning the shift from teacher-focused patterns to more student-focused methods, particularly how it influenced the students' feelings about and understanding of the concept of histology.Finally, we also monitored the effect of regularity of the WEBMICROSCOPE ® site's utilization on learning for exams in general.First, it is clear from the Google Analytics data that 42% of the total Website hits (6611) were from UEF EDUROAM sites and the rest were from different networks.It is also clear from our survey analysis that students were actively using WEBMICROSCOPE ® during their histology sessions (99%) and initiating small teamwork (97%) problem-focused discussions.Furthermore, students appreciated the supervisors' WEB-based instructions, which, according to the 98% of the survey answers, helped them understand the basic concept of histology.The survey answers are summarized in Figure 7.
A review of data from surveys (Figure 7) indicates that students appreciate expert-guided classes on the subject of histology, which helped them in exercises concerning the analysis, synthesis and conceptualization of the fundamentals of histology.Students appreciated the possibility of managing information and using the educator's reasoning to solve problems.Survey data also revealed that active discussions between students were important to allow them to exchange and exceed their current levels of understanding about histology.According to survey answers, WEBMICROSCOPE ®powered digital histology supervised by an identified expert was a mutually supportive scenario for the students' confidence: 97% of students had positive perceptions of team work during histology classes.These data of the survey run nicely in parallel with the overall improvement in students' academic performance and the educators' individual experiences: teaching the fundamentals of histology may benefit from a shift from teacher-focused/traditional scenarios to digitally directed, WEBMICROSCOPE ® -powered, student-oriented education.
It is also important to highlight that the Google Analytics data show that students outside the UEF EDUROAM network intensively logged into the portal to access WEBMICROSCOPE ® from many different 3/4 G networks (mainly from dnainternet.fi,inet.fi,elisa-mobile.fi)and used various web browsers (e.g., Internet Explorer, Safari, Chrome, Edge and Firefox).Additionally, and importantly, survey analysis demonstrated that no loss of functionality was recognized by students using WEBMICROSCOPE ® both in and outside classes and either in or outside the UEF network.According to the survey, more than 78% of the students gained easy access to the server independent of the network profile (3G/4G/EDUROAM) while using different types of devices (tablets in 49% of the cases, desktops in 41%, and smartphones in 9%).
These data suggest that students in younger generations have a favorable impression of having active access to WEBMICROSCOPE ® from outside the UEF network and of surfing uncomplicatedly in the cloud between organized microscopic slides resources.
Limitations
Because our implementation is rather recent, we have analyzed only the interim semesters of 2015 and 2016.Second, it is important to highlight that the academic quality of the students who experienced the recent change in the histology curriculum may also account for aspects of these successes.However, a pilot experiment regarding this second issue indicated that in the same population of medical students, no improvement could be determined in the anatomy exam grades obtained in 2015 and 2016.Even more interestingly, the average anatomy performance in 2015 was higher than that obtained a year later (2015 mean grade: 3.32; SD: 1.09; 2016 mean grade: 3.26; p<0.05,Student's t-test).This might decrease the possibility of the issue highlighted above; however, it would be worth the effort to monitor such factors in this issue more accurately in future studies.
Discussion
In the current educational case report, we uniquely provide empirical evidence that supports the notion that the curricular innovations adopted in 2016 (Institute of Biomedicine, UEF, Finland), which combine digital web "cloud"-stored online histology imaging and student education, are feasible for dedicated students for the following reasons: 1, educational principles can have a strong impact on students' academic performance and feelings of competence with regard to learning the fundamentals of histology.2, Students easily and actively gained access to new digital tools to analyze histological specimens.3, Our education combined independent, outside-class preparations with in-class, small-group, problem-focused discussions.4, Web-stored, high-quality and occasionally animated histological slides were available for review.5, WEBMICROSCOPE ® allowed both educators and students to "consume" histological images and develop their ideas either during class or via external, web-based discussions.6, This "WAW" effect of our new cloud-based histology education motivated us to build a digital bridge in our education for students, and such digital technologies are shown to be pivotal for continuing anatomy, biochemistry and physiology education reforms, thus providing further challenges and motivating opportunities for basic science educators at UEF.
Although free Internet atlases of static histological images are important in terms of becoming familiar with the concept of microscopic anatomy, to teach histology/histopathology, interactive cloud-based "Webmicroscope" servers started to be used only 5-10 years ago (Brisbourne et al. 202;Lundin et al. 2004;Scoville et al. 2007;Pinder et al. 2008;Husmann et al. 2009;Dee 2009;Gona et al. 2012;Sung et al. 2015).Following this international trend in histology education, we changed our classical, static-glass-slide-based and teacher-focused style of sharing information in 2016 to a new IT-and cloud-based system using WEBMICROSCOPE ® software combined with student-focused teaching methods.The aim of these changes was first to bring digital innovations into our education for younger generations and, second, to more efficiently achieve the goal of making students better understand the basic concepts of histology.
In light of the survey seen above, it is important to assert that those students for whom the new histology program was introduced overwhelmingly felt that the new teaching and learning digital scenario had a positive impact on their retention, although there was a "common knowledge" and "consensus" about the very short time limit (10-15 min) of student's attention during contact hours (Bradbury 2016).
Of the four studied groups of students (male medical students, female medical students, male dental students and female dental students), only one group (female medical students) showed a significant improvement in academic performance after the switch to student-focused, virtual histology (Figures 4 and 5) from a statistical point of view.Nevertheless, it is important to highlight that groups of dental students also achieved better exam scores, but based on the small individual number of scores in the group of dental students, the p values were not under 0,05, even though this difference was remarkable in the score distribution patterns.Here, we acknowledge this and attempt to hypothesize about what was special about female medical students.First, it was possible to learn, from the survey data, the differences between the male and female students on how they benefited from this type of learning.Therefore, in practice, digital guidance with digital materials was appreciated more by the girls.This is in line with the data published by Wang and coworkers (2009), who investigated the determinants and age and gender differences in the acceptance of mobile learning, finding that "selfmanagement of learning was stronger determinant of intention for women than for men".Taking these facts together, it is possible that female students may simply have studied more based on inspiration by the system than they did in the previous traditional microscope-based course, resulting in their better academic performances in 2016.
Ultimately, after analyzing the data, we argued that the investment of time and effort in the design of a new digital and student-centered histology course resulted in deeper and more self-confident learning in digital scenarios for the majority of students from the millennial generation.It is widely accepted that the present generation in universities should be even more characterized as a technologically proficient population than earlier generations and that they are very comfortable with using digital interactive technologies for communication.The use of PCs, tablets and smartphones has become an integral part of learning and everyday life for students (Kirschner and Karpinski 2010;Hills et al. 2016); therefore, learners might need more creative, interactive and computer-based Internet linked teaching scenarios, even more so than students from earlier generations.
This study offers credible evidence that our educators could deliver the fine details of cellular and tissue organization of human bodies in a more profound way than ever before by using WEBMICROSCOPE ® , which can have a huge impact on students' learning and can help them better understand the fundamentals of histology.In the new digital scenario, students were able to spend more time on learning histology, which definitely increased their learning satisfaction and academic performance.
According to the students' confidence in teamwork-based discussions obtained from the survey, we believe that studying in peer groups not only appeared to work for better academic performances from basic science but could also provide a foundation for introducing professionalism and leadership skills early in medical education.Teamwork skills in everyday practice are essential in modern healthcare systems that require clinicians to be members of a team that frequently needs to communicate and work together (O'Connell and Pascoe 2004).Therefore, our new teaching scenario might have a positive impact on this aspect.
Although cell-phone-based platforms have experienced significant improvements since their introduction (Price 2015; Ingraham 2015) and the younger generation is one of the major users of SMART mobile technology, the Google Analytics data showed that mobiles were used only in 9% of their total server accesses.Currently, SMARTs are nonbulky, highly portable, midrange handhelds with AMOLED (active-matrix organic light-emitting diode) screens, which are better suited to color reproduction and greater viewing angles.This could be optimal for the WEBMICROSCOPE ® server, using its highquality images (Fleming et al. 2016;Sanchez-Franco 2010).Although larger smartphone screens will lead to a greater perceived control of the subjects (Kim and Sundar 2014), bigger is not always convenient for use in students' everyday lives, which might explain the phenomenon noted above.Tablets and desktops (91% of the total accesses from outside the UEF EDUROAM network) might appeal to both task-and affect-oriented needs when studying histology.
To the best of our knowledge, no comparable studies have been published that could be cited in connection with these results.Therefore, further study could offer valuable insights for understanding the effects of screen size on smartphone adoption in digital histology studies, and these studies may confirm and extend our findings by investigating the potential moderating effects of different types of handhelds.Most importantly, the results of this study appear to support the hypothesis that educational principles can have a strong impact on students' academic performance and feelings of competence with regard to learning the fundamentals of histology, which was educators' and decision makers' dream when these educational changes were adopted at UEF.
Figure Legend
New IT learning environments have been set up to move histology education into digital scenarios to help students' active exchange of histology knowledge.A total of 391 students took the written examinations; 188 did so in traditional scenarios, and 203 did so in the digital scenario.The mean performance was significantly higher (* /p≤0.05/) in the total population of students and in the population of the medical students after the introduction of our new histology education compared to their previous performance.Although the dental students performed definitely better in the new scenario compared to their grades in the old scenario, the difference was not statistically significant (p=0.07).
The table in Figure 4A shows the distribution frequencies of grades in the different populations studied.Figure 4B shows the distribution profile of grades of the 391 students who took the written examinations before (188) and after (203) the new teaching scenario was introduced.Please note that the distribution profiles of grades changed from 2015 (traditional scenario) to 2016 (digital scenario).
A total of 159 female medical students took written examinations before (82) and after (77) the new teaching scenario was introduced.The mean performance was significantly higher after the introduction of our new histology education compared to the previous performance (3.2 versus 3.7, *significant difference between groups p≤0.05).
The graph in figure 6A indicates that students were regularly in "Digital histology scenario" during the time of the course using WEBMICROSCOPE®.Students very actively gained access to the program just before the final exam (labeled with *).The web server traffic pattern demonstrated a baseline activity per week.The number of hits increased to well over 400 in the days prior to the exam.As shown in figure 6B, in the highest season (* 16.05.2015)with 455 total hits, the use of Webmicroscope ® grew considerably during the morning and late afternoon hours, when the students were preparing themselves for the final written examination.From 8-16 h, the Webmicroscope ® Web site received more than 48% of the total "hits".We also recorded that 84 conversions occurred from the UEF network and that 371 views were gained from outside the UEF network by students, thus indicating the importance of this interactive interface for reviewing histology.
The overwhelming majority of students' answers indicate that team-based and teacher-supervised discussions using WEBMICROSCOPE ® appear to promote students' understanding and that educators' help is respected in this context.Overall, 67.4% of the students completed the survey.Most of the students (>98%) who participated in the surveys had passed a histology final written examination as a qualification to give accurate answers.While it is uncertain how the remaining population of students would have answered the survey questions, we strongly believe that the reported demographics were a fair representation of the class as a whole.
Take Home Messages
In the University of Eastern Finland, which is one of the flagship universities in Finland and one of the top 300 universities of the world, a new, more effective and evoking teaching program to medical and dental students was adopted in the Institute of Biomedicine during academic year of 2016.According to our data, the student-oriented teaching method, when powered by virtual microscopy, improves histology learning compared to traditional microscope-based studies.
Notes On Contributors S. Felszeghy, S. Pasonen-Seppänen designed the study, S. Felszeghy developed, collected survey; S. Felszeghy and A. Koskela analyzed data; S Felszeghy, S. Pasonen-Seppänen interpreted results of experiments; S. Felszeghy wrote and A. Koskela, S. Pasonen-Seppänen edited the manuscript; all authors approved the final version of manuscript submitted for publication.
Declarations
The author has declared that there are no conflicts of interest.digital era, students are now very tech savvy, the integration of digital learning and feedback would definitely enhance their learning.The detailed description of the new improvement and the monitoring of the students performance in the paper is well presented.The problem based and team teaching are very useful.
Figure 1 .
Figure 1.The new profile of the histology classroom at UEF.
Figure 2 .
Figure 2. Profile of students participated in the study
Figure
Figure A shows the basic arrangement of IT facilities, such as big touchscreens and round tables, to offer possibilities for team work.However, high-quality light microscope is connected to the system for teacher-oriented discussions of real sections of interest, if needed.Figure B illustrates how teachers can interact with students on touch-sensitive screens to motivate students in problem-focused discussions, and C shows how students can use PCs and tablets to gain access to WEBMICROSCOPE ® .
Figure 3 .
Figure 3. Mean grades of 1st year medical and dental students' histology exams in traditional and digital scenarios.The data are shown as the mean AE SEM, * p≤0.05,Student's t-test.
Figure 4 .
Figure 4. Distribution frequency of academic performances of the different student populations between Traditional and Digital scenarios.
Figure 5 .
Figure 5. Mean grades of female and male student exams in traditional and digital scenarios.The data are shown as the mean AE SEM, * p≤0.05,Student's t-test.
Figure 6 .
Figure 6.Cumulative graphs of the web browsing activity of the students during the histology course.
Figure 7 .
Figure 7. Students' beliefs and attitudes in relation to the educational value of new teaching scenarios powered by virtual histology.
Table 1 .
Profile of students that participated in the survey.Medical education in the digital age: Digital whole slide imaging as an e-learning tool.Journal of Pathology Inform 10.Reference Source Gona, A.G., Berendsen, P.B., & Alger, E.A. (2012) "New Approach to Teaching Histology".Medical Science Educator.15, 78-89. | 7,060.2 | 2017-09-07T00:00:00.000 | [
"Education",
"Computer Science"
] |
Bioinformatic Exploration of the Targets of Xylem Sap miRNAs in Maize under Cadmium Stress
Cadmium (Cd) has the potential to be chronically toxic to humans through contaminated crop products. MicroRNAs (miRNAs) can move systemically in plants. To investigate the roles of long-distance moving xylem miRNAs in regulating maize response to Cd stress, three xylem sap small RNA (sRNA) libraries were constructed for high-throughput sequencing to identify potential mobile miRNAs in Cd-stressed maize seedlings and their putative targets in maize transcriptomes. In total, about 199 miRNAs (20–22 nucleotides) were identified in xylem sap from maize seedlings, including 97 newly discovered miRNAs and 102 known miRNAs. Among them, 10 miRNAs showed differential expression in xylem sap after 1 h of Cd treatment. Two miRNAs target prediction tools, psRNAtarget (reporting the inhibition pattern of cleavage) and DPMIND (discovering Plant MiRNA-Target Interaction with degradome evidence), were used in combination to identify, via bioinformatics, the targets of 199 significantly expressed miRNAs in maize xylem sap. The integrative results of these two bioinformatic tools suggested that 27 xylem sap miRNAs inhibit 34 genes through cleavage with degradome evidence. Moreover, nearly 300 other genes were also the potential miRNAs cleavable targets without available degradome data support, and the majority of them were enriched in abiotic stress response, cell signaling, transcription regulation, as well as metal handling. These approaches and results not only enhanced our understanding of the Cd-responsive long-distance transported miRNAs from the view of xylem sap, but also provided novel insights for predicting the molecular genetic mechanisms mediated by miRNAs.
Introduction
Heavy metal accumulation in soils is of concern in agricultural production due to the adverse effects on food safety. Cadmium (Cd) is a non-essential element for plants, however it can be absorbed by the roots from the soil and transported to the aboveground parts; thus, it can not only affect the growth and the subsequent productivity of crops, but can also pose a great threat to human health because of its accumulation in the consumable parts of food crops [1][2][3][4]. MicroRNAs (miRNAs) are the most studied 20-to 22-nucleotide non-protein coding RNAs and are at the heart of regulating gene expression in multiple developmental and signaling pathways [5][6][7]. MiRNAs are hypersensitive to different heavy metals, such as Cd, aluminum, and lead in some crop plants (including rice, maize, oilseed rape, and radishes), and mounting evidence has revealed that miRNAs and their targets weaved networks play important regulatory roles in plant adaptation to different heavy metal stresses [4,[8][9][10][11].
In plants, one of the most fascinating aspects of RNA silencing is its mobile nature, and the movement of small RNA (sRNA) molecules can "non-cell-autonomously" orchestrate developmental and stress responses [12,13]. Results obtained with grafting techniques and transient expression systems have shown that sequence-specific short interfering RNAs with a size of 21-24 nucleotides travel to distant organs [14,15]. Phloem exudates contain diverse miRNAs and at least two of them, miR395 and miR399, involved in responses to nutrient availability, are transmitted through grafts, indicating long-distance movement [12,16,17]. Similarly, siRNA signals produced in source or sink tissues move from cell-to-cell and travel long distances via the phloem to apical tissues [18]. Though the long-distance transport of sRNAs was intensively investigated in phloem, small RNAs have also been isolated from the developing xylem of Populus stems, and a majority of these miRNAs have been predicted to target developmental-and stress/defense-related genes, including those associated with the biosynthesis of cell wall metabolites [19]. Arabidopsis miR857, specifically expressed in the vascular tissues of seedlings, is involved in regulating lignin content and consequently morphogenesis of the secondary xylem in by regulating the expression of its target gene LACCASE7 [20].
MiRNAs negatively regulate their target gene expression at transcriptional and post-transcriptional levels by regulating both messenger RNA (mRNA) degradation and translational inhibition based on miRNA/target sequence complementarity [6]. High-throughput degradome sequencing has been successfully established and adapted to validate miRNA splicing targets in a variety of plant species, such as hyperaccumulator Sedum alfredii [21], Populus [22], rice [23,24], soybean [25], canola [5], and maize [26]. Recently, an integrated web-based tool, DPMIND (Degradome-based Plant MiRNA-Target Interaction and Network Database), was developed to scan sRNA targets in multiple plant species [27].
To thoroughly predict the roles of long-distance moving Cd-responsive maize miRNAs, three xylem sap sRNA libraries of Cd-stressed maize were constructed for high-throughput sequencing. Then, an integrative bioinformatic approach composed of psRNATarget and DPMIND was employed to predict potential targets of Cd-responsive xylem miRNAs. Intriguingly, 34 high-confidence cleavable targets for 27 xylem sap miRNAs were identified. Moreover, nearly 300 other genes were also the potential miRNAs cleavable targets, and the majority of them were enriched in abiotic stress response, cell signaling, transcription regulation, as well as metal handling, chelation, and storage. This investigation, therefore, would provide aid to elucidate the molecular genetic mechanisms underlying plant responses to Cd stress from the aspect of mobile miRNAs.
High-Throughput Sequencing of sRNAs in Maize Xylem Sap
To investigate the differences in maize xylem sap sRNA profiles after Cd treatment, we collected xylem sap samples from Cd-treated seedlings, and three sRNA libraries (including two control samples) for sequencing were generated from the maize xylem sap.
We obtained about 2.7, 3.8, and 3.8 M total reads, represented by 0.96, 1.25, and 1.39 M unique sRNA reads, respectively, from the untreated 0 h (C0), untreated 1 h (C1), and Cd-treated 1 h (Cd1) libraries of xylem sap collected at the indicated time-point and treatment. The length distribution of reads showed that the majority of the reads were 20-24 nt in size, which was within the typical size range for Dicer-derived products [28].
To confirm the expression of sRNAs identified by deep sequencing, eight sRNAs (lengths of 19-25 nt) were randomly selected for quantitative real-time RT-PCR (qRT-PCR) analysis, and these contained three miRNAs (PC-3p-33282_23, zma-miR169l-5p, and zma-miR395a-5p_R-1) with lengths of 20-22 nt (Supplementary Table S1). The comparison indicated that the expression patterns of Cd-responsive sRNAs from high-throughput sequencing and qRT-qPCR exhibited a good concordance (Figure 1, Supplementary Table S1), implying the reliability of the sRNA-seq profiling data for the following analysis. of 20-22 nt (Supplementary Table S1). The comparison indicated that the expression patterns of Cd-responsive sRNAs from high-throughput sequencing and qRT-qPCR exhibited a good concordance ( Figure 1, Supplementary Table S1), implying the reliability of the sRNA-seq profiling data for the following analysis. The expression levels of sRNAs were compared between 1 h of Cd treatment (Cd1) and the control C1 samples. Data of qRT-qPCR are means ± SD from three independent biological replicates.
Identification of Xylem Sap Cd-Responsive miRNAs
After length filtration of the sequenced sRNAs, about 199 miRNAs (20-22 nt) were identified in xylem sap from Cd-treated or -untreated maize, including 97 newly discovered and 102 known miRNAs, which were homologous to the sequences in miRBase (Supplementary Table S2). Based on the number of reads (>10 at least in one sample) and MFEI (≥0.85) [29,30], about 20 new high-confidence miRNAs with relatively high expression levels were identified in the three samples (Table 1). The expression levels of sRNAs were compared between 1 h of Cd treatment (Cd1) and the control C1 samples. Data of qRT-qPCR are means ± SD from three independent biological replicates.
Identification of Xylem Sap Cd-Responsive miRNAs
After length filtration of the sequenced sRNAs, about 199 miRNAs (20-22 nt) were identified in xylem sap from Cd-treated or -untreated maize, including 97 newly discovered and 102 known miRNAs, which were homologous to the sequences in miRBase (Supplementary Table S2). Based on the number of reads (>10 at least in one sample) and MFEI (≥0.85) [29,30], about 20 new high-confidence miRNAs with relatively high expression levels were identified in the three samples (Table 1). MiRNAs detected in C1 and Cd1 libraries were used for differential expression analysis using the stringent criteria (|log 2 Ratio| ≥1, p ≤ 0.05). Finally, 10 miRNAs showed differential expression in xylem sap after 1 h of Cd treatment ( Table 2). Among them, the expressions of three newly identified miRNAs (PC-3p-10246_108, PC-3p-33282_23, and PC-3p-65413_10) was significantly regulated by Cd exposure (p ≤ 0.01). Regarding the 10 Cd-modulated miRNAs, two highly expressed miRNAs (zma-miR169l-5p, zma-miR398a-3p), and two new miRNAs (PC-3p-10246_108 and PC-3p-33282_23) were upregulated by Cd exposure; Cd significantly negatively regulated the remaining miRNAs (Table 2).
Target Predictions of Xylem Sap miRNAs
To better understand the biological functions of long-distance transported miRNAs, 199 significantly expressed miRNAs (p ≤ 0.05, in at least one dataset) from maize xylem sap were identified for target scanning (Supplementary File S1). The putative target sites in maize cDNAs were predicted using two plant sRNA target prediction tools (psRNAtarget and PsRobot).
With the application of psRNAtarget using the inhibition pattern of 'Cleavage', we identified a total of 2184 transcripts from 1436 maize genes, to be the targets of 196 xylem sap miRNAs (Supplementary Table S3). Using PsRobot, we obtained a total of 2514 transcripts from 1774 maize genes to be targets of 172 xylem sap miRNAs (Supplementary Table S4). Through the integration, we identified a total of 493 transcripts from 332 genes as the cleavable targets of 115 miRNAs in the intersection of results from psRNAtarget and PsRobot.
The Function Classification of the Predicted miRNAs Targets
To gain insights into the functionality of the miRNA targets, all of these 493 transcripts were functionally grouped by agriGO [31] and visualized in the candidate pathway networks with MapMan software [32].
Among the genes within the 'TF' group, 11 members of MYB family, two WRKYs, and one AP2-EREBP as well as one bHLH-transcription factor were all the targets of miR159 family members, whereas five NACs and six Homeobox-transcription factors were targeted by miR164s and miR166s, respectively (Table 3, Figure 2, Supplementary Table S3). With regard to the targets mapped to "Secondary metabolism" category, nine laccases were exclusively coupled to MIR397 family members. With regard to 'Abiotic stress' response, five genes (including two DNAJ proteins and one ERD ortholog) were targeted by five miRNAs individually. Similarly, within the 'metal handling, chelation, and storage' group, two major facilitator proteins and one MATE efflux family protein were uniquely targeted by three miRNAs. However, ZM2G058032 (heavy-metal-associated domain protein) and ZM2G407032 (ABC transporter) were co-targeted by miR399s (Table 3, Figure 2). With regard to the targets mapped to "Secondary metabolism" category, nine laccases were exclusively coupled to MIR397 family members. With regard to 'Abiotic stress' response, five genes (including two DNAJ proteins and one ERD ortholog) were targeted by five miRNAs individually. Similarly, within the 'metal handling, chelation, and storage' group, two major facilitator proteins and one MATE efflux family protein were uniquely targeted by three miRNAs. However, ZM2G058032 (heavy-metal-associated domain protein) and ZM2G407032 (ABC transporter) were co-targeted by miR399s (Table 3, Figure 2).
The predicted cleavable targets inhibited by miRNAs were inputted into MapMan software (3.6.0RC1) for metabolic pathways analysis (using the framework of Arabidopsis seed-Molecular Networks). The colored boxes indicated the expectation score output by psRNAtarget, and the larger value was shown in deep red.
In addition, several genes located in the pathway network were the potential targets of novel miRNAs. Particularly in the 'signaling' category, three genes involved in Ca signaling (ZM2G107575, ZM2G312661, and ZM2G174315) and the protein kinase (ZM2G100454) were uniformly targeted by novel miRNAs (Table 3, Figure 2).
The miRNAs Cleavable Targets
The transcripts of 332 maize genes in the intersection of psRNAtarget and PsRobot output, which were the potential 115 miRNAs cleavable targets, were further evidenced by the maize degradome data. With the aid of the DPMIND webserver [27], which harbors the degradome sequencing data of maize anther and ears, we obtained 34 maize genes harbored target sites of 27 xylem sap miRNAs by combining the outputs of psRNAtarget, PsRobot, and DPMIND (Table 4, Supplementary Table S5).
Regarding these cleavable candidates, the majority of them were intensively studied, including 11 squamosa promoter binding proteins (SBPs), seven nuclear transcription factor Ys (NFY), and three auxin response transcription factors (ARF) [6]. After excluding these well-known targets of miRNAs, several genes were filtered out as fresh cleavable targets for these xylem sap miRNAs, including the common stress-responsive miRNAs and their targets [6].
Many miRNAs appear to function together via co-targeting to regulate functionally related genes or pathways, and vice versa [33]. For example, the myb74 transcription factor (ZM2G028054_T03) was co-modulated by zma-miR159a-3p_R-1 and zma-miR319a-3p_R+1, while myb138 (ZM2G139688_T01) was specifically targeted by zma-miR159a-3p_R-1 (Table 4). In contrast, two F-box proteins (ZM2G064954_T01 and ZM2G119650_T01) were both the targets of zma-miR394a-5p. Similarly, ZM2G155490_T01 and ZM2G304745_T01 (encoding LRR receptor-like kinase) were targeted by zma-miR390a-5p (Table 4). The miRNAs in italic and those maize transcripts between parallel lines mean each of the miRNAs can target each transcript successively. psRNAtarget output: Exp for Maximum expectation and the star (*) indicating the largest score of the corresponding miRNA-target combinations, unpaired energy (UPE) for maximum energy to unpaired target site; For DPMIND, miR represents the homolog of the queried miRNA by BLAST, and the dollar label ( $ ) means the least number of the degradome datasets for the miR-target associations.
The Long-Distance Transport of miRNAs
Long-distance transport of signaling molecules, intensively investigated in phloem, is known to be a major component in plant growth regulation, as well as their adaptation to changing environmental conditions [34]. Although some studies have demonstrated the presence of numerous miRNAs in phloem tissues [35][36][37], so far, only three miRNAs (miR399, miR395, and miR172) have been shown to move long distances in plants [16,17,34,38].
However, xylem also plays an important role in the root-to-shoot signaling system [39]. Furthermore, the root-to-shoot Cd translocation process may be more complex than previously thought [40]. Regarding xylem, research on miRNAs is in its infancy. In this study, about 199 miRNAs were identified in Cd-treated or -untreated maize xylem sap, including 97 novel and 102 known maize miRNAs. Moreover, the three famous long-distance moving miRNAs (miR399, miR395, and miR172) were detected in maize xylem sap ( Table 3, Supplementary Table S2). These findings suggested that these miRNAs are potential signal molecules that move systemically.
MATE transporters were involved in the cellular transport and detoxification of Cd [41]. Furthermore, the distribution of allocated transcripts (e.g., MATE) along the root-to-shoot axis was correlated with the siRNA signal spread in hetero-grafted Arabidopsis [42]. Here, we identified zma-miR528a-5p in xylem sap, and predicted ZM2G148937 (MATE family protein) as its potential cleavable target (Table 3).
Six highly conserved amg-miRNA families (amg-miR166, amg-miR172, amg-miR168, amg-miR159, amg-miR394, and amg-miR156) were viewed as potential regulatory sequences of secondary cell wall biosynthesis [43], and Populus Pto-MIR156c might play vital roles in the regulation of wood formation in trees [44]. Moreover, the knockdown of rice MicroRNA166 confers drought resistance by causing leaf rolling and altering stem xylem development [5], and rice miR166 also plays a critical role in Cd accumulation and tolerance [45]. In this study, the miRNAs variants of these six families and nine miR166 isoforms with more than 30 reads in each of the three samples were identified in maize xylem sap (Supplementary Table S2).
The Potential Cleavable Targets of miRNAs in Xylem Sap
In plants, miRNAs and their targets show a pattern of near complementarity, suggesting that plant miRNAs likely act through endonucleolytic cleavage of target mRNAs [46]. In this study, 34 targets were predicted to be inhibited by miRNAs through cleavage with degradome evidence, and most of them were intensively studied targets of common stress-responsive miRNAs, such as NFY, ARFs, SBPs, and GRAS family transcription factors [6]. Concerning the cleavable candidates of xylem sap miRNAs, they were concentrated on the NFY-, ARF-, SBP-, and GRAS-transcription factors, which was targeted by zma-miR169s, zma-miR160f-5p, zma-miR156s, and zma-miR171s, respectively (Table 4).
It is of particular interest to note that three homeobox-transcription factors were the cleavable targets of zma-miR166s, and two of them were annotated as rolled leaf genes (Table 4), which was reminiscent of their rice ortholog OsHB4. Moreover, miR166 plays a critical role in Cd tolerance as well as in drought resistance through regulation of its cleavable HD-Zip target gene OsHB4 in rice [5,45]. Altogether, these results of miR166-mediated regulatory cascade strengthened the pivotal role miR166-HB couple in abiotic stress acclimation. Moreover, among the miRNA targets within the 'TF' group, members of other TF families (e.g., MYBs, WRKYs, NACs) were the specific targets of certain miRNA family members (Table 3, Figure 2, Supplementary Table S3). These TF-type miRNA targets might contribute to elucidate the complicated mechanism of Cd stress from the aspect of transcriptional network through unveiling transcription factor-regulated downstream target genes which were involved in Cd stress response.
In addition to these well-known targets of miRNAs, several genes were filtered out as novel cleavable targets for these xylem sap miRNAs, including the common stress-responsive miRNAs and their targets [6] (Table 4).
Many miRNAs appear to function together via co-targeting to regulate functionally related genes, and vice versa [33]. For example, the myb74 was co-modulated by zma-miR159a-3p_R-1 and zma-miR319a-3p_R+1, whereas two F-box proteins were both the targets of zma-miR394a-5p (Table 4). These co-targeting paradigms indicated that individual miRNA variants have different functions according to their specific targets [33].
Besides the 34 maize genes as the cleavable candidates of miRNAs with degradome evidence from the limited degradome data (Table 4), nearly 300 other genes were also the potential miRNAs cleavable targets ( Table 3, Supplementary Table S3). From a global view, many target genes were prone to be enriched in abiotic stress response, cell signaling, transcription regulation, as well as metal handling, chelation, and storage (Table 3, Figure 2). Using a degradome sequencing approach, leucine-rich repeat (LRR) protein, cation transporting ATPase, and Myb transcription factors, were found to be cleaved by miRNAs under heavy metal stress [25]. Furthermore, a few miRNA cleavable targets, including iron transporter and ABC transporter, were involved in plant responses to Cd stress [2,8]. MATE transporters were involved in the cellular transport and detoxification of Cd [41]. In this study, ZM2G148937 (MATE family protein) was predicted as the cleavable target of zma-miR528a-5p (Table 3). From another perspective, these targets in relation to the Cd stress acclimation highlighted the role of the corresponding miRNAs in regulating Cd stress response.
Intriguingly, these known cleavable targets of miRNAs were also identified in this investigation (Table 3), though we did not retrieve degradome evidence from the available public dataset. miRNAs were regarded as a new target for genetically improving plant tolerance to certain stresses [6]. From the perspective of miRNA-target couple, the characterization of the miRNAs and the associated targets in responses to Cd exposure will provide a framework for understanding the molecular mechanism of heavy metal tolerance in plants. Thus, it would be interesting to determine the role of these long-transported miRNAs, and whether these xylem sap miRNAs are transported to leaves under heavy metal stress or other stresses. Future investigation on the final location of xylem sap miRNAs, which might be achieved through analyzing the difference between the exudates from the node incisions where ears or leaves separated and xylem sap below the node, together with the target identification through degradome and proteome or ribosome profiling of the detached ears or leaves, will aid to illustrate the effect of xylem sap miRNAs on their potential targets in their final destination.
Plant Materials and Cd Treatment
The seedlings of maize (Zea mays L. cv. Nongda 108; China) were cultivated in a hydroponic system in a growth chamber with a temperature of 22 • C (night) to 28 • C (day), photosynthetic active radiation of 200 µmol·m −2 ·s −1 , and a 14/10-h day/night photoperiod. All hydroponic solutions were continuously aerated and renewed every three days. When the third leaves were fully expanded, the seedlings were transferred into fresh growing solutions containing 100 uM CdCl 2 , according to previous reports [2,3].
Sampling of Xylem Sap
The seedlings were separated into three groups-untreated 0 h (C0), untreated 1 h (C1), and Cd-treated 1 h (Cd1). For each group, 30 maize seedlings at the indicated timepoint/treatment were de-topped by cutting the stem with a razor blade just above the first internode, and the remaining parts without stem were used for xylem sap collection, according to previous reports with minor modifications [47][48][49][50][51]. Then, the cut surface was rinsed twice with distilled water and the liquid drawn in the first 5 min was discarded. Finally, the bleeding saps from 30 maize plants were harvested with 10 uL syringe for 1 h after cutting and mixed in a tube containing 1 mL Trizol as one sample replication, and single biological replication for each sample was used for sRNA sequencing.
Small RNA Library Preparation and Sequencing
Total RNA was extracted from C0, C1, and Cd1 xylem sap samples using Trizol reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's procedure. Approximately 1 µg of total RNA were used to prepare a small RNA library (single biological replication for each sample) according to protocol of TruSeq Small RNA Sample Prep Kits (Illumina, San Diego, CA, USA). Then, we performed the single-end sequencing (36 bp) on an Illumina Hiseq2500 at the LC-BIO (Hangzhou, China) following the vendor's recommended protocol. The data was uploaded to NCBI/SRA with accession number SRP073229 (https://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP073229), and contained miRNAs reads of C0, C1, and Cd1 xylem sap samples (single biological replication for each sample).
Small RNA Analysis
Data processing followed the procedures as described previously [23,24]. The raw reads were subjected to the Illumina pipeline filter (Solexa 0.3), and then the dataset was further processed with an in-house program, ACGT101-V4.2 (LC Sciences, Houston, TX, USA) to remove adapter dimers, junk, low complexity, common RNA families (rRNA, tRNA, snRNA, snoRNA), and repeats.
Identification of Known and Novel miRNAs
Unique sequences with lengths of 20-22 nucleotide [52] were mapped to monocot plant precursors in miRBase Release 22.1 (October 2018, mirbase.org) by BLAST search to identify known miRNAs and novel 3p-and 5p-derived miRNAs.
The unique sequences mapping to maize mature miRNAs in hairpin arms were identified as known miRNAs. The unique sequences mapping to the other arm of known maize precursor hairpin opposite to the annotated mature miRNA-containing arm were considered to be novel 5p-or 3p-derived miRNA candidates. The remaining sequences were mapped to other monocot precursors (with the exclusion of maize) in miRBase 20.0 by BLAST search, and the mapped pre-miRNAs were further BLASTed against the maize genomes (ftp://ftp.maizesequence.org/pub/maize/release-5b/assembly/ ZmB73_RefGen_v2.tar.gz with genome annotation file ZmB73_5b_FGS.gff.gz) to determine their genomic locations [53].
The sequences unmapped to maize mature miRNAs or other monocot miRNAs precursors were further BLASTed against the maize genomes, and the hairpin RNA structures containing mappable sequences were predicted from the flank 120 nt sequences using RNAfold software (http://rna.tbi.univie.ac.at/cgi-bin/RNAfold.cgi).The criteria used to annotate potential miRNAs was as described previously [54][55][56][57]. Sequences that met the criteria for secondary structure prediction were then considered to be novel miRNA precursors [53,58,59]. Minimum free energy index (MFEI) was also taken into account for evaluating the confidence of novel miRNAs, with the expected value (≥0.85) [7,29,30]. Moreover, all of the aforementioned criteria must be fulfilled in at least two distinct sRNA-seq libraries [52].
Differential Expression Analysis of sRNAs Under Cd Stress
Data normalization followed the procedures as described in a previous study [60]. sRNA differential expression based on normalized deep-sequencing counts was analyzed by selectively using Fisher's exact test and Chi-squared 2 × 2 test with the selection threshold of 0.05 [23,28,60].
To investigate the differentially expressed miRNAs between libraries, we compared the gene expression patterns of miRNAs in Cd1 and C1 library. Towards this purpose, we considered the following criteria: (1) p-value should be less than 0.05 (p ≤ 0.05) in at least one dataset; and (2) Log 2 ratio of fold change between normalized counts of C1 and Cd1 libraries was greater than 1 or less than −1 [28,56,57,59,61].
The Prediction of miRNA Targets
After filtering, we get 199 significantly expressed miRNAs (p ≤ 0.05, in at least one of the three samples). Then, these xylem sap 199 miRNAs were used to interrogate maize annotated cDNAs sequences (Zea_mays.AGPv3.22, ftp://ftp.ensemblgenomes.org) preloaded in the psRNAtarget web server (plantgrn.noble.org/psRNATarget) [62] for predicting target sites as described previously [61], and the default criteria for target prediction in psRNAtarget website were used.
Prediction of miRNA target genes was also performed by the local version of psRobot_tar scripts in psRobot (omicslab.genetics.ac.cn/psRobot) [63] using the 199 miRNAs for scanning maize cDNAs sequences (http://ftp.maizesequence.org/release-5a/filtered-set/ZmB73_5b_FGS_cdna.fasta.gz) with default settings (moderate model of pre-set the target penalty score 2.5, thus alignments that meet the penalty score cutoff will be reported in the result).
To scan the cleavable targets of the identified known miRNAs and novel miRNAs, these miRNAs were uploaded to DPMIND (http://cbi.njau.edu.cn/DPMIND) [27] to locate the homologous miRNAs for the following degradome data query.
For determining the expression of sRNAs, about 2 µg RNAs were reverse-transcribed by miRcute miRNA First-Strand cDNA Synthesis kit (TIANGEN, Beijing, China). Transcript levels of mature sRNAs were measured by qRT-PCR using a DNA Engine Opticon 2 real-time PCR detection system (Bio-Rad, Hercules, CA, USA) with miRcute miRNA qPCR Detection kit (TIANGEN) according to the manufacturer's instructions. Details of the primers used are listed in Supplementary Table S1. The maize 5S RNA was used as the internal control for RNA template normalization [64]. All reactions were run in triplicate. The relative expression levels of sRNAs were calculated by the comparative threshold cycle (Ct) method. At least three independent biological replicates were used for each small RNA. | 5,877.2 | 2019-03-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Principle Components Analysis and Multi Layer Perceptron Based Intrusion Detection System
Security has become an important issue for networks. Intrusion detection technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on PCA and MLP. The key idea is to take advantage of different feature of NSL-KDD data set and choose the best feature of data, and using neural network for classification of intrusion detection. The new model has ability to recognize an attack from normal connections. Training and testing data were obtained from the complete NSL-KDD intrusion detection evaluation data set.
Introduction
Fast few years have witnessed a growing recognition of intelligent techniques for the construction of efficient and reliable Intrusion Detection Systems (IDS).Due to increasing incidents of cyber-attacks, building effective Intrusion Detection Systems are essential for protecting information system security, and yet it remains an elusive goal and a great challenge.
In general, the techniques for Intrusion Detection (ID) fall into two major categories depending on the modeling methods used: misuse detection and anomaly detection.Misuse detection is based on the knowledge of system vulnerabilities and known attack patterns, while anomaly detection assumes that an intrusion will always reflect some deviation from normal patterns.Many AI techniques have been applied to both misuse detection and anomaly detection.Pattern matching systems like rule-based expert systems, state transition analysis, and genetic algorithms are direct and efficient ways to implement misuse detection.On the other hand, inductive sequential patterns, artificial neural networks, statistical analysis and data mining methods have been used in anomaly detection [1].
Architecturally, an intrusion detection system can be categorized into three types host based IDS, network based IDS and hybrid IDS [2] [3].A host based intrusion detection system uses the audit trails of the operation system as a primary data source.A network based intrusion detection system, on the other hand, uses network traffic information as its main data source.Hybrid intrusion detection system uses both the methods [4].However, most available commercial IDS's use only misuse detection because most developed anomaly detector still cannot overcome the limitations (high false positive detection error, the difficulty of handling gradual misbehavior and expensive computation [5]).This trend motivates many research efforts to build anomaly detectors for the purpose of ID [6].
We organize this paper as follows, section 2 provides brief introduction about PCA and Neural Network, section 3 presents previous work, section 4 explain the model designer, section 5 discusses the experiments results followed by conclusion.
PCA and Neural Network
Principal Component Analysis (PCA) is an effective statistical technique for reducing the dimensions of a given unlabeled high-dimensional dataset while keeping its spatial characteristics as much as possible by performing a covariance analysis between factors.As such, it is suitable for data sets from multiple dimensions field of application, such as image compression, pattern recognition (face recognition in particular), gene expression, data clustering and traffic flow events intrusion detection.One of the main advantages of PCA is that you can compress the data, i.e. by reducing the number of dimensions, without much loss of information.Now it is mostly used as a tool in exploratory data analysis and for making predictive models.PCA can be done by eigen value decomposition of a data covariance matrix or singular value decomposition of a data matrix.PCA is also known as the discrete Karhunen-Loeve transformation, or the Hotelling transformation [7].
An increasing amount of research in the last few years has investigated the application of Neural Networks to intrusion detection.If properly designed and implemented, Neural Networks have the potential to address many of the problems encountered by rule-based approaches.Neural Networks were specifically proposed to learn the typical characteristics of system's users and identify statistically significant variations from their established behavior.In order to apply this approach to Intrusion Detection, we would have to introduce data representing attacks and non-attacks to the Neural Network to adjust automatically coefficients of this Network during the training phase.In other words, it will be necessary to collect data representing normal and abnormal behavior and train the Neural Network on those data.After training is accomplished, a certain number of performance tests with real network traffic and attacks should be conducted.Instead of processing program instruction sequentially, Neural Network based models on simultaneously explorer several hypotheses making the use of several computational interconnected elements (neurons), this parallel processing may imply time savings in malicious traffic analysis [8].
Previous Works
Mrutyunjaya Panda et al. [9] use discriminative multinomial Naïve Bayes with various filtering analysis in order to build a network intrusion detection system, they perform 2 class classifications with 10-fold cross validation for building the model .In [10] Shilpa lakhina et al. propose a new hybrid algorithm PCANNA (principal component analysis neural network algorithm) is used to reduce the number of computer resources, both memory and CPU time required to detect attack.The PCA transform used to reduce the feature and trained neural network is used to identify any kinds of new attacks.The model gives better and robust representation of data as it was able to reduce features resulting in a 80.4% data reduction, approximately 40% reduction in training time and 70% reduction in testing time is achieved.In [11 ] Syed Muhammad Aqil develops intrusion detection system by using principle component analysis and Neural Network the authors use four Multi Layer (MLP) working in parallel for each attack with the normal dataset such as normal vs. probe, normal vs. DoS, normal vs. U2R and normal vs. R2L.
Experiment Design
The block diagram of the hybrid model is showen in the following figure (1)
A. NSL-KDD Data Set
KDD Cup 1999 intrusion detection benchmark dataset is used by many researchers in order to build an efficient network intrusion detection system [12].However, recent study shows that there are some inherent problems present in KDD Cup 1999 dataset .The first important limitation in the KDD Cup 1999 dataset is the huge number of redundant records in the sense that almost 78% training and 75% testing records are duplicated, as shown in Tables 1 and 2 [13]; which cause the learning algorithm to be biased towards the most frequent records, thus prevent it from recognizing rare attack records that fall under U2R and R2L categories.At the same time, it causes the evaluation results to be biased by the methods which have better detection rates on the frequent records.This new dataset, NSL-KDD dataset is used for our experimentation and is now publicly available for research in intrusion detection.It is also stated that though the NSL-KDD dataset still suffers from some of the problems discussed in [14] and may not be a perfect representative of existing real networks, it can be applied an effective benchmark dataset to detect network intrusions.In this NSL-KDD dataset, the simulated attacks can fall in any one of the following four categories [15]: DOS (Denial of Service): an attacker tries to prevent legitimate users from using a service e.g.TCP SYN Flood, Smurf.
Probe: an attacker tries to find information about the target host.For example: scanning victims in order to get knowledge about available services, using Operating System.
U2R (User to Root): an attacker has local account on victim's host and tries to gain the root privileges.
R2L (Remote to Local): an attacker does not have local account on the victim host and try to obtain it.
B. Data Preprocessing
Some features have symbolic form (e.g.Protocol type ,Service ,Flag) were converted into numerical ones by assigning a unique number for each feature from the range [1.. no. of the values in the feature] ,lower iteration value takes no.1 and the upper iteration value takes no.equal number of the values within the feature.
C. Principle Components Analysis (PCA)
The basic knowledge of PCA requires the covariance matrix for the features in the training set.The covariance matrix is defined by Where M,N number of the records in training set, number of features in each record, i location of feature in record and j location of the record in dataset, M i , M j mean of feature i,j.
The mean (μ) is defined by the following Equation: By using Jacobi's Method, we find eigen values as the following steps.
1. Find the largest element in the square matrix that is not in the main Diagonal 2. Find the angle θ 3. Rotation can be done by the following: Find the value α Find the other elements of the rotation matrix by the two following equations:
D. MLP Algorithm
The anomaly detection is to recognize different authorized system users and identify intruders from that knowledge.Thus, intruders can be recognized from the distortion of normal behavior.Multi-layer feeds forward networks (MLP) is used in this work.The number of hidden layers and the number of nodes in the hidden layers, were also determined based on the process of trial and error.We choose several initial values for the network weight and biases.Generally, theses are chosen to be small random values.The Neural Network was trained with the training data which contain normal and attack records.When the generated output result doesn't satisfy the target output result, adjust the error from the distortion of target output.Retrain or stop training the network depending on this error value.Once, the training is over, the weight value is stored to use in recall stage.In training stage, we used different network architectures with different training algorithms to find the best architecture with a good result.Resilient back propagation and Levenberg-marquardt with two hidden layers were best result from the others.After many experiments to the best features of data which is resulted from PCA algorithm, we take 16 features from 41.The architecture of Multilayer feeds forward networks consisted from 16 nodes in input layer, 10 nodes, in the first hidden layer, 5 nodes in the second hidden layer, and 1 node in the output layer is illustrated in the following figure.
Conclusions
The main contribution of the present work is to achieve a classification model with a high intrusion detection Rate and with low false negative, this was done through the design of a classification model for the problem using PCA and MLP neural network for the detection of attacks.The first stage of the model is PCA, to find the best filed from the NSL-KDD dataset, we chose 16 features from 41 features.The second stage of the model is MLP neural network which is used for the classification of normal connection from attack connection.After many experiment on the Neural Network by using different training algorithms and object functions, we observe that Resilient back propagation with sigmoid function is the best one for classification.We used two hidden layers, 10 nodes in the first hidden layer and 5 nodes in the second hidden layer.We used the complete NSL_KDD dataset which are 125973 records for the training stage and 22544 records for testing stage.
6 .
Rearrange the steps from (1-3) until we get the elements of off-diagonal near the zero[16] .Steps for executing PCA algorithm 1. Reading training NSL-KDD data set.2. Processing data mentioned above in section B. 3. Calculate Variance/Covariance matrix for the features in every record of the training data.4. Calculate Eigen vector of Variance/Covariance matrix as follows: A. Find the largest element in the matrix.B. Find the angle of Rotation.C. Find the elements of rotating matrix.D. Rearrange the steps from (A -C) until, we get the elements of off-diagonal near the zero. 5. Calculate the values of Eigen vector from the resulted matrix and put it in the Eigen matrix.Arrange the Eigen matrix.
Figure 2 .
Figure 2. The Architecture of the MLP The goal which is used in the algorithm was 0.001, and the epochs number was 1000.The training time for Resilient back propagation was 50 seconds and the training time for Levenberg-marquardt was 12 minutes.While, the testing time for Resilient back propagation was 17.939403 seconds and the testing time for Levenberg-marquardt was 17.293176 seconds.The result of recall stage of two algorithms and the previous works is shown in the following table.
Table ( 3
).The result of recall stage of two algorithms | 2,795.8 | 2013-03-02T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Contingency Table Browser − prediction of early stage protein structure
The Early Stage (ES) intermediate represents the starting structure in protein folding simulations based on the Fuzzy Oil Drop (FOD) model. The accuracy of FOD predictions is greatly dependent on the accuracy of the chosen intermediate. A suitable intermediate can be constructed using the sequence-structure relationship information contained in the so-called contingency table − this table expresses the likelihood of encountering various structural motifs for each tetrapeptide fragment in the amino acid sequence. The limited accuracy with which such structures could previously be predicted provided the motivation for a more indepth study of the contingency table itself. The Contingency Table Browser is a tool which can visualize, search and analyze the table. Our work presents possible applications of Contingency Table Browser, among them − analysis of specific protein sequences from the point of view of their structural ambiguity.
Background:
The relation between a protein's conformation and its residue sequence is a key problem in protein structure prediction. The most accurate prediction methods, such as those implemented by Rosetta [1] or I-Tasser [2], combine a knowledge-based approach with molecular dynamics simulations. The process relies on sequence-structure relationship information which relates sequences to known secondary structures. Such information is usually expressed in the form of libraries. The length of the input sequence fragment varies, usually falling between 3 and 9 residues [3]. Since native conformations depend not only on local interactions but also on interactions with sequentially distant fragments, local sequence-structure information is only partially accurate [4] and often ambiguous. Nevertheless, local "lookup" libraries are used by many protein structure prediction algorithms, such as those based on statistical potentials [2], Monte Carlo simulations [1] or neural networks [5].
The Contingency Table Browser presented in this work is intended as a visualization and analysis aid supporting the Early Stage model (proposed by Roterman [6,7]), and can be used to produce suitable early-stage folding intermediates on the basis of the so-called Fuzzy Oil Drop (FOD) model. Contrary to other leading methods, our approach relies on restricting the set of potential starting structures and replicating the in vivo folding process by taking into account hydrophobicity density distribution throughout the protein body.
Early Stage model
The Early Stage (ES) model bases on the assumption that -at least at the initial folding stage -selection of the optimal conformation of each peptide bond in the protein backbone determines the structure of the emerging intermediate [8]. We thus search for a limited conformational subspace which expresses the geometry of the polypeptide chain using two parameters. Analysis of the backbone indicates strong correspondence between the dihedral angles formed by adjacent peptide bond planes and the radius of curvature of the resulting chain. The function which defines this relationship also establishes a limited conformational subspace of φ,ψ angles, which manifests itself as an elliptical path on the Ramachandran plot [ Figure 1a]. Its defining characteristic is that it intersects zones of the plot which correspond to each basic secondary structure (β-sheet, right-handed helix and lefthanded helix). Casting actual pairs of φ,ψ angles measured for a large number of native structures onto this elliptical path (using the minimum distance criterion) produces local conformation probability profiles for each amino acid. Such profiles exhibit seven distinct probability peaks [9] to which we ascribe seven structural codes (A to G) denoting specific zones on the Ramachandran plot (Figure 1a). Thus, given a set of φ,ψ angles characterizing the input chain, we can assign a structural code to each of its constituent residues. In a similar manner, each known tertiary sequence can be expressed as a set of structural codes. The conformation adopted by each residue is thus accurate to within one of seven zones on the Ramachandran plot, corresponding to various secondary structures. For certain codes (such as C, which corresponds to an α-helix, as well as E and F representing β-sheets) this assignment is quite unambiguous, while in the case of other codes all we can say is that the given residue forms part of a loop. table visualized by Contingency Table Browser. Columns correspond to individual tetrapeptide fragments in protein 2BA2 (PDB code) while rows correspond to structural motifs; c) Frequency of occurrence of each four-letter structural motif for a specific tetrapeptide (IGRL) visualized as a bar chart; d) Visualization of the entire contingency table (columns correspond to tetrapeptides while rows represent structural motifs). Despite the overwhelming volume of data, preferred conformation zones can clearly be discerned. For example, the two marked bands correspond to α-helixes (1) and β-strands (2) respectively. Additionally, we have highlighted the prevalence of codes A (3) and G (4) for glycine-containing tetrapeptides, as well as the characteristic correlation between proline and code G (5).
Input data
In order to gather information regarding sequence-structure relationship we selected a set of tertiary structures from the Protein Data Bank (PDB) making sure that no two sequences exhibit structural similarity in excess of 95%. We then prepared a contingency table for tetrapeptide fragments [10]. The table lists the frequency of occurrence of each structural code for a given tetrapeptide. The dimensions of the table are 160,000 columns (combinations of four peptides from a set of 20) and 2401 rows (combinations of four structural codes from a set of 7).
ES prediction
On the basis of our contingency table we can try to determine the likelihood of encountering a given structural code at each position in an arbitrary polypeptide sequence. The average accuracy of this method is 46% [10]. One of the reasons for this limited effectiveness is the high degree of structural ambiguity for certain tetrapeptide fragments -indeed, it appears that predicting the conformation of the early stage intermediate requires a more in-depth study of the contingency table and probability profiles for each structural motif separately. To facilitate this process we have developed the CTB tool which supports visualization, browsing and analysis of the entire contingency table, as well as its selected parts.
Description of the Contingency Table Browser
The CTB tool operates on text files which have been prepared for each tetrapeptide separately. An input file consists of two data columns: a list of structural motifs expressed as four-letter sequences and the number of occurrences of each sequence. The program can process an arbitrary number of input files -it can load the entire contingency table at once, or just a fragment thereof. The user can determine the ordering of tetrapeptides and sequences by supplying a text file which contains an ordered list of each. This enables users to focus on specific sequences, including those which exhibit a high degree of structural ambiguity (Figure 1b). Contingency Table Browser visualizes the contingency table by applying a grayscale (0 to 255) to each pixel depending on the corresponding frequency of occurrence. The greyscale is normalized in such a way that pure white corresponds to maximum frequency while pure black indicates complete absence of the corresponding motif. Since the maximum value present in the contingency table is 193, the table can be unambiguously visualized using 256 shades of grey. Users may manipulate visualization characteristics by reversing the greyscale, applying gamma correction, visualizing all nonzero values using a single color (white) or enhancing the intensity of either the highest or the lowest values. Given the relatively large area of the contingency table, zooming out may result in multiple values "competing" for a single pixel -in such cases selected subzones can be cropped out for display in a separate window. Another useful tool is the ability to generate frequency bar graphs for a specific tetrapeptide (or a specific structural motif) -this is done by clicking the appropriate column or row (Figure 1c).
Despite the large volume of data embodied in the contingency table our tool can be used to draw useful conclusions regarding the statistical properties of individual codes and residues. Figure 1d reveals some interesting regularities, showing bars which correspond to α-helixes and β-strands, as well as certain rare structural codes, such as A and G.
Conclusions:
Contingency Table Browser is a tool for visualization and analysis of the contingency table which expresses the correspondence between structural motifs and protein sequences (derived from PDB) in the early stage intermediate. It can aid researchers in applying custom modifications to the ES structure, which would be difficult to achieve solely on the basis of quantitative data regarding the frequency of each structural code. Visual inspection may enhance analysis of certain protein structures and augment statistical methods. It should also be noted that the tool's usability extends beyond the ES intermediate and can include any categorization of the available motifs using letter codes. The tool is freely available at http://www.unique-solutions.pl/ctb/ | 2,017.4 | 2015-10-31T00:00:00.000 | [
"Biology",
"Computer Science"
] |
“Global” Productivity Trends, Consumption Expenditures and US Macroeconomic Conditions: A Verification of the “Contagion” Phenomenon
Panel discussions on global economic performance and the role of economic shocks, often creates the notion that adverse macroeconomic conditions prevailing in dominant economies such as the U.S, tend to have automatic impact on domestic conditions of other economies around the world. This study examined this perceived automatic contagion phenomenon by verifying how key modeled adverse macroeconomic conditions characterizing the U.S economy influence two macroeconomic indicators within selected advanced economies. Empirical estimation via SUR estimation technique verified this contagion phenomenon to some degree; test results suggests adverse macroeconomic conditions such as economic policy uncertainty, inflation expectations etc. can influence core economic indicators within some economies around the world. This study however, also found that not all cross-border interactions exhibits features of the contagion phenomenon, because some economies examined seem to be relatively insulated from modeled cross-border macroeconomic conditions.
Introduction
Do occasional adverse macroeconomic conditions or challenges in the U.S economy automatically influence key domestic macroeconomic indicators or conditions within other economies around the world? If so, do such conditions constrain growth or rather creates economic opportunities within these external economies due to growing integration as some have argued? These questions, spurred by growing divergent views on how macroeconomic conditions within a dominant economy like the U.S impacts domestic economic conditions of other economies around the world, defines the rational for this study. Related literature on similar relationship provides significant body of empirical work suggesting some relationship between macroeconomic conditions in one economy and performance of economic indicators in other economies. For instance, the concept of "Uncertainty Channel of Contagion" propounded by Kannan and Kohler-Geib (2009), which is an expansion of earlier concepts on cross-border macroeconomic interactions; to a greater extent, revolves around the notion of cross-border impact of macroeconomic conditions. In their study of the relationship in question, Kannan and Kohler-Geib (2009) sought to outline mechanics explaining how "crisis" in one economy impacts decision making behaviors among economic agents in other economies; ultimately influencing the probability of economic crisis or otherwise in those economies. Despite existing empirical and recent evidences of economic "contagion" effects (As evidenced in how recent recession (2007) emanating from the U.S impacted economies around the world), there is still an ongoing academic discussions on whether macroeconomic conditions in a dominant economy such as the U.S automatically influence economic activities around the world. Significantly constrained growth of the US economy in recent years and its perceived impact on the global economy as a whole, continue to add to this growing debate on automatic cross-border macroeconomic interactions.
Fundamental questions raised above, on how key economic indicators among economies around the world might responds to adverse or otherwise macroeconomic conditions emanating from the US economy forms the basis of this study"s enquiry. This paper among other things, specifically examines the extent to which domestic macroeconomic conditions among key advanced economies around the world responds to adverse macroeconomic conditions in the US economy, in order to verify growing perception of waning US economic influence, and its impact on global economic performance. Additionally, this study also seeks to test the presumption that adverse US macroeconomic conditions might not necessarily constrain growth of specific macro-indicators among key economies around the world as often believed due to the potential for moderating domestic policies or economic conditions. To examine this economic contagion effect phenomenon, this study employs three "adverse" macroeconomic conditions modeled as a feature characterizing the U.S economy; and estimate how industrial productivity and domestic private consumption expenditures among six selected advanced economies around the world react to such macroeconomic conditions. The choice of industrial productivity and private final consumption expenditure as a measure of macroeconomic performance within selected economies to be tested is based on the fact that much of the variability in GDP growth (a measure of general economic performance) can be attributed to consumption expenditure dynamics and industrial productivity.
As alluded to earlier, the view that macroeconomic "events" or adverse macroeconomic conditions tend to have some measure of cross-economy impact is not new to the macroeconomic literature. Studies such as Kamau (2010), Naveh, M. H., Toros Torosyan, T., andJalaee, S. A. (2012) etc. for instance; have all verified how economic integration impact economic activities of engaged economies. Conclusions from these studies suggest that economic "events" in one economy tend to have significant impact on economic activities in others due to growing integration. This study however, seeks to verify whether such growing integration or economic relationship between an economic powerhouse such as the US and other economies around the world continue to serve as a conduit for constrained growth for such economies or otherwise. This study subscribe to the view that significant variability exist in how specific domestic macroeconomic indicators among economies around the world respond to adverse macroeconomic conditions from a major world economy. Consequently, this study projects that economies to be tested might exhibit significant variability in how they respond to adverse macroeconomic conditions because of varied degrees of resiliency and susceptibility to economic shocks or external adverse macroeconomic conditions. This projection is supported by Moser"s (1998) conceptualization of the concept of vulnerability to external stimuli. This concept posit that, the extent of vulnerability of a variable or an entity to external shock or condition depends on the degree of exposure (sensitivity) and internal capacity to ward off or contain such shock or condition (resilience). In order words, how susceptible economies to be tested are to adverse macroeconomic conditions from the U.S economy depend on these two core domestic characteristics of vulnerability all things being equal.
The rest of the study is structured as follows: Section two examines adverse U.S macroeconomic conditions whose impact on other economies this study seeks to verify. Section three highlights private final consumption expenditure among selected economies and a brief account on industrial productivity across such economies. Section four initiates the process of estimating effects of adverse U.S macroeconomic conditions on domestic conditions among stated economies by stating the key variables and data source used in this study; the section also introduces econometric model employed in the estimation. Section five provides empirical results of the estimation process, examination and discussions of the results, conclusions and policy implications of the results.
Estimating Adverse US Macroeconomic Conditions
To estimate how specific macroeconomic variables associated with selected advanced economies around the world respond to adverse macroeconomic conditions emanating from the US economy, this section discusses the three main adverse macroeconomic conditions to be tested in the study. The three include economic policy uncertainty, macroeconomic uncertainty and inflation expectations. The goal is to verify the extent to which each of these macroeconomic conditions influence industrial productivity and private final consumption expenditure dynamics characterizing selected economies. In the following analysis, each adverse macroeconomic condition is defined and discussed separately to capture key features and how it might exert projected influence on economic variables in treatment. The following provide succinct discussions on these macroeconomic parameters.
Economic Policy Uncertainty
Empirical studies over the years have been confronted with the challenge of quantifying economic uncertainty as perceived by policy makers and the average consumer in order to estimate how the condition ultimately impact economic activity. Legion of studies that have estimated the effect of the condition on key economic variables have resorted to the use of proxies based on stock market volatility derived from models such as the GARCH, and others from actual surveys etc. In this study however, we employ economic policy uncertainty variable (Hereafter EPU) developed by Baker et al. (2012). This uncertainty parameter measures uncertainty inherent in policy actions by key decision makers; and the potential economic effects of such actions or inactions. This form www.ccsenet.org/ibr International Business Research Vol. 8, No. 5;2015 of uncertainty about the action or the inaction among policy makers could be monetary policy, fiscal policy etc. This index is modeled using newspaper coverage of policy-related economic uncertainty, and variability in professional expectations or forecast of key economic indicators. It is however, cogent to point out that this policy uncertainty parameter is to some extent, related to general economic uncertainty which might characterize an economy as a whole; however, the distinction is important because it captures only the policy related component of general economic uncertainty. This study, thus, investigate whether policy-related uncertainty in the U.S, as quantified by methodology employed by Baker et al. (2012), has statistically verifiable impact on industrial productivity and private final consumption expenditure dynamics among selected economies around the world.
Macroeconomic Volatility
Unlike the policy related form of economic uncertainty discussed above, macroeconomic volatility/uncertainty variable in this context is modeled to capture an overall macroeconomic variability or uncertainty associated with the general economic system as perceived by the average investor on consumer. In this study for instance, proposed macroeconomic volatility variable, is meant to capture uncertainty about the entire spectrum of economic activities characterizing the US economy within a specific time frame. Compared to economic policy uncertainty parameter developed by Baker et al. (2012) however, this variable is often derived or modeled using statistical estimation process such as GARCH etc. Studies such as Byrne & Davis (2002), Driver, Temple, and Urga (2005), Baum, Chakraborty, and Liu (2010) and Baum, Stephan, and Talavera (2008), have all employed a variant of this approach in estimating uncertainty or volatility inherent in a variable (specifically, GDP growth).
Following these studies, this study employs generalized autoregressive conditional heteroscedastic (GARCH) process in estimating macroeconomic uncertainty or volatility associated with US economic activities. This is done by estimating the extent of volatility inherent in GDP growth overtime. Econometrically, Generalized arch (GARCH) function capturing volatility associated with US economic performance is modeled as follows: Where ω is the constant term, −1 2 is the ARCH term and −1 2 is the GARCH term respectively. Equation 1 is estimated using Stata statistical package.
Inflation Expectations/Tendencies
This study also tests the extent to which Inflation expectations in the U.S impact industrial productivity and private final consumption expenditure among selected economies around world. Inflation expectations or tendencies in this regard defines macroeconomic environment where economic actors either by critical assessment of available information or otherwise espouse the view of an impending significant increase in the general price levels or inflationary conditions. According to the theory of rational expectations, decisions made by economic actors or agents tend to reflect relevant available information at their disposal at any point in time. Consequently, if this proposition by rational choice theory holds, then, expectations of an impending inflationary conditions or inflationary tendency might have the potential to significantly impact industrial productivity and private final consumption expenditures all things being equal. This conclusion stems from the fact that such information will be incorporated into decision making process relating to productivity and consumption dynamics. Inflation expectations variable employed in this study is sourced directly from the Federal Reserve Economic Data from St. Louis Fed database.
Private Final Consumption Expenditure Conditions among Selected Economies
Private final consumption expenditure (hereafter PFCE), as modeled in this study captures consumption expenditures occurring in selected economies with the exception of those made directly by the government. We project that unlike expenditures made by government which might not reflection prevailing macroeconomic dynamics because of potential for political motivations, private final consumption expenditures will reflects variability in prevailing or projected macroeconomic conditions. This section employs graphical approach in analyzing dynamics of private final consumption associated with the various economies in treatment in this study. Economies in treatment include Canada, Germany, United Kingdom, France, Switzerland and Norway. This study, as highlighted earlier verifies the extent to which specific macroeconomic conditions associated with the U.S economy influence PFCE performance trend within these economies. In reviewing PFCE dynamics within each economy, this section provides only a succinct overview of PFCE performance associated with each economy in a graphical comparison. In the following illustrations, With the exception of Sweden, all graphical www.ccsenet.org/ibr International Business Research Vol. 8, No. 5;2015 illustrations are based on quarterly data from 1980 to 2013. Few missing data points for Sweden will be treated accordingly using necessary tools during the empirical estimation process. The following collage of graphs illustrates quarterly percentage change in private final expenditure trends within the six economies to be tested in this study.
Figure 1. Private final consumption among selected economies
Critical examination of the above graphs suggests significant variability in private final consumption expenditure within the six economies employed in this study. The key question however, is the extent to which such significant fluctuations could be attributed to adverse external macroeconomic conditions emanating from the United States; the world"s dominant economy. This analysis is conducted holding constant, other country specific factors or conditions which might also influence variability in private final consumption expenditures illustrated above.
Industrial Productivity Performance among Selected Economies
Industrial productivity is another key measure of domestic economic activity. To estimate the extent to which modeled adverse external macroeconomic conditions impact economic activities within selected economies around the world, this study also verifies the effects of such conditions on industrial productivity among economies in treatment in our verification of the economic contagion effect. The following graphical representations illustrate quarterly percentage change in industrial productivity among selected economies. It www.ccsenet.org/ibr Vol. 8, No. 5;2015 shows varied industrial productivity growth rates with a sharp decline in industrial productivity activities within economies such as Germany, UK, France and Canada coinciding with the recent global recession triggered by the Sub-prime mortgage crisis from the U.S. Thus, to some extent, it could be argued that adverse macroeconomic conditions such as the U.S mortgage crisis had some significant impact on domestic economic activities among economies in question. Empirical examination of the relationship to be performed in the later part of the study is thus meant to further verify this projected cross-border contagion effect of macroeconomic conditions. Figure 2. Industrial productivity conditions among selected economies
Macroeconomic Conditions and Economic Indicators: A Brief Overview
The literature on the effects of macroeconomic conditions on core economic indicators among economies around the world, showcase plethora of empirical studies with varied conclusions. These studies however, are dominated by those focusing on the extent to which adverse economic conditions such macroeconomic uncertainty, inflation expectations, stock market volatility and other macroeconomic shocks impact economic activities or specific macroeconomic variables within an economy. Studies such as Imtiaz and Qayyum, Abdul (2008) and (2009), Shinada (2008) and Liping et al. (2010) have verified similar relationships; by focusing on how uncertainty associated with economic activity influence investment growth within an economy. Liping et al. (2010), in their study for instance, found significant negative relationship between uncertainty and investment performance among Chinese listed companies; a conclusion which is consistent with earlier conclusion by Imtiaz and Qayyum, Abdul (2008) and (2009) "within country" studies such as those mentioned, studies focusing on cross-border spillover effects of macroeconomic conditions are also not new. Studies in the finance literature for instance, present evidence suggesting that conditions in one economy could significantly perturb key financial indicators or variables in other linked economies. Studies such as Uribe and Yue (2006), Agenor et al. (2008), Mackowiak (2007) etc. illustrates how external conditions ultimately impact domestic conditions of some economies around the world. Uribe and Yue (2006), for instance, showed how US interest rate shock induces significant macroeconomic fluctuations among emerging economies around the world. Again, Agenor et al. (2008) further articulated how external shocks influence output fluctuations in the Argentine economy. Additionally, Mackowiak (2007) also showed that U.S. monetary policy shocks affect interest rates and exchange rate among emerging markets. These conclusions to some degree, lend credence to a form of the contagion effect this study seeks to further explore; by focusing on how specific macroeconomic conditions associated with the U.S economy influence industrial productivity and private consumptions expenditures among key advanced economies around the world.
The examination process involves estimating how spillover effect of macroeconomic indexes such as economic policy uncertainty etc. prevailing in the United States influence stated macroeconomic indicators among selected economies. This study"s review of existing literature suggest that, although there is a general perception that macroeconomic condition in the U.S ultimately impact economies around the world, studies focusing on the relationship tend to be few and far between; we also found that there are no empirical studies in the literature focusing on the relationship in question using specific indicators being tested in this study.
Estimating Impact of Spillover Effects of Adverse U.S. Macroeconomic Conditions
This study projects that modeled macroeconomic conditions (Economic policy uncertainty, Macroeconomic Uncertainty and Inflationary expectations conditions) could have significant impact on both industrial productivity and private consumption expenditure dynamics among major advance economies around the world because of growing global economic integration. However, differences in domestic economic capacities based on degree of vulnerability to external economic shock, is further projected to allow for significant variations in how each economic indicator within selected economies respond to modeled adverse macroeconomic conditions. This study consequently, test for the effects of stated adverse macroeconomic conditions on industrial productivity and private consumption expenditure (the highest contributor to GDP growth) among selected economies around the world using Seemingly Unrelated Regression model (SUR).
Research Methodology and Data
This study utilizes data published by FRED (Federal Reserve Economic Data of St. Louis FED). Data sets used are made up of quarterly time series spanning the year 1980 to 2013. Key variables sourced from this database include U.S economic policy uncertainty (EPU), macroeconomic uncertainty associated with the U.S economy (MEU)econometrically derived, Inflation expectations in the U.S (InfE), Industrial Productivity (IndPr) and final private consumption expenditures (PCE) associated with selected advanced economies around the world. In testing for the extent to which stated macroeconomic conditions influence two key macroeconomic variables among listed economies, this study, as already indicated, employs Seemingly Unrelated Regression Model (SUR) in verifying dynamic relationships between key dependent and explanatory variables. The rational for the choice of this estimation model is captured in the following section.
SUR Estimation
This study opted for SUR method in its verification of aforementioned dynamic relationships because of the potential for correlated error terms between specified macroeconomic conditions and variables being tested in a two equation systems. SUR procedure which hinges on the assumption that the error terms of a given system of regression equations are correlated stand out as a good fit for tests being conducted in this study. This study employs a version of seemingly unrelated regression (SUR) model developed by Zellner, (1962). The modeling process involves estimating individual variable relationships that are linked together by contemporaneous cross-equation error correlation. Reviewed evidence from existing literature suggest that when errors terms between system of equations are correlated, the SUR estimator is more efficient in examining underlying relationships between such system of variables or indicators (Baltagi, 2005). Linear SUR equation developed by Zellner (1962) utilizes sets of regression equations with cross-equation parameter restrictions and correlated error terms with differing variances. In a compact form, such SUR model is given by: E [ e i e j " ] = σ 2 i I (i = j) Where T j and e j are 1 n vectors, k j is the j n p matrix of rank P j , and δ j is a P j dimensional coefficient vector. Equation (2) is modeled to have different independent variables and error term variances; and error terms between the equations are permitted to correlate. To estimate effects of modeled independent macroeconomic conditions (variables) on specified dependent variables via the SUR method, this study first verifies stationary conditions associated with the various macroeconomic conditions or variables employed in the study via uni-variate stationary condition analysis.
Uni-Variate Stationary Condition Analysis
Univariate analysis examines specific stationary condition features associated with individual variables to be tested in the study. Time series data employed are tested for stationary conditions or otherwise. To do so, an optimum lag order for the estimation process is first determined using all the three optimum lag order estimation procedures; i.e. Akaike Information Criterion (AIC), HQIC, and SBIC. Our estimate based on all the three procedures suggested a lag order of (1); consequently, optimal lag order of 1 is employed in examining stationary conditions associated with various variables in treatment using both the Augmented Dickey-Fuller (1981) and the Philip-Perron (1988) unit root tests procedures. The following results illustrate stationary conditions associated with variables employed in this study. Stationary condition results reported in table 1, show that various macroeconomic conditions and variables employed in this study are stable at varied significance levels under both stationary assessment estimation procedures. With stationary conditions associated with variables in treatment verified, this study proceeds to examine core interactions between specific macroeconomic conditions in the U.S economy, and variability in industrial productivity and private consumption expenditure (only PCE results reported) associated with selected advanced economies around the world via SUR estimation process.
Economic Policy Uncertainty and "Global" Private Consumption Expenditures
Estimated results on how specific macroeconomic conditions associated with the U.S economy impacts productivity and consumption patterns around the globe using the SUR process are reported in table 2. In the first part of the table, reported results show that the three modeled macroeconomic conditions projected to emanate from the U.S economy tend to have varying degrees of impact on private consumption expenditure patterns within the six advanced economies employed in the study. Results for instance, suggest that economic policy uncertainty emanating from the U.S economy has no statistically significant impact on consumption www.ccsenet.org/ibr Vol. 8, No. 5;2015 expenditure pattern within the six advanced economies tested (viz, UK, Norway, France, Germany, Canada and Switzerland). The results suggest that this macroeconomic condition is only inimical to private consumption expenditure growth within the U.S economy itself. This outcome though unexpected, might reflect the view espoused by some that since economic policies tend to be country specific, uncertainty associated with one economy might not necessarily have widespread impact on other economies despite the potential for such phenomenon to occur.
Inflation Expectations and "Global" Private Consumption Expenditures
Still on first part of table 2, unlike U.S economic policy uncertainty variable, this study finds that inflation expectations in the U.S economy tend to have statistically significant impact on private consumption expenditures dynamics within most of the economies tested in the study. Reported results suggest that with the exception of Norway, inflation expectations in the U.S tend to influence private consumption patterns within the other five economies tested in the study. This study finds that inflation expectations concerns in the U.S tend to have negative impact on private consumption expenditure patterns in the UK, France, and Canada; however, the same condition is found to have positive impact on private consumption expenditures in Germany and Switzerland. A critical assessment of this divergent outcome among the economies tend to point to the extent of economic integration, or how closely linked these economies are, to the U.S economy and the degree of domestic economic vulnerability to external shock. Those that are closely integrated economically tend to be negatively impacted and vice versa. The results further show that inflation expectations all things being equal is also inimical to private consumption expenditure within the U.S economy itself.
Macroeconomic Uncertainty and "Global" Private Consumption Expenditures
Additionally, empirical results reported in Table 2, further suggest that general macroeconomic uncertainty conditions about the U.S economic growth trajectory etc., tend to only constrain private consumption expenditure growth within the UK and Canadian economies. This study finds that, apart from these two economies and within the U.S itself, growing macroeconomic uncertainty about the US economy have no statistically significant influence on private consumption expenditure conditions within the German, Norway, Switzerland and the French economies. This outcome to some extent, question growing perception that any adverse macroeconomic condition associated with a major economic power such as the U.S, ultimately impact macroeconomic conditions around the globe. This result might also reflect variability in domestic capacities and resiliency levels associated with each economy as alluded to earlier. The above analysis on how the three modeled macroeconomic conditions associated with the U.S economy influence private consumption expenditure patterns around the world, thus, suggest that adverse macroeconomic conditions in the U.S might not necessarily be inimical to varied economies around the world as often believed. Our analysis suggest that, all things being equal, the extent of economic integration and domestic economic capacities could be crucial in how effects of such adverse macroeconomic conditions will be felt among economies around the world. Note. Standard errors in parentheses + p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001.
Economic Policy Uncertainty and "Global" Industrial Productivity
In terms of how industrial productivity performance within selected economies respond to modeled adverse macroeconomic conditions emanating from the U.S economy, this study again finds highly varied effects on selected economies. For instance, result estimates suggest that economic policy uncertainty in the U.S as defined by Baker et al. (2012), has no statistically significant impact on industrial productivity within selected economies across the globe. The results even suggest that the condition has no verifiable impact on industrial productivity even within the U.S economy. This outcome, is highly incompatible with what this study anticipated; in that, we expected economic policy uncertainty to at least constrain industrial production to some degree given that such uncertainty might influence decision making behavior of some investors. Whereas a case could be made with respect to other economies if one takes into consideration unique domestic macroeconomic conditions which could minimize the effect of the external condition, the same cannot be said of why this macroeconomic condition seem to have no statistically verifiable impact on U.S domestic industrial production. This study projects that further testing controlling for some key related conditions might be needed to specifically understand the underlying condition responsible for the relationship between the variables in question.
Inflation Expectations and "Global" Industrial Productivity
Effects of inflation expectations in the US on industrial productivity within the selected economies as reported in the second part of table 2 also vary significantly as projected. Results suggest inflation expectations in the U.S ultimately constrains or have negative effects on industrial productivity in economies such as the UK, France and Canada all things being equal. The macroeconomic condition is also found to be inimical to industrial productivity within the U.S itself. Additionally, this study finds that inflation expectations from the U.S economy tend to have no within data statistical influence on industrial productivity within the German and the Swiss economies. These results does not necessarily mean that inflation expectations conditions in the U.S have no influence on economic conditions in these later economies; rather, it might reflects the extent of linkages between these economies and that of U.S, as well as differences in prevailing domestic economic conditions or policies within each economy which could make effect of the condition negligible.
Macroeconomic Uncertainty and "Global" Industrial Productivity
According to results presented in Table 2, general macroeconomic uncertainty associated with the U.S economy tend to have negative impact on industrial productivity within the UK, France, Germany and the Canadian economies respectively. Once again, economies of Norway and Switzerland seem to be less responsive or somehow insulated from such adverse macroeconomic condition emanating from the U.S economy; test results failed to find statistically significant relationship between the two variables in the case of the two economies. These divergent results to some extent further question the general notion sometimes captured in the maxim "when the U.S economy sneezes, world economies in general catches cold". The case of Norway and Switzerland (in both private consumption expenditure and industrial productivity analyses) suggest that macroeconomic perturbations in the U.S economy does not necessarily influence key economic indicators within all global economies as often advanced by some.
Conclusions
This study examined how specific adverse macroeconomic conditions emanating from the U.S economy influence key economic performance indicators among selected advanced economies around the world with the view of verifying the "contagion phenomenon". Estimated results via SUR method suggest that, the contagion phenomenon is more pronounced in some economies than others. This study for instance, finds that adverse macroeconomic conditions in the U.S tend to impact trend dynamics associated with industrial productivity and private consumption expenditures among some economies, whereas other economies seems to be insulated from such conditions. Reported results thus suggest that, despite having one of the most dominant economies around the world, macroeconomic conditions prevailing in the U.S economy might not necessarily influence domestic economic indicators among all key economies around the world as often believed. Estimated results further point to the potential that, variations in how various economies responds to such adverse external macroeconomic conditions might depend to some degree on domestic economic resiliency, and individual economy"s susceptibility to such conditions. Thus, the economic contagion phenomenon exists, but the extent of such phenomenon might be more country or economy specific. | 6,711.6 | 2015-04-25T00:00:00.000 | [
"Economics"
] |
Quantifying the effect of air gap, depth, and range shifter thickness on TPS dosimetric accuracy in superficial PBS proton therapy
Abstract This study quantifies the dosimetric accuracy of a commercial treatment planning system as functions of treatment depth, air gap, and range shifter thickness for superficial pencil beam scanning proton therapy treatments. The RayStation 6 pencil beam and Monte Carlo dose engines were each used to calculate the dose distributions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than 10 cm, and increased range shifter thickness only added to the dosimetric inaccuracy of the pencil beam algorithm. Each configuration calculated with Monte Carlo was determined to be clinically acceptable. Further comparisons of the Monte Carlo dose algorithm to the measured spread‐out Bragg Peaks of multiple fields used during machine commissioning verified the dosimetric accuracy of Monte Carlo in a variety of beam energies and field sizes. Discrepancies between measured and TPS calculated dose values can mainly be attributed to the ability (or lack thereof) of the TPS pencil beam dose algorithm to properly model secondary proton scatter generated in the range shifter.
tions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than in water when a range shifter was used. Do not use the device in these situations." 4 RayStation updated the language in the RayStation 6 User Manual to explain the reasoning for the inaccurate dose calculation and suggests the use of the Monte Carlo dose engine to more accurately calculate dose in such situations. 5 One of the main benefits of proton therapy is the ability to control the distal range of the treatment field by taking advantage of the Bragg Peak. This allows for the treatment of target volumes located proximal to normal tissue or organs at risk with little dosimetric detriment to the non-target volumes. 6 When target volumes are located relatively deep in the patient, the accuracy of the TPS is sufficient. 1,3 Targets such as chest wall, however, can have a significant portion of the target volume located a depths shallower than 3 cm. At such shallow depths, the minimum beam energy has a range greater than the target depth. A range shifter placed in the beamline sufficiently reduces beam energy such that full dose modulation is achievable at the patient surface. The ProteusONE is capable of producing a minimum beam energy of 70 MeV, which has a range in water of approximately 4.1 cm. 7 WKCC commissioned a 3.5 cm physical thickness (4.1 cm water-equivalent thickness) Lexan range shifter to treat shallow target volumes with the ProteusONE.
Other proton therapy systems with minimum beam energies of 100 MeV would require a range shifter with approximately 7.5 cm water-equivalent thickness. 7 As noted above, the use of range shifters for shallow treatments can be problematic for a TPS using a pencil beam dose algorithm.
Though most commercially available proton TPSincluding Pinnacle, 8 XiO, 9 Eclipse, 10 and RayStation 4use pencil beam dose algorithms, no published studies could be found which quantify the functional dependence of TPS dosimetric accuracy on depth or air gap. A selection of publications have quantified TPS accuracy for multiple depths with a fixed air gap, 11 and other works have generally noted that a pencil beam algorithm breaks down with large air gaps and shallow depths. 2,4,5,12 This study, for the first time, systematically quantifies the dosimetric accuracy of a proton pencil beam dose algorithm as a function of range shifter air gap and treatment depth for superficial proton PBS treatments. Moreover, this study performed an identical analysis using the RayStation 6 Monte Carlo proton dose engine to determine the improvement in dosimetric accuracy one may expect when using Monte Carlo. Finally, a smaller subset of this study performed similar measurements with a thicker range shifter to identify the relationship between pencil beam TPS accuracy and range shifter thickness. This data was then tested against patient treatment plans to confirm its applicability to the clinical treatment environment. To further confirm the dosimetric accuracy of the RayStation 6 Monte Carlo dose algorithm at beam energies and field sizes other than those described above, MC-calcu-
2.B | Experimental setup
The optimized treatment plan was exported to MOSAIQ (Elekta, Sunnyvale, CA, USA) and delivered by the IBA ProteusONE compact-gantry proton therapy system with a 3.5 cm Lexan range shifter inserted in the retractable snout. Dose was measured with the PTW T23343 Markus chamber (PTW, Freiburg, Germany) embedded in SP34 RW3 solid plate phantom material (IBA-Dosimetry, Schwarzenbruck, Germany). Dose measurements at a given depth were taken for each air gap by simply moving the range shifter snout to the appropriate position. When all data for one depth were acquired, the chamber was repositioned to the appropriate depth in the phantom, the vertical couch position was adjusted to keep the isocenter position constant, and the measurement process was repeated for all depth/air gap combinations. MatriXX PT with the Markus chamber showed the MatriXX PT CAX dose to be accurate within 0.4%. This daily output correction factor (P DO ) is the ratio of the baseline central-axis dose determined during machine commissioning to the measured daily CAX dose, and was included in the TRS-398 absorbed dose calculation. 13 D w;Q ¼ M raw N D;w;Q0 k Q;Q0 P T;P P ion P pol P elec P DO (1) All further analysis of dosimetric accuracy excludes these surface doses, as they are clearly significant outliers in otherwise consistent data.
3.B | Pencil beam algorithm dosimetric accuracy
The dosimetric accuracy of the TPS pencil beam algorithm has a dependence on both depth and air gap, as shown in Figs. 1(a) and 1(b). PB-calculated TPS doses become more accurate at increasing depths and at decreasing air gaps. When the air gap is relatively small, the TPS accuracy is clinically acceptable (within 3%) at all depths 2 mm and deeper. As the air gap widens, dosimetric accuracy degrades, especially at the shallowest depths. The depth dependence of pencil beam dose algorithm accuracy is strongest in the shallowest 1 cm, eventually stabilizing beyond 3 cm. Table 1 bins the information from Figs. 1(a) and 1(b), while Table 3 shows the complete set of data acquired. Table 4 shows all Monte Carlo data. Figures 1(a), 1(b), 2(a), and 2(b) show a representative subset of the data which allows the observer to understand the trends while minimizing clutter.
3.D | Range shifter thickness
Previous works by the authors have reported findings from similar tests, which directly compared the air gap and depth dependences of a 3.5 cm (4.1 cm WET) range shifter to a 6.5 cm (7.4 cm WET) range shifter. On average, the dosimetric error of the thicker range shifter was found to be approximately 50% greater than the thinner range shifter. 15 T A B L E 7 Depth, extended air gap, estimated errors, and total expected dose difference for paraspinal fields using PB and MC dose engines.
3.E | Clinical validation of data using patient plans
As a clinical test of this data, shallow QA dose planes of a chest wall patient and a patient with paraspinal mets were calculated with both PB and MC dose engines and compared via c-analysis. The paraspinal plan was also tested with extended air gaps to illustrate the difference between a well-planned treatment with air gaps less than 10 cm and a sub-optimal plan with air gaps greater than 15 cm.
Given the depth and air gap for each field, the expected dose error of the PB and MC calculations were determined by interpolating data in Tables 3 and 4, respectively. The difference of PB and MC dosimetric errors represents the total expected dose difference between datasets. If the data in Tables 3 and 4 Table 7 shows the corresponding data for the paraspinal patient when the air gap has been extended an additional 10 cm for each fieldthese fields have been identified as Field 03a RPO and Field 04a RPO.
A series of c-analyses were performed for each field, with the %D criteria incrementally increased until nearly all points (>99%) passed, as shown in Table 8 This confirms the applicability of the data collected in this work with respect to other clinical patient treatment plans.
3.F | Validation with commissioning data
Because the majority of data collected for this work were based on a single treatment field, three additional fields of varying energy and field size were modeled in the TPS and compared against measured data. Figure 3 shows three separate plotsone for each treatment fieldwhich depict the depth dose curves as calculated by RaySta-
| CONCLUSION
For the first time, this study comprehensively quantifies TPS dosimetric accuracy of range-shifted proton fields as a function of depth, air gap, and range shifter thickness. When pencil beam dose algorithms are used to create superficial PBS treatments, the air gap should be reduced as much as patient setup allows, and range shifter thickness should be minimized to correspond with the range of the machine's minimum energy. Poor modeling of secondary proton scatter generated in the range shifter, also known as the nuclear halo effect, is the main contributor to TPS dose overestimation. 5 As mentioned by RayStation and as confirmed by this study, implementation of a Monte Carlo dose engine has helped mitigate this error.
ACKNOWLEDGMENTS
We wish to thank Dr. Kuanling (Gwen) Chen and Dr. Matthew Maynard for their assistance, expertise, and impromptu sanity checks.
Without their guidance, we could not have completed this project.
CONFLI CT OF INTEREST
The authors have no conflicts of interest to disclose. | 2,550.8 | 2017-12-14T00:00:00.000 | [
"Medicine",
"Physics"
] |
Individual separation of surface, bulk and Begrenzungs effect components in the surface electron energy spectra
We present the first theoretical recipe for the clear and individual separation of surface, bulk and Begrenzungs effect components in surface electron energy spectra. The procedure ends up with the spectral contributions originated from surface and bulk-Begrenzungs excitations by using a simple method for dealing with the mixed scatterings. As an example, the model is applied to the reflection electron energy loss spectroscopy spectrum of Si. The electron spectroscopy techniques can directly use the present calculation schema to identify the origin of the electron signals from a sample. Our model provides the possibility for the detailed and accurate quantitative analysis of REELS spectra.
We present the first theoretical recipe for the clear and individual separation of surface, bulk and Begrenzungs effect components in surface electron energy spectra. The procedure ends up with the spectral contributions originated from surface and bulk-Begrenzungs excitations by using a simple method for dealing with the mixed scatterings. As an example, the model is applied to the reflection electron energy loss spectroscopy spectrum of Si. The electron spectroscopy techniques can directly use the present calculation schema to identify the origin of the electron signals from a sample. Our model provides the possibility for the detailed and accurate quantitative analysis of REELS spectra.
As early as 1957, Ritchie theoretically predicted the excitation of surface plasmons of thin films by fast electrons. Two years later, following the theoretical prediction, Powell and Swan 1,2 discovered this kind of excitation experimentally in the spectra of two free-electron-like materials, i.e. aluminum and magnesium. Since the first observation of surface excitations especially hot topic of interest is to develop a method or technique to separate the surface and bulk properties as observed by electron spectroscopy. We note that, in Ritchie's pioneering work 3 , the surface effect was already divided into two parts: one of them is the additional surface modes of the polarization field in the vicinity of the surface, which have an excitation energy of about ω b √ 2 where ω b is the bulkplasmon excitation energy, and the second one is the coupling between surface modes and bulk modes near a boundary, which results in a reduction of the intensity of bulk excitations. Such a decrease effect on the bulk excitation is known as Begrenzungs effect. The surface excitation together with the Begrenzungs effect forms the surface effect. By using the secondary-electron electron-energy-loss coincidence spectroscopy, a strong reduction of bulk mode in the surface scattering zone has been observed 4 . The low-loss electron energy loss spectra for Ti 3 C 2 T 2 (T = OH or F) stacks of various thicknesses have been measured and it has been found that the intensity of bulk plasmon is significantly reduced as the Ti 3 C 2 T 2 stack thickness is decreased 5 . The plasmon energy of a 2-nm GaN quantum well was larger than that of a relaxed GaN 6 . Those phenomena are considered to be due to the influence of the Begrenzungs effect. However, there is a lack of quantitative analysis methods for dealing with the Begrenzungs effect.
Energy loss of electrons near surfaces raises several interesting problems, among them is the separation of surface and bulk effects. In the standard electron spectroscopy techniques, it is not possible to resolve the clear, distortion-free separation of surface properties from the bulk one. This is due to the fact that electrons always penetrate into the material and move either deep inside the bulk or move near the surface region. The probability of the energy loss can be determined by the dielectric response function, ε q, ω , which is a function of the frequency ω and the wavenumber q of the electromagnetic disturbance. For the accurate theoretical modeling of the electron spectra, the surface effects and the multiple electron scattering in the inelastic interaction must be treated with special care. This special care is especially important at low incident energies and at grazing scattering geometries, where surface effects dominate.
Significant improvements in describing the surface excitations [7][8][9][10][11][12][13][14][15][16][17][18][19][20] have been made in the last decade. Tougaard and Kraaer investigated the inelastic cross sections of several elemental materials using the reflected www.nature.com/scientificreports/ electron energy loss spectroscopy (REELS). They found that the accurate description of the surface excitation, which is enhanced at low incident energies, is very important in the quantitative analysis of REELS spectra 7 . The early theoretical approach employed a simple two-layer model to interpret the measured backscattered electron spectra 8,9 . The top layer with the thickness of several atomic monolayers is characterized with the surface energy loss function (ELF) and the bottom one with the bulk ELF. In some other previous works [10][11][12] , the surface and bulk excitations are considered as two independent events and the corresponding probabilities can be linearly superimposed in a dielectric functional formulation, thus, described by the surface and bulk ELFs, respectively. However, both these models are not so accurate, due to the reason that the surface effect in these two models is isotropic and will not occur in the vacuum while in a real sample it is depth-dependent and can also occur in the vacuum 13,14 . Based on a quantum mechanical approach, Ding [13][14][15] has derived a formalism of position-and velocity-dependent electron inelastic scattering cross section near the surface region via a complex self-energy formula. This quantum mechanical model of the inelastic scattering was applied in the simulation of REELS spectrum for ideal flat Au, Si 16 , and Ag [17][18][19] surface and rough Al surface 21,22 . However, we note here that this sophisticated quantum mechanical model is less computationally efficient compared with a semi-classical model 20 . It has been verified that the depth-dependent differential inverse inelastic mean free path (DIIMFP) produced by the quantum model and the semi-classical model is quite similar and the difference between the REELS spectra simulated by these two models is practically invisible 23 . Therefore, nowadays the semi-classical model, which effectiveness has been verified by many previous works [24][25][26][27][28][29][30][31] , is more frequently widespread. On the basis of the semi-classical dielectric response theory, a theoretical model for the DIIMFP for incident and escaping electrons in a layered structure sample has been developed 32 . By using this layered structure DIIMFP, the simulation of REELS spectrum for carbon contaminated SrTiO 3 surface 33 and Fe/Si overlayer sample 34 have been performed. Although we have in our hands good models to describe the surface effect, they are still not able to separate clearly the spectral components and do a further detailed quantitative analysis. A deconvolution method has been developed by Tougaard and Chorkendorff 35 to extract the DIIMFP from REELS spectra. Such a deconvolution method has been applied to Al 35 and Si 7 . The resulted DIIMFPs of Al and Si have negative values, which are non-physical, around ω b + ω s , where ω b and ω s are the bulk-and surface-plasmon excitation energy, respectively. This is due to that the influence of both the angular distribution of elastic scatterings and the surface effect are omitted. Their method has been improved by considering the surface effect 36 . A trial-and-error procedure was employed to find the best fitting ELF which can be used to calculate the DIIMFP in the best agreement with the DIIMFP extracted from experimental REELS 37 . However, there are still large deviations between the calculated and experimentally extracted DIIMFPs in the energy loss range up to ω b + ω s . Werner 38 hypothesized that the bulk excitation and surface effect are uncorrelated and REELS spectrum can be expressed via a convolution of various excitations with the elastic peak. Then the energy loss distribution of single surface effect and single bulk excitation, which are named as the differential surface excitation probability (DSEP) and DIIMFP, can be extracted from the experimental REELS spectra based on a deconvolution approach. Based on the obtained DSEP and DIIMFP, they can revisit the REELS spectra and perform the quantitative analysis [39][40][41] . However, the generation mechanism of REELS spectra is very complex; it is made of elastic scattering, inelastic scattering, surface effect, multiple scattering effect as well as influenced by experimental condition. Therefore, the REELS spectra are hard to be expressed accurately by convolution formulation. Two peaks at 12 eV and 34 eV appear in retrieved bulk excitation DIIMFP of Si from experimental REELS spectra; this fact indicates that such a retrieved DIIMFP contains partial surface excitation (12 eV) and multiple scattering effect (34 eV). This is due to that the multiple scattering effect and the surface effect cannot be well deducted by using the deconvolution method.
On the other hand, Monte Carlo (MC) simulation method is a powerful tool for the simulation of electronsolid and electron-surface interaction. It can deal with the multiple scattering effect more accurately, and can be used to obtain both the electron energy spectra 29 and secondary electron yields [42][43][44] which are in good agreement with the experimental results. The quantitative analysis of REELS spectra can be done based on a MC simulation method 24,30 . Both the current deconvolution scheme and the MC simulations have a disadvantage, i.e., there is no more subdivision of the surface effect. The quantitative analysis of individual Begrenzungs effect and surface excitation cannot be performed based on the existing methods.
In this work, we present a recipe for individual separation of surface, bulk, and Begrenzungs effect components in the surface electron energy spectra. Our theoretical recipe is based on the evaluation of the depthdependent DIIMFP. As an example, the present recipe is applied to the analysis of REELS spectra of Si at the primary energy of 5 keV. Figure 1 shows the experimental and simulated total REELS spectra with the partial spectral components, as bulk, surface and mixed excitations and Begrenzungs effect components for Si at primary energy of 5 keV. The agreement between the total simulated REELS spectrum and the experiments is excellent. For each detected electron, the present recipe can trace the number of inelastic scatterings and the specific type for each single inelastic scattering. Therefore, it is straightforward the separation of the multiple scattering term for each component. Figure 2 shows the multiple scattering terms of different simulated components in the REELS spectrum for Si at 5 keV.
Results
According to Fig. 2 the signature of the multiple electron scatterings can be well characterized with separate peaks, where each peak can be assigned with an order of the multiple scattering. At higher electron energies, the single scattering for surface excitation dominates (Fig. 2b). The intensity of electrons which suffer no bulk excitation and more than two surface excitations is much stronger than intensity of electrons which suffer no bulk excitation only one surface excitation. In the mixed contribution we highlight the contribution dedicated to the single surface excitation, where again we can separate and well define peaks (Fig. 2d). In the absolute yield the bulk excitation is the largest and the yield of the mixed contribution is the smallest. Figure 3 shows a www.nature.com/scientificreports/ comparison of the relative yields for various orders of excitation components. The green area corresponds to the bulk excitation, the pink area corresponds to the surface excitation and the gray area corresponds to the mixed scattering component. Due to the localization of the surface effect, the intensity of I n b ,n s (ω) term decreases rapidly with the increasing of the number of surface excitation n s as shown in Fig. 3. www.nature.com/scientificreports/ For the clear separation between the bulk and surface excitations, we need to analyze further the mixed scattering component. Figure 4 shows the total spectral component of the mixed term with two partial distributions when the number of inelastic collisions is 2 or 3. Here, we introduce the shorthand b and s to denote the bulk and surface scatterings, respectively. In this notation, the so-called bulk excitation due to the electron inelastic collision in the bulk is denoted by b, while the surface excitation due to the electron inelastic scattering in the surface region is denoted by s. Longer sequences can be referred to as, for example, bs, sb, or even more longer sequences like, bss, sbs, ssb. Moreover, the order from left to right of each symbol denotes the order of different www.nature.com/scientificreports/ collisions. In this notation, there are two types of collisions in the so-called double mixed collision. The first one when the first collision is in the bulk and the second collision is in the surface before the electron escape from the sample (bs). The second case when the first collision is in the surface and the second collision is in the bulk before the electron escape from the sample (sb). Using our MC simulation, we can directly calculate the corresponding contributions of the mixed terms. With increasing the number of collision, the number of different collision sequences increases drastically. In the case of 3 collisions, the number of cases is 6 (Fig. 4c). The intensities of bs, bss, and bbs are slightly lower than that of sb, ssb, and sbb, respectively. This behavior can be interpreted taken into account the different excitation probabilities when an electron passes through the surface region either from the vacuum to the sample or from the sample to the vacuum, i.e. v ⊥ < 0 or v ⊥ > 0 in Eqs. (2)(3)(4). Obviously, the surface excitation mainly occurs when electrons move from the sample to the vacuum (from the vacuum to the sample) in bs, bss, and bbs (sb, ssb, and sbb). The intensity of bsb is much lower than that of sbb and bbs, which clearly shows that the final collision order of an electron depends on the trajectory due to the depth-dependence of surface effect. The collision order of electron can be counted in detail mainly because the MC method traces the whole process of electron transport from entering to the sample to absorption or emission from the surface. This is an important advantage of the MC simulation method in the application quantitative analysis of surface electron energy spectra compared to the convolution method 38,45 .
Discussions
We note that there are two kinds of inelastic scatterings in each single collision when an electron passes through the surface of a sample, i.e. bulk and surface excitations. In order to classify and divide into two parts of a REELS spectra as only surface or bulk-Begrenzungs contribution, as the classification of each single collision, we need to deal with the mixed scatterings. One simple way is to classify the mixed scatterings regarding to if the last collision is surface or bulk before the electron escapes from the sample. Applying this scenario, the individual and separate surface and bulk excitation can be calculated. Figure 5 shows the mixing free individual separation of surface, bulk contribution for Si at incident energy of 5 keV. In summary, a new theoretical recipe for the clear and individual separation of surface, bulk and Begrenzungs effect components in the electron spectra without any mixing between the components was shown. Our model is based on the evaluation of the depth-dependent differential inverse inelastic mean free path. By using this method, one can analyze the contribution from different components in a REELS spectrum in detail. The quantitative analysis of REELS spectrum of Si at the primary energy of 5 keV has been performed. Our work proves that single scattering for surface excitation dominates for the REELS spectrum of Si, due to the localization of the surface effect. The present analysis clearly shows that the final collision order of an electron depends on the trajectory due to the depth dependence of surface effect. This work extends the quantitative analysis method of REELS spectra into the more detailed and accurate realm.
Methods
The solid medium is considered to occupy a semi-infinite space with the surface boundary defined at z = 0 . A sketch of the considered geometry for the problem by indicating the vacuum ( z > 0 ) and solid ( z < 0 ) regions is shown in Fig. 6. When an electron passes through a solid surface, elastic scattering occurs only inside the solid, while there are three situations to be considered for the inelastic scattering process. First, the electron is near the surface region of the vacuum (region I), where only surface excitation occurs. Second, the electron is near the surface region of the solid (region II), where the bulk excitation, Begrenzungs effect and the surface excitation jointly contribute to the inelastic scattering process. Third, the electron is in the interior region of where In Eqs. (2)-(4), ω = ω − qv sin θ cos φ sin α , q � = q sin θ , v ⊥ = v cos α and E = v 2 2 . α is defined as the angle between the surface normal and the electron moving direction. The upper and lower limits of the integrals are q ± = √ 2E ± √ 2(E − ω) . So, according to this definition we have functional form of bulk and surface excitations and also for Begrenzungs term. Equation (2) defines the bulk excitation, which does not depend on the depth and represents the scattering of electrons inside a semi-infinite material. The Begrenzungs term (Eq. 4), occurring only inside the solid, indicates a decrease of the bulk inelastic cross section, which is due to the coupling between the volume and surface modes that are orthogonal 46 . Here we consider this effect separately instead of mixing with surface excitations. One may note that the Begrenzungs term gives negative values and obviously it is impossible to measure practically. The only way for detailed investigation of Begrenzungs effect is to perform a quantitative theoretical analysis based on the experimental spectra. The surface excitation occurs not only inside a solid but also above it in the vacuum near the surface (see Eq. 3). The momentum transfer-dependent ELF, Im −1 ε q, ω , in Eqs. (2)-(4) can be obtained by an extension from the long wavelength limit q → 0 , namely the optical ELF Im −1 ε(ω) , by assuming a dispersion relation. In this work, a FPA-Ritchie-Howie method 29 is employed to extended the ELF, i.e. using the full Penn algorithm (FPA) 47 to extend the ELF for the calculation of the bulk DIIMFP σ bulk while using Ritchie and Howie's scheme 48 for the calculation of the surface excitation DIIMFP σ surf and Begrenzungs effect term σ beg .
Here we would like to highlight again that the Begrenzungs effect is a weakening effect on bulk excitation. So the Begrenzungs effect cannot exist alone, it is closely linked to bulk excitation. The inelastic scattering events are identified either as bulk or surface excitations. The measured or calculated electron spectra can be expressed as a sum of contributions of various scatterings in the form: www.nature.com/scientificreports/ where n b and n s are the number of bulk and surface excitations, respectively. The first term in Eq. (5) represents the elastic peak, I 0 (ω) = I n b =0,n s =0 (ω) . The second term shows that the signal electrons suffer only bulk excitation. Due to the influence of Begrenzungs effect, this term can be expressed as I bulk+beg (ω) = I beg (ω) + I bulk (ω) . The third term is the contribution of electrons suffer only surface excitation, i.e. it represents the pure surface excitations, I surf (ω) . The last term contains the signal electrons which suffer both bulk and surface excitations in direct consequence of multiple scattering. We refer hereafter this term as the mixed term, I mix (ω) , as a superposition between bulk and surface excitations with Begrenzungs effect. So the Eq. (5) can be rewritten as: Given an experimental REELS spectrum, the specific analysis steps of present method are: (a) extract the ELF from the experimental spectrum by the reverse MC method 12,24 ; (b) perform a MC simulation of REELS spectrum by using the obtained ELF; (c) derive spectrum components as given in Eqs. (5), (6) from the MC simulated spectrum. One may also perform a quick analysis based on the existing ELF. In this work, REELS spectrum of Si at the primary energy of 5 keV is used as an example. Mott's cross-section 49 is used to describe electron elastic scattering in a MC simulation of REELS spectrum. The Thomas-Fermi-Dirac atomic potential 50 is used in the calculation of Mott's cross-section. We used ELF from 29 below 200 eV, and Henke's data 51 for 200 eV-30 keV in the calculation of inelastic cross section. Although it has been reported that the negative DIIMFP in vacuum may indicate an energy gain of electrons due to the interaction with the surface plasmon 52 , however, its influence to the REELS spectra at the primary energy of 5 keV can be negligible. Therefore, in the present simulation of the electron spectra such energy gain has not been taken into account. The electrons suffer inelastic scatterings during transport in materials, which are identified either as bulk or surface excitations. The probability of surface excitation can be determined as P surf = σ surf σ total , which depends on the electron energy E , moving direction α , depth z and energy loss ω . Hence, the specific type for each inelastic scattering can be determined by sampling. Using Eqs. (1)-(4), we can distinguish the type of inelastic scattering and count the number of bulk excitation or surface excitation in a MC simulated REELS spectrum. According to Eq. (6), three different components, i.e. I bulk+beg (ω) , I surf (ω) and I mix (ω) , can be obtained.
In order to separate the bulk excitation component and Begrenzungs effect, a virtual situation was considered by assuming that the Begrenzungs effect does not exist. In this simulation the DIIMFP is as follows: σ total = σ bulk + σ surf . Based on the results of the virtual simulation three different spectral components, i.e. I ′ bulk (ω) , I ′ surf (ω) and I ′ mix (ω) are obtained. The Begrenzungs effect is a correction to the bulk excitation rather than to play a major role in the evaluation. We can assume that, there is no difference between I bulk (ω) and I ′ bulk (ω) , so Begrenzungs effect component can be written as I beg (ω) = I bulk+beg (ω) − I ′ bulk (ω) = I bulk+beg (ω) − I bulk (ω).
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. | 5,306 | 2021-03-15T00:00:00.000 | [
"Physics"
] |
Well defined extinction time of solutions for a class of weak-viscoelastic parabolic equation with positive initial energy
Abstract: In the present paper, we consider an important problem from the point of view of application in sciences and mechanic, namely, a class of p(x)-Laplacian type parabolic equation with weakviscoelasticity. Here, we are concerned with global in time non-existence under suitable conditions on the exponents q(x) and p(x) with positive initial energy. We show that the weak-memory term is unable to stabilize problem (1.2) under conditions (1.5) and (1.7). Our main interest in this paper arose in the first place in consequence of a query to blow-up phenomenon.
describes the evolution of the concentration during the propagation of particles in Ω. Here c(x, t, u) describes a source if it is positive or a bowl if it is negative, the diffusion coefficient a(x, t, u, ∇u) reflects the intrinsic ability of diffusion in particles in the medium. Needful to say, it has a numerous generalizations, we can also do with p(x)-Laplacian denoted by ∆ p(x) , which has an exponent variable property.
The fact that p(x)-Laplacian is not homogeneous, makes non-linearities more complicated than the operator p-Laplacian. Studies of various mathematical systems with variable exponent growth conditions have received considerable attention in recent years, which is justified by their various physical applications. However, few papers have treated evolutionary equations of non-local p(x)-Laplacian type (Please see [1,5,12,13,24,25]). Viscoelastic materials demonstrate properties between those of elastic materials and viscous fluid. As a consequence of the widespread use of polymers and other modern materials which exhibit stress relaxation, the theory of visco-elasticity has provided important applications in materials science and engineering (Please see [7,8,10,11,19,21,22]).
The viscoelastic materials show a behavior which is something between that of elastic solids and Newtonian fluids. Indeed, the stresses in these media depend on the entire history of their deformation, not only on their current state of deformation or their current state of motion. This is the reason why they are called materials with memory. The viscoelastic equations with fading memory in a bounded space has been deeply studied by several authors, in view of its wide applicability.
The lack of stability of solutions of viscoelastic partial differential equations is a huge restriction for qualitative studies. In the present paper, we consider for x ∈ Ω, 0 < t < ∞ with initial and boundary conditions where q(·) and p(·) are two continuous functions on Ω such that The viscoelastic term is represented as t 0 µ (t − s) ∆ x w ds, it is called "weak-viscoelastic" when it comes with the time weighted function σ(t), which is considered as a dissipative term and causes stability of systems. The nonlinear term |w| p(x)−2 w is known as the source of instability. The importance of our study lies in the study of the interaction between the exponents of source term and the Laplacian with the presence of weak-viscoelastic term. We take the exponents as a variable functions, with their difficulties in the mathematical point of view, to obtain a very large applications. These contributions extend the early results in literature.
We assume that q(x) satisfies the Zhikov-Fan condition, i.e. for all x, y ∈ Ω, with K > 0, 0 < κ < 1 and Since the relaxation function µ lies with the stability of solutions, we state assumptions on µ and σ as: µ, σ ∈ C 1 (R + , R + ) satisfy For positive constant C depending only on Ω determined by Lemma 2.1, we set for some constant λ > 0 (will be specified later) is related to the parabolic problem and when the exponents q(x) = q, p(x) = p, the existence/non-existence results have been extensively studied (please see [2-4, 14, 15, 23]).
Extinction phenomenon for parabolic equation with nonlinearities in divergence form are investigated in [18], under nonlinear boundary flux in bounded star-shaped region. The authors assumed conditions on weight function to guarantee that the solution exists globally or blows up at finite time. Moreover, using the modified differential inequality, upper and lower bounds for the blow-up time of solutions were derived in higher dimensional spaces.
In the case where q(·) and p(·) are constants, the existence of local solutions of initial-boundary value problem is proved by Akagi in [2] for initial data u 0 ∈ L r (Ω) and 2 ≤ q < p < +∞ and Ω is an open bounded domain in R n , under r > n(p − q)/q. For the case where q(·) and p(·) are two measurable functions, it is well known that some additional techniques must be used to study the existence/nonexistence of solutions for (1.2)-(1.4) and of course the classical methods may be failed unless some developments are made. For the case of nonlocal p(x)-Laplacian equations and in the absence of the memory term (µ ≡ 0), problem (1.2)-(1.4) has been studied byÔtani [20]. The author treated the question of existence and qualitative studies of solutions of (1.2)-(1.4) and showed that the difficulties come from the use of non-monotone perturbation theory. To complete these studies, the questions of blow-up of solutions for the same problem are discussed later by many authors.
In this paper we shall establish a blow-up result of solutions for problem (1.2)-(1.4) in the Lebesgue and Sobolev spaces with variable exponents and positive initial energy.
Preliminary and well known results
We try to list here some useful mathematical tools. First, let p : Ω −→ (1, ∞) be a measurable function. Denoting by We define the p(·) modular of a measurable function w : where The special Orlicz Musielak space L p(·) (Ω) is a Lebesgue space with variable-exponent and it consists of all the measurable function w defined on Ω for which be the Luxembourg norm on this space (see [16]). The Sobolev space W 1,q(·) (Ω) consists of functions w ∈ L q(·) (Ω) whose distributional gradient ∇ x w exists and satisfies |∇ x w| ∈ L q(·) (Ω). This space is a Banach with respect to the norm w 1,q(·) = w q(·) + ∇ x w q(·) .
where Ω is a bounded domain and C > 0 is a constant. The norm of space W 1,q(·) 0 (Ω) is given by is a measurable function and with continuous and compact embedding and where C S > 0 is an embedding constant.
Main results
Here, we present without proof, the first known result concerning local existence (in time) for problem (1.2)-(1.4). (see [22]) Now, to prove the blow up result, we should define the energy functional E (t), associated with our problem by Proof. Multiplying (1.2) by ∂ t w, integrating by parts over Ω, using (1.7) and Lemma 2.6, we get the desired result.
By using conditions (1.6), (1.8) and thanks to there exists a constant by (1.5) and (1.7), we have 1 such that Let f be a function defined by f : Then f is increasing in (0, α), decreasing for ψ > α, f (ψ) −→ −∞ as ψ −→ +∞ and Then there exists a constant β > α such that
Combining with the estimates obtained in the above Lemmas, we state the main result concerned with finite time blow-up.
By differentiating L, we get Using Cauchy Schwarz's inequality and Lemma 2.6 to obtain for some positive constant C 0 > 0 (to be determined later). Then, we have By using Young's inequality, (1.8) and Lemma 2.6 for some constant c > 0, we obtain We then substitute for σ(t) (µ • ∇ x w) (t) from (3.2), hence (3.8) becomes . By using (1.8) and (3.6), the estimate (3.9) takes the form This implies that, we can choose c > C 0 + λ to get Then | 1,815.8 | 2021-01-01T00:00:00.000 | [
"Mathematics"
] |
Heterogeneous Nucleation and Growth of CaCO 3 on Calcite (104) and Aragonite (110) Surfaces: Implications for the Formation of Abiogenic Carbonate Cements in the Ocean
: Although near-surface seawater is supersaturated with CaCO 3 , only a minor part of it is abiogenic (e.g., carbonate cements). The possible reason for such a phenomenon has attracted much attention in the past decades. Substrate e ff ects on the heterogeneous nucleation and growth of CaCO 3 at various Mg 2 + / Ca 2 + ratios may contribute to the understanding of the origin of abiogenic CaCO 3 cements. Here, we used in situ atomic force microscopy (AFM), scanning electron microscopy (SEM), X-ray di ff raction (XRD) and Raman spectroscopy to study the heterogeneous nucleation and growth of CaCO 3 on both calcite (104) and aragonite (110) surfaces. The results show that (1) calcite spiral growth occurs on calcite (104) surfaces by monomer-by-monomer addition; (2) the aggregative growth of aragonite appears on aragonite (110) surfaces through a substrate-controlled oriented attachment (OA) along the [001] direction, followed by the formation of elongated columnar aragonite; and (3) Mg 2 + inhibits the crystallization of both calcite and aragonite without impacting on crystallization pathways. These findings disclose that calcite and aragonite substrates determine the crystallization pathways, while the Mg 2 + / Ca 2 + ratios control the growth rate of CaCO 3 , indicating that both types of CaCO 3 substrate in shallow sediments and aqueous Mg 2 + / Ca 2 + ratios constrain the deposition of abiogenic CaCO 3 cements in the ocean. perspective the origin of abiogenic CaCO 3 cements in the
Introduction
The supersaturation of near-surface seawater is about six and four times higher than that with respect to calcite and aragonite, respectively, whereas a much smaller fraction of the total sedimentary CaCO 3 minerals is abiogenic [1], which contradicts traditional thermodynamics [2,3]. These abiogenic CaCO 3 minerals, composed of aragonite and high-magnesium calcite in shoal-shallow marine environments [4][5][6], typically form either as heterogeneously precipitated primary marine cements or through post-depositional and diagenetic reactions [1]. In order to further understand the lack of abiogenic CaCO 3 in the modern ocean, many efforts have been conducted to ascertain factors which affect the precipitation of abiogenic CaCO 3 [7][8][9][10][11].
The precipitation of abiogenic CaCO 3 correlates with aqueous chemistry [12]. The pH, temperature, P CO 2 and a HCO − 3 affect the saturation index (SI, which is defined as SI = logΩ = log(IAP/ K sp ), where Ω denotes saturation state, and IAP and K sp represent activity product and solubility product, respectively) of solutions [13] and play key roles in determining the precipitate rate (r) of CaCO 3 polymorphs. The r is constrained by Ω, following the equation r = k (Ω − 1) n [2]. In addition, the formation of CaCO 3 minerals is partly kinetically driven, i.e., calcite growth favors a low reaction rate, while aragonite and vaterite prefer to form in relatively fast reactions [14]. In addition, the precipitation of CaCO 3 (e.g., calcite) depends on the a Ca 2+ /a CO 2− 3 of solutions. Due to the differences in the dehydration properties between Ca 2+ and CO 3 2− and the geometry of calcite step kink sites, the step velocities of the obtuse and acute angled edges can reverse with the variation of a Ca 2+ /a CO 2− 3 at a fixed SI [15][16][17]. On the other hand, impurity ions and molecules in solutions can also influence CaCO 3 precipitation. Inorganic anions (e.g., PO 4 3− and SO 4 2− ) exert the impacts by competing with CO 3 2− for Ca 2+ sites or incorporating in a crystal lattice [13]. Divalent metal cations, such as Mg 2+ , Ba 2+ and Sr 2+ , inhibit calcite growth by step pining, incorporation or kink blocking [18][19][20][21]. Similarly, soluble organic molecules (such as humate, fulvate, citrate and polyfunctional aromatic acids) inhibit the precipitation of aragonite by the modification of functional groups in these additives [11,22,23]. The surface properties of the growth substrate are another factor affecting the precipitation of abiogenic CaCO 3 [24][25][26][27]. The surface charge of an inorganic substrate relates to the growth of CaCO 3 polymorphs. Metastable CaCO 3 phases (e.g., aragonite or vaterite) are favored by negatively charged substrates, whereas positively charged substrates contribute to the formation of the stable CaCO 3 phase (i.e., calcite) [28]. In addition, the lattice mismatch between substrates and surface precipitates is also vital in the growth of abiogenic CaCO 3 [29]. Based on classical nucleation theory, the energy barrier (∆G C ) to form a critical nucleus is positively correlated with γ 3 (∆Gc ∝ γ 3 /(−RTlnσ) 2 , where γ denotes the interfacial free energy, σ the supersaturation, and R and T represent gas constant and Kelvin temperature, respectively [30]. The precipitation rate is subsequently controlled by the lattice mismatch, because a larger lattice mismatch leads to a higher ∆G C [31].
Abiogenic CaCO 3 cements tend to overgrow on shoal-shallow calcium carbonate-rich sediments [32]. Different types of CaCO 3 inorganic substrate should be taken into consideration, to study the precipitation of abiogenic CaCO 3 cements. The substrate effects of CaCO 3 have been found to control the mineralogical properties of carbonate cements [7,13], and plenty of works have also been conducted on the heterogeneous nucleation and growth of CaCO 3 on calcite and aragonite seeds [33][34][35]. However, little attention has been paid to the internal mechanism and crystallization pathways of CaCO 3 precipitates in these systems. Although the precipitation of CaCO 3 on calcite (104) surfaces has been well documented [3,23,36,37], it may not represent the formation of abiogenic aragonite cements, due to the distinct surface properties of the two polymorphs [38]. Additionally, there are some controversies regarding aragonite nucleation and growth in Mg 2+ -bearing solutions. Mg 2+ -induced inhibition was reported in both nucleation and growth [39], whereas either inhibition [33] or non-inhibition [40] was merely found in the growth processes. Therefore, the evolution and mechanism of heterogeneous nucleation and growth of both calcite and aragonite at the nanoscale on CaCO 3 substrates in Mg 2+ -bearing solutions with different concentrations should be investigated, to deeply understand the formation of abiogenic carbonate cements in the ocean.
In this study, we used AFM, SEM, XRD and Raman spectroscopy to investigate how calcite (104) and aragonite (110) substrates affect the heterogeneous nucleation and growth of CaCO 3 under different solution supersaturations at pH = 8.0 ± 0.1. Since Mg 2+ is a chief modifier for CaCO 3 precipitation in sedimentary environments, we altered the Mg 2+ /Ca 2+ ratios from 0 to 3 in the growth experiment. We observed that both the mineral phases and crystallization pathways of CaCO 3 precipitated on different types of CaCO 3 substrates are disparate. In addition, the precipitation rates of CaCO 3 generated on calcite and aragonite substrates are negatively correlated to Mg 2+ /Ca 2+ ratios. These findings reveal the different crystallization processes of CaCO 3 grown on calcite (104) and aragonite (110) surfaces, providing a new perspective on the origin of abiogenic CaCO 3 cements in the ocean.
Sample Preparation
An Iceland spar (from Guizhou, China) was cleaned using ethanol and deionized water (resistivity = 18.2 MΩ cm −1 ). Then, fresh calcite (104) surfaces (2 × 2 × 1 mm 3 ) were prepared by scalpel cutting along the cleavage plane. Additionally, a diamond wire cutter (STX-202A, Kejing Auto-instrument Co., Ltd., Shenyang, China) was used to slice aragonite (110) surfaces (2 × 2 × 1 mm 3 ) from single crystals (from Morocco). To obtain polished aragonite (110) surfaces, a Leica SP1600 (Leica, Wetzlar, Germany) saw microtome was employed. Since polished aragonite (110) surfaces would enhance the growth rate of CaCO 3 by the increase in surface defects (which will reduce the energy barrier (∆G C ) to form a critical nucleus, and the number of nanoparticles will increase after increasing surface defects, thus causing the increase in growth rate), we used them in the next experiments to elevate experimental efficiency. Next, 15 mL acetone was used to wash these polished (110) surfaces through ultrasonic bathing for 1 h, and then these (110) surfaces were taken out from acetone and dried with pure N 2 . Afterwards, these prepared calcite (104) and aragonite (110) samples were glued onto steel pucks with wax for AFM experiments.
Solution Preparation
High-purity CaCl 2 , MgCl 2 , NaHCO 3 , NaCl (purchased from Aldrich and Macklin) and deionized water were used to prepare solutions with different Mg 2+ /Ca 2+ ratios (0 and 3) and saturation indexes, with respect to calcite and aragonite (i.e., SI calcite and SI aragonite ), based on Visual MINTEQ [41] calculations (Table 1). Since the SI calcite is 0.13 greater than SI aragonite (SI calcite = SI aragonite + 0.13) in the same solution, we only used the former to label saturation states of solutions. Solution chemistry in the system is constrained by the amounts of [Mg 2+ ], [Ca 2+ ], CO 2 , pH, and ionic strength (IS). The pH was maintained at 8.0 ± 0.1, and the IS was controlled up to 0.10 M (to avoid the coverage of substrates by too much NaCl, we did not used the salinity of seawater). All solutions were freshly prepared before the growth experiment, to prevent the precipitation of supersaturated solutions. Before introducing growth solutions, the solution with SI calcite = 0.07 was injected into the fluid cell for at least one hour to make the surface of samples flatter. Based on solution chemistry, equilibriums in chemical reactions were taken into consideration as the following.
where (Mg 2+ ) represents the activity of Mg 2+ , and CO 2 is the total concentration of carbonate species. Equations (5) and (6)
Growth Experiments Measured by In Situ AFM
A Bruker Nanoscope IV Scanning Probe Microscope equipped with a flow-through fluid cell and Si 3 N 4 cantilevers was used to collect AFM images. The operational process was similar to that in our previous study [42]. After flatjaw pinchcocks were turned on, a flowing system formed between the O-ring installed on the fluid cell and sample. The flow rate of solutions was performed at 550 uL/min throughout the in situ growth experiment.
SEM, XRD and Raman Spectroscopy Analysis
SU8010 cold field emission SEM (FESEM, Hitachi, Japan), equipped with energy dispersive X-ray spectroscopy (EDS) (AMETEK-EDAX, Mahwah, NJ, USA), was utilized to observe surface morphologies and chemical compositions after growth under different conditions. In addition, mineral phases precipitated on the surfaces of substrates were analyzed by a Rigaku DMAX Rapid II X-ray diffraction system (Rigaku, Tokyo, Japan) (MoKα radiation), at 50 mV and 30 mA. A Micro-confocal Raman spectrometer (in Via, Renishaw-RM 2000 (Renishaw, Gloucestershire, UK)) at 532 nm laser extinction was also used to identify precipitates formed on these substrates.
The Heterogeneous Nucleation and Growth of CaCO 3 on Calcite (104) Surfaces
Under Mg 2+ /Ca 2+ = 0 conditions, spiral growth of calcite, with the monomolecular of growth hillock around 3.1 Å, is observed as the solution system reaches a steady-state (Figure S1a-c and Figure 2a). The morphology of growth hillock is a rhombus which consists of two acute and obtuse angles ( Figure S1d). The terrace width (λ) of steps is negatively correlated to the increase in solution SI calcite (Figure S1a-c) [3]. When the SI calcite of the solutions is equal to 0.50, 0.83 and 1.05, the λ values of obtuse steps are 175.43, 171.25 and 123.60 nm, respectively, while those of acute steps are 77.33, 74.25 and 62.80 nm, respectively. In addition, the calcite (104) surface after growing in the solution at SI calcite = 1.05 and Mg 2+ /Ca 2+ = 0 was characterized by SEM and EDS. We observed that these growth hillocks are covered by a layer of CaCO 3 crust, which is smooth and evenly distributed (Figure S1e-g). (Figure 2e). Except for the needlelike NaCl crystals produced on the substrate, the other precipitates are Ca (1-x) Mg x CO 3 with segmented surfaces. We also used EDS to semi-quantify the Mg 2+ contents in these overgrowths precipitated on calcite (104) substrates in the solution at Mg 2+ /Ca 2+ =3 and SI calcite = 1.05, discovering that it ranges from 24.68 at% (P2) to 43.20 at% (P1). Figure 4). Increasing SI calcite to 0.83, the aggregation rate of nanoparticles is enhanced ( Figure 5). Crusts composed of nanoparticles cover the substrate after 5 min (Figure 5c), and these crusts rapidly evolve to elongated columnar crystals (Figure 5c-h). Further increasing SI calcite to 1.05, the phenomena are generally similar to what is observed in the solution at SI calcite = 0.83, except that crusts only sporadically cover the substrate ( Figure S2). SEM was used to observe aragonite (110) surfaces after the heterogeneous nucleation and growth of CaCO 3 in solutions with different SI calcite . The size and morphology of these crystals are identical, while the orientation of crystals in the lower and upper layers is disparate. Crystals in the lower layer have their c axes parallel to aragonite (110) substrates, whereas those in the upper layer have their c axes perpendicular to the substrates ( Figure 6). The surfaces of these crystals are generally smooth, only with local cracks and holes (as indexed by the red arrows in Figure 6f,h). In addition, nanoparticles are observed from the broken parts and a few crystal surfaces (as indexed by the red arrows in Figure 6e,f). The heterogeneous nucleation and growth of CaCO 3 on polished aragonite (110) substrates were investigated in solutions under Mg 2+ /Ca 2+ = 3. The aggregation process of nanoparticles, which is similar to that without Mg 2+ addition, can be captured by in situ AFM (Figure 7). Meanwhile, the interfaces among these nanoparticles are extremely obvious, even after 180 min (as indexed by the red arrows in Figure 7h). Increasing SI calcite to 0.83 and eventually to 1.05, the aggregation rate of nanoparticles soars, while their solid-solid interfaces are still preserved (Figure 8 and Figure S3). The surfaces after growing in supersaturated solutions with Mg 2+ /Ca 2+ = 3 were analyzed by SEM. Although some rhombohedral particles form on these substrates, tower-like and elongated columnar crystals are major precipitates (Figure 9a,c,e). Additionally, some crystals with smooth surfaces generate around these tower-like crystals (Figure 9c,f). The surface roughness of these tower-like crystals, positively correlated with the increase in SI calcite , is greater than that of crystals precipitated in Mg 2+ -free solutions (Figure 9b,d,f).
Mineral Phases of CaCO 3 Dependent on Calcite (104) and Aragonite (110) Substrates
Mineral phases of CaCO 3 precipitates have been identified by XRD and Raman analyses. The XRD results show that calcite and aragonite tend to precipitate on calcite (104) and aragonite (110) substrates, respectively (Figure 10a). Under our experimental conditions, the addition of Mg 2+ does not induce the formation of protodolomite, dolomite, hydromagnesite (e.g., nesquehonite) or magnesite (Figure 10b). In addition, the relative intensity of XRD reflections is distinct after CaCO 3 grows on calcite (104) and aragonite (110) substrates in different solutions. The Raman results demonstrate that the mineral phase of aragonite formed on aragonite (110) substrates will not be affected by solution SI calcite (Figure 11a). In addition, we cannot observe the obvious change in mineral phase under Mg 2+ /Ca 2+ = 3 compared with that under Mg 2+ /Ca 2+ = 0 (Figure 11b).
Different Crystallization Pathways of Heterogeneous Nucleation and Growth of CaCO 3 on Calcite (104) and Aragonite (110) Surfaces
Classical and nonclassical nucleation and growth theories play critical roles in understanding the formation of minerals. The former stresses the whole processes, including the diffusion of monomers from solution to solid surfaces, the transformation of monomers from surfaces to active sites, and eventually the growth of crystals [43]. The latter emphasizes the formation and aggregation of precursors in the solution [44]. Based on our results, CaCO 3 crystals formed on calcite (104) surfaces crystallize by spiral growth, and these CaCO 3 crystals are not typical rhombohedral calcite particles. In contrast, nanoparticles are initially generated on aragonite (110) surfaces, and then followed by OA and finally form elongated columnar crystals. These crystals precipitated on aragonite (110) surfaces are divided into two layers. The crystals in the first layer are arranged with their c axes parallel to the substrate, while those in the second layer are perpendicular to the substrate. These two different orientations of crystals can be ascribed to the limitation of growth space [45]. The growth of crystals in the second layer is constrained by gaps between crystals formed in the first layer, leading to vertical extension. The SEM results show that the crystals are composed of nanoparticles, which is evidence of particle aggregation. Therefore, we proposed that the heterogeneous nucleation and growth of CaCO 3 on calcite (104) surfaces are in accordance with classical nucleation and growth theory (i.e., monomer-by-monomer addition), while those on aragonite (110) surfaces conform to nonclassical nucleation and growth theory (i.e., precursor attachment).
The results of XRD and Raman analyses demonstrate that the CaCO 3 precipitated on calcite (104) surfaces is calcite, whereas that formed on aragonite (110) surfaces is aragonite, which is attributed to the complete lattice match between minerals with identical phases [31,46]. The relative intensity of XRD reflections, as well as Raman frequencies, of the same type of substrate grown in different solutions is distinct, which is ascribed to different orientations of CaCO 3 precipitates on local surfaces. Additionally, the surface roughness of aragonite (110) is greater than that of the calcite (104) surfaces in this study, causing the drop in local supersaturation on aragonite (110) surfaces and the eventual formation of sporadic rhombohedral CaCO 3 (probably calcite) (Figure 9).
The crystallization pathways of calcite and aragonite on calcite (104) and aragonite (110) surfaces in this study are similar to those precipitated in homogeneous systems [47][48][49]. Hence, the differences in crystallization pathways between calcite and aragonite may be decided by their own crystalline behaviors. A previous study proposed that the binding force of rhombic crystals in 3D directions is identical, while that of prismatic crystals along the c axes is greater than that in both the a and b directions [50], which perfectly explains our AFM observations that aggregates of nanoparticles formed on aragonite (110) surfaces extend along the c axes.
Effects of Mg 2+ and Saturation States on the Heterogeneous Nucleation and Growth of CaCO 3 on Calcite (104) and Aragonite (110) Surfaces
Both Mg 2+ /Ca 2+ ratios and solution supersaturations impact CaCO 3 growth. On the one hand, when Mg 2+ /Ca 2+ ratios increase from 0 to 3, the spiral growth of calcite is inhibited. Sethmann et al. (2010) observed the segmentation of Mg-calcite thin films on calcite (104) surfaces, and they discovered that the premise of ridge formation is to break through the critical thickness (about 12 nm), which signifies the release of compress stress when the thickness exceeds the critical value [51]. Due to the ionic radius of Mg 2+ (0.86 Å) being about 30% smaller than that of Ca 2+ (1.14 Å) [52], the Mg 2+ substitution of Ca 2+ will increase the compress stress. This stress will be released through layer buckling and breaking along the [ [37]. On the other hand, Mg 2+ also decreases the crystallization rate of aragonite. The magnitude of the kinetic coefficient (β) is controlled by the density of kink sites along the step (n k ) and the net probability of attachment to a site ((exp(E k /kT), where k and T represent the Boltzmann constant and Kelvin temperature, respectively, and E k denotes an effective barriers to attachment at a kink). Namely, β~n k exp(-E k /kT) [53]. As many nanoparticles form on aragonite (110) surfaces, the n k reaches the maximum value. Therefore, the exp(E k /kT), instead of the n k , is the major factor affecting β. Since the exp(E k /kT) is related to the desolvation of cations, which is the rate-limiting step during CaCO 3 growth, the dynamics of particle aggregation would be restricted when solvation layers are strongly bounded [44]. The desolvation of Mg 2+ is more difficult than that of Ca 2+ , resulting in a slower crystallization rate of aragonite in Mg 2+ -bearing solutions. As a result, the OA of nanoparticles aggregates along the [001] direction is much easier to be observed.
As supersaturations of solutions are positively correlated with growth rates of CaCO 3 [2], the aggregation rate of nanoparticles is obviously accelerated in solutions with higher supersaturation. However, the solid-solid interfaces among nanoparticles are still obvious under these conditions ( Figure 8 and Figure S3), indicating that it is difficult to eliminate the inhibition of Mg 2+ on the crystalline rate of CaCO 3 , even in highly supersaturated solutions (from 0.50 to 1.05).
Comparison with Previous Studies
The IS and Mg 2+ /Ca 2+ ratios in our study are different with those in seawater and some previous studies; nevertheless, our results can be reasonably inferred to CaCO 3 precipitation in the ocean. Although the IS in our study (0.1 M) is much lower than that in seawater (IS = 0.7 M), Zhong and Mucci (1989) discovered that salinity variations alone will not significantly affect both the precipitation rates and overgrowth compositions of calcite and aragonite [35]. In addition, the critical Mg 2+ /Ca 2+ ratios favoring calcite or aragonite ranges from 1.5 to 2 [30,54], and when the Mg 2+ /Ca 2+ ratios in solutions are less than 7.5, the distribution coefficients of Mg 2+ , both in mineral overgrowths and on their adsorbed layers, are positively correlated with Mg 2+ /Ca 2+ ratios of solutions [34]. Therefore, the experimental consequences obtained at Mg 2+ /Ca 2+ = 3 in this study can represent those in real seawater at Mg 2+ /Ca 2+ = 5.
Calcite Precipitation
The growth features of calcite in Mg 2+ -free solutions (Figure S1a-c) are used for comparison with previous studies. We discovered that the differences in terrace width between obtuse and acute steps (∆λ) are 98.10, 97.00 and 60.80 nm at SI calcite = 0.50, 0.83 and 1.05, respectively. Since the λ of steps is negatively related to the step velocity [3], the difference in step velocity between obtuse and acute steps is positively correlated with solution SI calcite at a Ca 2+ /a CO 2− 3 = 10, which is consistent with previous experimental and fitting data [16,17]. Additionally, when we convert SI calcite = 0.50 to supersaturation σ (SI calcite = log(exp(σ))), the σ value is equal to 1.15, and we found that the λ values of obtuse and acute steps under our experiment (175.43 and 77.33 nm) are slightly higher than those (152.67 and 75.00 nm) reported by Wasylenki et al. (2005), which may be caused by the minor differences of a Ca 2+ /a CO 2− 3 [55].
The inhibiting effect of Mg 2+ on the spreading rate of the calcite growth hillock in this study is similar to that in previous studies. Two basic impurity models for the inhibiting effect of Mg 2+ on the growth of calcite (104) surfaces are widely accepted. One is step pinning [56,57], while the other is incorporation [57,58]. In the former model, the adsorption of Mg 2+ on step sites is a reversible process, while Mg 2+ incorporated into the calcite lattice cannot be reversed in the latter. Based on the inhibiting effect of a newly formed monolayer on the growth of subsequent monolayers, Astilleros et al. (2010) proposed a new solid solution-aqueous solution model, providing an explanation for the formation of "dead zones" in the step pinning model [59]. According to our AFM results (Figure 2b-d), these three models probably coexist. The Mg 2+ ions can pin at non-specific sites, while only incorporated at specific sites. Additionally, the surface properties of the first monolayer formed on the calcite (104) surface would be altered upon Mg 2+ incorporation, causing the growth inhibition of subsequent monolayers. Some researchers have declared that the incorporation of Mg 2+ into calcite leads to the increase in mineral solubility, which contributes to aragonite precipitation [33]. However, based on the thermodynamic simulation, the increase of surface energy by incorporating Mg 2+ into calcite is the main reason for the rise of the energy barrier (∆G C ) to form a critical nucleus of calcite [30]. Similarly, the spreading rates of calcite steps are negatively correlated with the free energy (∆g ± ) (∆g ± = -(L/b)∆µ + 2c<γ> ± , where L, b, ∆µ, c and <γ> ± denote step length, 6.4 Å intermolecular distance along the steps, chemical potential, 3.1 Å distance between rows and step edge free energies along the + and − steps, respectively.) [36]. When the ∆g ± increases with the incorporation of Mg 2+ into the steps, the compress stress increases and the spreading rates are inhibited.
The overgrowth compositions of calcite in Mg 2+ -bearing solutions are also compared. Since the concentration of Mg 2+ in calcite overgrowth is independent of precipitation rates [34], the SI calcite of solutions will not change Mg 2+ contents in overgrowth. The contents of Mg 2+ semi-quantified in our study (24.68-43.20 at%) are in accordance with Hong et al. (2016)'s calculation that the maximal Mg 2+ content which a calcite lattice can sustain before plastic deformation is around 40 at% [37]. However, our value is greater than that obtained in a previous study using the constant disequilibrium technique (crystal-seed systems) (7-10 at%), which is attributed to the effects of the surface or adsorbed layer. As the Mg 2+ /Ca 2+ ratios in the surface or adsorbed layer are higher than those in the overgrowth layer, Mg 2+ contents detected in precipitates are greater when the thickness of the overgrowth layer is extremely thin [34].
Aragonite Precipitation
The influence of Mg 2+ on aragonite precipitation is a controversial topic [33,39,40], which might be partly ascribed to the limitation of research techniques in the late 20th century. These studies determined the precipitation rate by measuring weight change per unit of time, neglecting the reaction pathways. Fortunately, the in situ AFM has been successfully employed to monitor chemical reaction pathways in the past decades [3,9,16,17,20,36,37,53,55]. Our in situ AFM study discovered that the interfaces among nanoparticles formed on aragonite (110) substrates in solutions with Mg 2+ /Ca 2+ = 3 are more obvious than those in Mg 2+ -free solutions, demonstrating that the precipitation rate of aragonite is actually inhibited by Mg 2+ through retarding the dehydration rate of precursors. However, since these previous experiments were carried out in extremely high supersaturated solutions for several hours, the inhibition of Mg 2+ on the precipitation rate of aragonite is not easily detected on the macro-scale. According to our investigation, the aragonite precipitated on aragonite (110) substrates undergoes the oriented attachment of nanoparticles, and thus impacts of Mg 2+ on these nanoparticles should not be ignored.
The stability of Mg 2+ in aragonite structure directly determines overgrowth compositions. Since the coordination number of Ca 2+ in aragonite structure is nine, Mg 2+ , commonly occurring in six and eight coordination, cannot be preserved in aragonite for a long time. Nevertheless, a small amount of Mg 2+ can be conserved in aragonite precursors (e.g., ACC and monoclinic aragonite, i.e., mAra), and the contents of Mg 2+ incorporated in structures are gradually released with the transformation of mineral phases [53,60]. This prolonged precipitation provides suitable conditions for releasing Mg 2+ from the particle precursors back into the solution, leading to low Mg 2+ contents in the ultimate aragonite crystals (<10 at%, which is the maximum value for mAra).
Implications for the Formation of Abiogenic Carbonate Cements
Our experimental results are consistent with previous inhibition models of Mg 2+ on the growth of calcite (104) surfaces. We discovered that the modification mechanisms of Mg 2+ on the growth of calcite (104) and aragonite (110) substrates act on the step sites and nanoparticles, respectively, providing a new perspective on the precipitation of abiogenic CaCO 3 in the ocean.
It was discovered that the ACC preferentially deposits at the edges of nacreous tablets in the early stage of precipitation, and then transforms into tower-like aragonite with time [61]. This precipitation behavior of CaCO 3 is similar to the template effect of soluble macromolecules (SM) on CaCO 3 aggregation [62], which is ascribed to the change in interfacial free energy by organic templates. The regulation of organic templates on nucleation widely exists in the formation of CaCO 3 in vivo or on organic membranes [63,64].
In shallow marine sedimentary environments, the precipitation of CaCO 3 can be divided into three stages. In the first stage, phytoplankton and zooplankton produce biogenic CaCO 3 shells through photosynthesis and metabolism [63], respectively, belonging to direct biological regulation. When these plankton die, the precipitation of CaCO 3 proceeds into the second stage, in which organic membranes on these shells chiefly control the formation of CaCO 3 cements. After these CaCO 3 cements cover the majority of organic membranes, the precipitation of CaCO 3 enters the third stage, in which abiogenic CaCO 3 cements directly overgrow on the CaCO 3 substrate.
The findings presented in this study mainly contribute to the CaCO 3 precipitation in the third stage, without direct or indirect biological regulation. We observed that elongated columnar aragonite in two layers with different orientations forms on aragonite (110) surfaces, while smooth layered calcite precipitates form on calcite (104) surfaces. That is to say, the heterogeneous nucleation and growth of CaCO 3 on aragonite are an important origin of acicular and fibrous aragonite cements, whereas those on calcite are conducive to the formation of the secondary enlargement of calcite cements. Additionally, Mg 2+ will inhibit the step growth of calcite and the dehydration rate of aragonite precursors, without changing the crystallization pathways of CaCO 3 on these two substrates, suggesting that the precipitation rate of CaCO 3 cements is subject to Mg 2+ /Ca 2+ ratios in the ocean. These discussions indicate that the distribution of different types of CaCO 3 in shallow-water sediments and Mg 2+ /Ca 2+ ratios in seawater chiefly control the formation and subsequent input flux of abiogenic CaCO 3 cements.
Conclusions
AFM, SEM, XRD and Raman analyses were utilized to investigate the heterogeneous nucleation and growth of CaCO 3 on calcite (104) and aragonite (110) surfaces in abiotic environments, leading to the following conclusions: (1) smooth layered calcite forms on calcite (104) surfaces; (2) elongated columnar aragonite generated by OA of nanoparticles, aggregating along the [001] directions, precipitates on aragonite (110) surfaces; (3) Mg 2+ inhibits the growth of aragonite and calcite formed on aragonite (110) and calcite (104) surfaces, by retarding the dehydration of precursors and blocking step growth, respectively, without affecting the crystallization pathways of CaCO 3 on these two substrates. The aforementioned conclusions suggest that different types of CaCO 3 in shallow-water sediments determine the mineralogy and morphology of abiotic CaCO 3 cements, and the lack of abiogenic CaCO 3 cements can be partly ascribed to the retardation of Mg 2+ on the crystallization rates of both calcite and aragonite. | 7,298.2 | 2020-03-25T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Non-Contact Detection of Vital Signs Based on Improved Adaptive EEMD Algorithm (July 2022)
Non-contact vital sign detection technology has brought a more comfortable experience to the detection process of human respiratory and heartbeat signals. Ensemble empirical mode decomposition (EEMD) is a noise-assisted adaptive data analysis method which can be used to decompose the echo data of frequency modulated continuous wave (FMCW) radar and extract the heartbeat and respiratory signals. The key of EEMD is to add Gaussian white noise into the signal to overcome the mode aliasing problem caused by original empirical mode decomposition (EMD). Based on the characteristics of clutter and noise distribution in public places, this paper proposed a static clutter filtering method for eliminating ambient clutter and an improved EEMD method based on stable alpha noise distribution. The symmetrical alpha stable distribution is used to replace Gaussian distribution, and the improved EEMD is used for the separation of respiratory and heartbeat signals. The experimental results show that the static clutter filtering technology can effectively filter the surrounding static clutter and highlight the periodic moving targets. Within the detection range of 0.5 m~2.5 m, the improved EEMD method can better distinguish the heartbeat, respiration, and their harmonics, and accurately estimate the heart rate.
Introduction
The remote monitoring of human vital signs has aroused great interest of researchers in various fields, such as medical monitoring, anti-terrorism action, as well as rescue action and security. Vital signs mainly include heart rate, respiratory rate. It can be used to assist doctors in treatment, daily family health monitoring, morbid respiratory pattern monitoring and sleep quality evaluation [1][2][3]. It is not easily affected by temperature, humidity, working environment and other factors, which not only provides a non-invasive and convenient means of detecting vital signs, but also makes special scene applications possible. For example, vital sign information detection is carried out for critical patients such as large-area burn and trauma, infectious patients, and mental patients [4,5], in rescue, counter-terrorism response, emergency searches [6,7], in monitoring of infants and detection of sleep disorders in adults [8].
At present, there are three main types of radar systems used for vital signs detection, namely pulse radar (UWB radar) [9], CW doppler radar [10][11][12], and frequency modulated continuous wave (FMCW) radar [7,[13][14][15][16][17][18]. CW doppler radar is not good at distinguishing clutter from multiple targets because of the lack of modulation spectrum information. Therefore, the vital sign monitoring method based on CW Doppler radar only relies on Doppler information to detect relative motion [1,2,[19][20][21]. Moreover, the limitations of CW in detecting distance can be a disadvantage of using the system [22]; the extracted Doppler information may contain micro-motion interference from the human body and the The best result is achieved with the frontal position at 1 m distance with a median relative error of 8.09% FMCW [4] 114-130 ECG heartbeat waveforms, HRV, and respiratory and heart rate two-step FFT EMD Simultaneously vital sign detection of multiple subjects and analysis of coupling between breathing and heartbeat. The detection accuracy is 2 um. There are some problems such as mode mixing and end effect FMCW [22] 24-24.05 belt sensor Respiratory and position two-step FFT ROl determination and weighting process to minimize the clutter from the debris And other objects under debris the maximum depth of the radar system is 3.28 m behind the wall material.
FMCW [5] The detection range is 0.5-2.5 m, The error with contactor MI3 is less than 4 bpm, and the heartbeat accuracy is more than 95% Alizadeh et al. [15] applied 77 GHz millimeter-wave radar to detect the components of vital signals by extracting the phase of intermediate frequency (IF) signals. The correlation between the respiratory and heartbeat rates estimated by the reference sensor and radar is 94% and 80%, respectively. The proposed MTI-autocorrelation-EEMD can reconstruct the respiratory and heartbeat signals well, but the detection range applied in the experiment is short [9]. Xu Zhengwu et al. [27] studied the detection of human vital characteristics based on 220 GHz solid-state source terahertz biologic radar system, using Empirical Mode Decomposition. However, as mentioned in [28,29], EMD has problems of modal mixing and endpoint effect, which cannot separate respiratory and heartbeat signals well, which limits its application in practice. A joint spectral estimation method based on adaptive recognition embedded Ensemble Empirical Mode Decomposition (EEMD) is proposed for heart rate measurement [30]. Experimental results show that in the detection range of 1-2.5 m, the method can better detect distracted jumping and breathing, and the root mean square error is less than 6 BPM.
Several issues need to be considered for vital sign detection based on the EEMD method, such as (i) the need for a high-resolution feature to detect small displacements on the chest or abdominal wall. Conventional viewpoints in radar systems require a wide range of bandwidth signals. (ii) In the traditional EEMD algorithm, there are respiratory harmonics close to the heartbeat frequency in the decomposed heartbeat component, which hinders the correct assessment of heart rate. (iii) The clutter contributed by the non-periodic moving obstacles around the target, especially those close to the target, will make the system output wrong phase information. A way is needed to overcome the aforementioned problems. Thus, the robustness and accuracy of FMCW radar system in respiratory vital sign detection application can be enhanced. The problem to be solved in this paper is to suppress aperiodic motion information, overcome the influence of respiratory harmonics, and improve detection efficiency.
In this paper, an integrated empirical mode decomposition (EEMD) algorithm based on noise distribution features and a static clutter filtering method are proposed. The static clutter filtering method filters the information of static and aperiodic moving objects in the environment. Then, the phase accumulation method is used to enhance the robustness of phase extraction. In addition, the DACM algorithm is used to identify the phase signal of each chirp to solve the phase jump problem. Finally, the improved EEMD eliminates the influence of respiratory harmonics and improves the detection efficiency. This paper is organized as follows: Section 2 explains the principle of FMCW radar vital sign measurement. In Section 3, the implementation steps of the proposed algorithm are introduced. The experimental results and conclusions are given in Sections 4 and 5, respectively.
Principle of FMCW Radar
FMCW radar's vital signs measurement is based on the phase term of reflected signals from the human body. Figure 1 shows a simplified block diagram of a typical FMCW radar system. The FMCW radar system consists of transmitting (TX), RF synthesizer, receiving (RX), clock, analog-to-digital converter (ADC), digital circuit, single chip microcomputer (MCU) and digital signal processor (DSP). The FMCW radar periodically transmits a chirped signal generated by a synthesizer with a frequency that increases linearly with time through a transmitting antenna, each transmitting chain has independent phase and amplitude control. The RF synthesizer creates the desired frequency, like a chirped signal that changes over time. The signal received by the receiving antenna is amplified by a low noise amplifier (LNA) and correlated with the local chirp of the mixer. Then, after low pass (LP) filtering, a sinusoidal signal containing the instantaneous frequency difference between transmitting and receiving chirps is obtained, which is called the beat signal. Finally, an analog-to-digital converter (ADC) samples the beat signal for subsequent signal processing. With this instantaneous signal, the instantaneous frequency difference can be translated into the instantaneous distance between the radar and the target.
The transmitted signal is usually a sawtooth waveform or a triangle waveform. The sawtooth waveform is adopted in this paper, and the specific mode is shown in Figure 2.
The transmitted FMCW signal can be expressed as [5]: where f c is the starting frequency of the chirp signal, B is bandwidth, A is the amplitude of the transmitted signal, θ(t) is phase noise, T is the width of chirp signal pulse, B T is the slope of the chirp signal, which represents the changing rate of the frequency. by a low noise amplifier (LNA) and correlated with the local chirp of the mixer. Then, after low pass (LP) filtering, a sinusoidal signal containing the instantaneous frequency difference between transmitting and receiving chirps is obtained, which is called the beat signal. Finally, an analog-to-digital converter (ADC) samples the beat signal for subsequent signal processing. With this instantaneous signal, the instantaneous frequency difference can be translated into the instantaneous distance between the radar and the target. The transmitted signal is usually a sawtooth waveform or a triangle waveform. The sawtooth waveform is adopted in this paper, and the specific mode is shown in Figure 2. The transmitted FMCW signal can be expressed as [5]: where is the starting frequency of the chirp signal, is bandwidth, A is the amplitude of the transmitted signal, ( ) is phase noise, is the width of chirp signal pulse, is the slope of the chirp signal, which represents the changing rate of the frequency.
Suppose D(t) is the motion displacement of the chest and R is the distance from the radar sensor to the human body. The distance from the chest to the radar is r (t) = D(t) + quent signal processing. With this instantaneous signal, the instantaneous frequency difference can be translated into the instantaneous distance between the radar and the target. The transmitted signal is usually a sawtooth waveform or a triangle waveform. The sawtooth waveform is adopted in this paper, and the specific mode is shown in Figure 2. The transmitted FMCW signal can be expressed as [5]: where is the starting frequency of the chirp signal, is bandwidth, A is the amplitude of the transmitted signal, ( ) is phase noise, is the width of chirp signal pulse, is the slope of the chirp signal, which represents the changing rate of the frequency.
Suppose D(t) is the motion displacement of the chest and R is the distance from the radar sensor to the human body. The distance from the chest to the radar is r (t) = D(t) + Suppose D(t) is the motion displacement of the chest and R is the distance from the radar sensor to the human body. The distance from the chest to the radar is r (t) = D(t) + R, and the delay is t d = 2 r(t)/c, where c is the speed of light. The received signal can be expressed as: The RX and TX signals are mixed by two orthogonal I/Q channels, then, through low-pass filtering, an intermediate frequency (IF) signal is obtained. The IF signal contains a single tone signal with a constant frequency. Where The approximate equation in (5) is obtained by ignoring the phase square term, which is about 10 −6 (when B T is about 10 12 Hz/s and t d is about 1 ns). Residual noise phase as a delta ∆θ(t) = θ(t) − θ(t − t d ) can be ignored in the approximation for the third time. In general, t = 1 µs, the thoracic displacement R (t) is mm grade, and can be ignored relative to R. ψ(t) varies with the change of D(t) in the range of λ at a fixed distance R.
After the IF signal is obtained, the fast Fourier transform (FFT) is applied to the IF beat signal. Spectral peaks correspond to different distances between subjects. The range FFT of each chirp signal represents a specific distance at different time, and the range FFT at different time represents the variation of ψ(t) in (6) with time. In order to measure the change of vital signal with time, multiple chirp signals are sent in the detection time range, which is equivalent to sampling D(t). Assuming that D(t) is sampled every Tr, it is called a frame period. Therefore, the sign information can be obtained by the phase extraction of distance FFT at continuous time.
Proposed Method
The vital signal measurement process of FMCW radar can be divided into four stages: static clutter filtering, range FFT, extraction and separation of vital signal, estimation of respiration and heartbeat rates. Static clutter filtering can filter out the non-periodic movement of the target around the test target, range FFT can detect the position of the test target. Vital signal extraction is limited to the range of this position, and respiratory and heartbeat signals can be recovered by extracting phase information in the range FFT. Finally, the respiratory rate and heart rate were obtained by frequency estimation of the recovered respiratory and heartbeat signals. In this section, we will illustrate the proposed approach from these four stages and discuss its advantages.
The main processing process is shown in Figure 3. Firstly, static clutter filtering can filter out non-periodic moving targets around the test target. Then, range FFT can detect the position of the test target and limit vital signal extraction to the range of this position. After that, the respiratory and heartbeat signal waveforms were established by extracting continuous time phase information. Finally, the respiratory rate and heart rate were obtained by frequency estimation of the recovered respiratory and heartbeat signals. In this section, we will illustrate the proposed approach from these four stages and discuss its advantages.
A. Static Clutter Filtering
FMCW radars emit chirped signals, and all targets in the radar radi the radar echoes. Although FMCW radar has range detection capabilit guish different targets within the same range cell. When there are statio moving targets near the monitored vibration target, the IF beat signal c signal reflected by these targets. In particular, the non-periodic moving to the target may lead to errors in effective phase extraction, thus affe of the final test results. According to the principle of FMCW radar, the s the target has the same chirp signal distance information for each fr shown in Equation (7), firstly, the IF signal is accumulated, and then it calculated (frames signal cycle time is a vital of integer times). The vital signal so it has accumulative results of 0. The static target information c signal can be filtered by subtracting the average value of the intermediat
A. Static Clutter Filtering
FMCW radars emit chirped signals, and all targets in the radar radiation space scatter the radar echoes. Although FMCW radar has range detection capability, it cannot distinguish different targets within the same range cell. When there are stationary or a periodic moving targets near the monitored vibration target, the IF beat signal contains the clutter signal reflected by these targets. In particular, the non-periodic moving target that is close to the target may lead to errors in effective phase extraction, thus affecting the accuracy of the final test results. According to the principle of FMCW radar, the static target around the target has the same chirp signal distance information for each frame; therefore, as shown in Equation (7), firstly, the IF signal is accumulated, and then its average value is calculated (frames signal cycle time is a vital of integer times). The vital signal is a periodic signal so it has accumulative results of 0. The static target information contained in the IF signal can be filtered by subtracting the average value of the intermediate frequency signal in each frame, and the power of the non-periodic moving target around the target will also be greatly reduced, so as to achieve the filtering of the static target, reduce the power of the non-periodic moving target, and improve the SNR of the periodic vital signal.
y AVG is the average value of N frame echo data, L is the number of frames.
Phase Extraction
In fact, the phase changes measured by the IF signal are very slow, meaning that there is a long system idle time between two consecutive chirps. In non-contact vital sign monitoring applications, the frequency range of interest is about 0.1 Hz to 2 Hz (6 BPM to 120 BPM). The sampling rate of conventional FMCW radar can reach several megahertz, and the sampling length is usually on the order of several hundred. Breathing and chest displacement movements are approximately 12 mm at most, several times the FMCW radar wavelength (4 mm at 77 GHz). If we use the traditional arctangent demodulation technique to extract the phase values, the extracted phase values will exceed the phase range (−π/2, π/2), this leads to phase discontinuities, phase blurring, and thus phase hopping. The two I/Q demodulation signals can be expressed as [5][6][7]: n is discrete sampling points, T m is interval time of every time sampling, φ is signal displacement information, DC I , DC Q is the system offset, in order to ensure the continuity and the accuracy of phase, we adopted an extension DACM algorithm. The DACM algorithm transforms arctan function into derivative operation [5], which can be expressed as: Finally, the integration stage is expressed in discrete form as: Although the extended DACM algorithm solves the phase ambiguity problem, the heartbeat frequency is so small that it is easy to drown in the respiratory harmonic frequency and noise. Therefore, phase difference processing is performed to enhance the heartbeat signal. Phase difference is the difference between adjacent continuous phase values, namely, φ(k) − φ(k − 1). This differential phase expression can suppress phase shift and enhances the continuity of heartbeat signals.
Adaptive EEMD Recognition Method
After phase extraction of Tc (Tc/Tr sampling frames should cover at least one respiratory and heartbeat cycle) time, we first extracted the respiratory and heartbeat waveform, and then separated the respiratory and heartbeat waveform. In fact, since the displacement of the chest cavity is the result of the joint action of respiration and heartbeat, the phase term not only includes the components of respiration and heartbeat, but also the harmonic components of respiration [17,[19][20][21]. The displacement information of vital signs x(t) can be expressed as: After discretization, it can be expressed as: is the m-th harmonic of breath. Amplitude, frequency and phase are a m , f m and ϕ m respectively. In fact, the respiration rate is about 0.1~0.5 Hz, the heartbeat rate is about 0.8~2 Hz, and the amplitude of respiratory vibration is about 10 times higher than that of heart rate. Therefore, the respiratory harmonic frequency may be close to the heartbeat frequency and have a similar or even higher amplitude [31]. Therefore, traditional methods may misjudge heartbeat signals in the frequency domain. EMD has been proved to have the potential ability to deal with this problem [27][28][29]31]. EMD can avoid the impact of respiratory harmonics on heartbeat signals to a certain extent and separate the correct heartbeat signals.
However, EMD has problems such as modal mixing and endpoint effect [27]. In order to solve these problems, Ensemble Empirical Mode Decomposition (EEMD) is proposed [32]. EEMD uses Gaussian white noise to wrap intermediate frequency signals and the dyadic filter banks of EMD. The key of EEMD is to add random white noise to the analyzed signal. However, noise in public places belongs to natural noise and is considered as a symmetrical alpha stable distribution [33,34]. Therefore, an improved EEMD method is proposed in this paper, which uses a symmetric alpha stable distribution sequence instead of a Gaussian distribution for feature extraction of vital signals.
The simulated signal is shown in Figure 4. Parameter Settings are shown in Equation (14), where f re = 0.32 Hz and f hr = 1.54 Hz. The sampling rate was set to 20 Hz, and the phase signal was added with Gaussian white noise with a signal-to-noise ratio of 0.2. REVIEW 8 is proposed in this paper, which uses a symmetric alpha stable distribution sequen stead of a Gaussian distribution for feature extraction of vital signals.
The simulated signal is shown in Figure 4. Parameter Settings are shown in Equ (14), where = 0.32 Hz and = 1.54 Hz. The sampling rate was set to 20 Hz, an phase signal was added with Gaussian white noise with a signal-to-noise ratio of 0.2 In Figure 5a, the original time-varying phase signal is decomposed into four intrinsic mode functions (IMF). IMF2 can be identified as a respiratory component. IMF1 is the component closest to the heartbeat, but it can be seen from Figure 5b that its spectrum contains three components, which constitutes the mode mixing problem. It can be found that it is difficult to distinguish the heartbeat from the spectrum of the original signal. On the other hand, from Figure 6a, we can see that the EEMD with Gaussian white noise decomposed the original signal into 10 IMFs. The spectra of IMF3, IMF5 and original signals are shown in Figure 6b,c. The frequency corresponding to the protruding peak of IMF3 in the range of 1~2 Hz is very close to the real value of heartbeat frequency. However, the interference term still exists in the range of 0.8~1 Hz, which is identified as the third harmonic of respiration. Therefore, we propose an EEMD adaptive recognition method based on the stable distribution of alpha to improve heartbeat measurement. The probability density function of symmetric alpha stable distribution [33] can be expressed as: where v(θ) = 1 We use the maximum likelihood method to estimate the value of parameter alpha. The second, third, fourth and sixth breath harmonics and the SNR was set to 0.2. IMF5 and IMF3 are considered to be the closest respiratory a components, respectively. The heart rate can be estimated from the IMF3 spectrum of ( Figure 4, that is, F re = 0.32 Hz and F hr = 1.54 Hz. The second, third, fourth and sixth breath harmonics were added, and the SNR was set to 0.2. IMF5 and IMF3 are considered to be the closest respiratory and heartbeat components, respectively. The heart rate can be estimated from the IMF3 spectrum of (c).
Symmetric stable alpha distribution random variable X_noise is generated, and the original signal is wound with X_noise instead of gaussian distribution noise. The original signal is reconstructed as: The X_noise i, i = 1, 2, 3, . . . is an independent symmetric stable alpha distributed noise, N std is signal-to-noise ratio.
x(t) is the original signal, The original signal is wrapped by different noises of alpha distribution, join n times, total average number of n is set. The larger n is, the smaller the reconstructed signal noise is. EMD decomposition is performed for the signals with noise addition and noise reduction, respectively, and the mean value of the decomposed two groups of IMF is calculated, and then the mean value of the N group of IMF is calculated to obtain the final IMF group component.
Suppose x[n], n = 1, 2, . . . N is the discrete form of the x(t). IMFq represents the qth IMF output obtained by the method proposed in this paper, and EMDq() represents the qth mode obtained by EMD [30].
The qth IMF is: Relationship between q-order residual and q − 1 order residual as: qresidual q−1 [n] is the discrete form of the residual signal. If the conditions for the end of EMD decomposition are completely followed, the original signal decomposition will get all IMF. However, we do not need to get all IMF. According to the characteristics of respiration and heartbeat signals, if we can identify the IMF components corresponding to respiration and heartbeat, we can also terminate the entire iteration without obtaining all IMF components, and the number of iterations will be reduced, thus reducing the calculation time [30]. The frequency and amplitude range of respiration and heartbeat signals are shown in Table 2. If the sampling time window is the Tr, MaxNUMq is the IMFq maximum points, AmpMax is the maximum average, MinNUMq is the IMFq minimum point, AmpMin is minimum mean: Iteration termination frequency standard: Iteration termination amplitude criteria: α min and α max is minimum and maximum amplitude, respectively, breathing FreMin br and FreMax br is the minimum and maximum respiratory frequency, respectively. we can choose according to Table 1. If the inequality in Equations (23) and (24) is satisfied, the iteration is terminated; otherwise, the decomposition of the next IMF component is carried out until the resulting residual is no longer decomposable (the residual has at least two extreme values).
In order to illustrate the superiority of the proposed method, the spectral comparison of EMD, EEMD and the proposed method is presented in Figure 7. The parameter Settings are the same as those in Figures 5 and 6. It can be seen from the figure that the spectra of heartbeat components obtained by using the method in this paper are purer than those obtained by using the other two methods. The prominent peaks can be clearly identified, and the corresponding frequencies of the peaks are very close to the real values. Compared with EMD, the IMF spectrum of EEMD has filtered out the respiratory and low harmonic components, indicating that the mode mixing problem can be alleviated to a certain extent. However, at the high frequency part of the spectrum, especially around 0.8 Hz, EEMD outputs a significant spectral peak, which interferes with the correct estimation of the heartbeat. of EMD, EEMD and the proposed method is presented in Figure 7. The parameter Sett are the same as those in Figures 5 and 6. It can be seen from the figure that the spectr heartbeat components obtained by using the method in this paper are purer than th obtained by using the other two methods. The prominent peaks can be clearly identif and the corresponding frequencies of the peaks are very close to the real values. C pared with EMD, the IMF spectrum of EEMD has filtered out the respiratory and harmonic components, indicating that the mode mixing problem can be alleviated certain extent. However, at the high frequency part of the spectrum, especially around Hz, EEMD outputs a significant spectral peak, which interferes with the correct estima of the heartbeat. Figure 5, that is, F re = 0.32 Hz and F hr = 1.54 Hz. The second, third, fourth and sixth breath harmonics were added, and the SNR was set to 0.2. Figure 7b shows the spectrum comparison of EMD, EEMD and the heartbeat IMF obtained by the method in this paper. Compared with EMD, the IMF spectrum of EEMD filters out respiration and its low-order harmonic components, indicating that the mode mixing problem can be alleviated to a certain extent. However, in the high frequency part of the spectrum, especially around 0.9 Hz, EEMD still outputs an obvious spectrum peak. The reason for this is that the noise added to the original signal will bring residual signal after each decomposition and interfere with the subsequent modal decomposition. On the other hand, the method proposed in this paper first decomposes the noise into a series of IMF, and then adds them to the corresponding signal, reducing the noise residue after each decomposition, thus alleviating the above problems.
In order to test and compare the time of modal components obtained by EEMD proposed in this paper, EEMD and CEEMDAN under the same conditions, simulated signal decomposition was performed on a desktop computer equipped with an Intel(R) Core(TM) i5-9400 CPU (2.9 GHz) processor and 8 GB RAM(Intel Corporation, Santa Clara, CA, USA). Table 3 shows the time used for EEMD, CEEMDAN, and the method proposed in this paper. It can be seen that, compared with EEMD, the proposed method can save about 2/5 or even half of the time. The more complex the signal is, the more the intrinsic modes are, the more the advantages of this method can be demonstrated (refer to the termination conditions of modal decomposition in this paper). In practical application, the data update interval of non-contact vital signs monitoring system is generally 2~6 s. Assuming that the rate is 20 Hz, the length of the updated sample data is consistent with the above situation, which indicates that the method in this paper has great potential in real-time processing.
Estimation of Heart Rate
For respiration and heartbeat signals with a finite period, the corresponding autocorrelation function will gradually become zero, and a peak value corresponding to the basic period multiple will appear in time: (25) where ρ is the normalization factor, k is the sampling point, m is the lag time, and * is the conjugate. First, the autocorrelation function of heartbeat and respiratory signals in IMF component is calculated. Then, the time corresponding to the autocorrelation peak in the heartbeat and breathing time interval is estimated, whose reciprocal is the corresponding breathing and heart rate. Equation (26) is the calculation method of heart rate and respiration rate: where F s is the sampling rate, T minute is measuring the duration, L Size is breathing or heartbeat IMF component in phase, the total length of data R numpeaks is breathing or heartbeat IMF component of the wave number. T minute = 60 s means the number of heart beats and breaths per minute.
Experimental Results
In the experiment, the 77 GHz~81 GHz Texas Instruments (TI)' millimeter-wave sensor IWR1843 (State of Texas, US) was used in the front of the radar. The original data of the beat signal is sampled by the on-chip ADC and transmitted to the PC through the DCA1000(Texas Instruments, Dallas, TX, USA), a special data acquisition board for millimeter wave radar sensor of TI Company. After the original data is collected, MATLAB (MathWorks, Natick, MA, USA) is used for signal processing as described in Section 3. Experimental application scenarios are shown in Figure 8a. The subjects sat in front of the radar, remained stationary, and wore a smart MI3 wristband to pick up heart rates. It should be noted that, although people sit in front of the radar and remain stable, there are still noises and interference from the human body and the surrounding environment.
meter wave radar sensor of TI Company. After the original data is collected, MATLAB (MathWorks, Natick, MA, USA) is used for signal processing as described in Section 3. Experimental application scenarios are shown in Figure 8a. The subjects sat in front of the radar, remained stationary, and wore a smart MI3 wristband to pick up heart rates. It should be noted that, although people sit in front of the radar and remain stable, there are still noises and interference from the human body and the surrounding environment. IWR1843 has three transmitters and four receivers, we use two transmitters TX and four receivers RX. We use time-division multiplexing (TDM) launch chirp pulse alternately. A single pair of TX/RX antennas can detect the heartbeat and breathing of an individual. We send a chirp pulse with duration Tc = 60 s alternately (In some cases, the target is far away from the radar. In order to obtain a longer detection range, two TXs can be used to transmit signals simultaneously to increase the signal gain. The idle time between pulses is 6 s, sawtooth frequency modulation slope is K = 70 MHz/ s, frame rate is 50 ms, and chirp sampling points are 256. The specific frequency chirp form is shown in Figure 8b, the ADC sampling rate is 5209 ksps. The observation time includes at least two cycles of respiratory signals. In order to better analyze the heartbeat rate and respiration rate per minute, we set the observation time T = 60 s. The specific radar parameter settings are shown in Table 4.
Identify Target Range
During the experiment, the human body was always in front of the radar (see Figure 8a), meaning that the regular vibrations of the target's body were caused only by heartbeat and breathing.
As described in Section 3, a stationary object near the target and the periodic motion target will affect the accurate extraction of heart rate and breathing. In order to verify the IWR1843 has three transmitters and four receivers, we use two transmitters TX and four receivers RX. We use time-division multiplexing (TDM) launch chirp pulse alternately. A single pair of TX/RX antennas can detect the heartbeat and breathing of an individual. We send a chirp pulse with duration Tc = 60 s alternately (In some cases, the target is far away from the radar. In order to obtain a longer detection range, two TXs can be used to transmit signals simultaneously to increase the signal gain. The idle time between pulses is 6 s, sawtooth frequency modulation slope is K = 70 MHz/µs, frame rate is 50 ms, and chirp sampling points are 256. The specific frequency chirp form is shown in Figure 8b, the ADC sampling rate is 5209 ksps. The observation time includes at least two cycles of respiratory signals. In order to better analyze the heartbeat rate and respiration rate per minute, we set the observation time T = 60 s. The specific radar parameter settings are shown in Table 4.
Identify Target Range
During the experiment, the human body was always in front of the radar (see Figure 8a), meaning that the regular vibrations of the target's body were caused only by heartbeat and breathing.
As described in Section 3, a stationary object near the target and the periodic motion target will affect the accurate extraction of heart rate and breathing. In order to verify the static clutter filtering method, in this experiment, we place not only static objects but also linear moving objects around the target.
As shown in Figure 9a, before the static clutter filter processing, the radar detects moving targets and stationary objects, and it is difficult for us to distinguish the location of the tested targets. After static clutter filtering, as shown in Figure 9b, we can see that only the tested targets exist in the whole picture, which proves the effectiveness of static clutter filtering. Through this method, we can define the test range. This can avoid the influence of moving target's displacement at the same distance from the subject to vital signal extraction and improve the accuracy of estimation of heart rate and respiration rate.
of the tested targets. After static clutter filtering, as shown in Figure 9b, we can see t only the tested targets exist in the whole picture, which proves the effectiveness of sta clutter filtering. Through this method, we can define the test range. This can avoid influence of moving target's displacement at the same distance from the subject to v signal extraction and improve the accuracy of estimation of heart rate and respiration ra (a) (b) Figure 9. Before (a) and after (b) static clutter filtering technology for echo signal, the raised line the figure represent targets detected by radar, it can be seen that, before processing, the radar dete moving targets and stationary objects, and it is difficult for us to distinguish the location of the s ject target. After processing, we can see that only the subject target exists in the whole picture.
As shown in Figure 10, we can see a significant peak on the spectrum, with the tar distance of 0.65 m from the radar, and the peak spread distance of about 25 cm on spectrum. This is caused by the range resolution of the 77 GHz radar, which is the abi to distinguish two or more objects. When two objects are close enough, the radar syst will no longer be able to tell them apart. We know that the radar range resolution 2 ⁄ . As shown in Table 3, is 4 GHz, the range resolution is 4 cm. The vibration d placement around the chest caused by breathing and heartbeat is also reflected in the dar echo. . Before (a) and after (b) static clutter filtering technology for echo signal, the raised lines in the figure represent targets detected by radar, it can be seen that, before processing, the radar detects moving targets and stationary objects, and it is difficult for us to distinguish the location of the subject target. After processing, we can see that only the subject target exists in the whole picture.
As shown in Figure 10, we can see a significant peak on the spectrum, with the target distance of 0.65 m from the radar, and the peak spread distance of about 25 cm on the spectrum. This is caused by the range resolution of the 77 GHz radar, which is the ability to distinguish two or more objects. When two objects are close enough, the radar system will no longer be able to tell them apart. We know that the radar range resolution for C/2B. As shown in Table 3, B is 4 GHz, the range resolution is 4 cm. The vibration displacement around the chest caused by breathing and heartbeat is also reflected in the radar echo.
(a) (b) Figure 9. Before (a) and after (b) static clutter filtering technology for echo sign the figure represent targets detected by radar, it can be seen that, before processi moving targets and stationary objects, and it is difficult for us to distinguish the ject target. After processing, we can see that only the subject target exists in the As shown in Figure 10, we can see a significant peak on the spectru distance of 0.65 m from the radar, and the peak spread distance of ab spectrum. This is caused by the range resolution of the 77 GHz radar, w to distinguish two or more objects. When two objects are close enough will no longer be able to tell them apart. We know that the radar ra 2 ⁄ . As shown in Table 3, is 4 GHz, the range resolution is 4 cm. placement around the chest caused by breathing and heartbeat is also dar echo.
Results
During the experiment, our system continuously collects data and randomly selects a frame for processing. EMD method and the method proposed in this paper are used to process the original data, and the decomposition results are shown in Figures 11 and 12, respectively. As shown in Figure 11a, different modes are mixed and contribute a peak between 0.5-0.8 Hz, the amplitude of which even exceeds that of the heartbeat. In Figure 11b, it is difficult to distinguish heartbeat signals from respiratory harmonics. On the other hand, the experimental results in Figure 12 are consistent with those in Figure 7, where the respiratory and heartbeat components are successfully identified as IMF5 and IMF3, respectively. The heartbeat spectrum decomposed by this method is clearer, and the estimated frequency is 1.2 Hz, which is close to the test result of the MI3 bracelet. Figure 10. Target range, radar target distance is 0.65 m, radar range resolution is /2B, r lution is 4 cm.
Results
During the experiment, our system continuously collects data and random a frame for processing. EMD method and the method proposed in this paper ar process the original data, and the decomposition results are shown in Figure 11 a 12, respectively. As shown in Figure 11a, different modes are mixed and contribu between 0.5-0.8 Hz, the amplitude of which even exceeds that of the heartbeat. 11b, it is difficult to distinguish heartbeat signals from respiratory harmonics. On hand, the experimental results in Figure 12 are consistent with those in Figure the respiratory and heartbeat components are successfully identified as IMF5 a respectively. The heartbeat spectrum decomposed by this method is clearer, an mated frequency is 1.2 Hz, which is close to the test result of the MI3 bracelet. In order to verify the superiority of this method, the spectral comparison of heartbeat IMF is presented in Figure 13. The results of Figure 13a-d are respectively from four randomly selected fragments of actual signals. Heartbeat component characteristics obtained by using the method in this paper are more obvious, and it is easier to determine the heartbeat frequency with this system. Although the results obtained by EEMD are much better than those obtained by EMD, there are still residual errors of respiration and its harmonic components, and the magnitude of residual errors of respiration harmonic components is larger than that of heartbeat components, which is consistent with the simulation results in Figure 6. In order to verify the superiority of this method, the spectral comparison of heartbeat IMF is presented in Figure 13. The results of Figure 13a-d are respectively from four randomly selected fragments of actual signals. Heartbeat component characteristics obtained by using the method in this paper are more obvious, and it is easier to determine the heartbeat frequency with this system. Although the results obtained by EEMD are much better than those obtained by EMD, there are still residual errors of respiration and its harmonic components, and the magnitude of residual errors of respiration harmonic components is larger than that of heartbeat components, which is consistent with the simulation results in Figure 6.
In order to verify the robustness of our proposed method, we conduct a long time flow experiment to verify the spectral estimation performance of our algorithm. During the experiment, the sampling rate was 20 Hz, and the stream data of 5000 frames (250 s) were analyzed with a 200-frame distance window (10 s) and a 20-frame interval time (1 s). The time-varying spectrum of heartbeat IMF is shown in Figure 14. It can be seen that in the whole-time frequency spectrum, heartbeat changes are within the range of 1.2-1.4 Hz (72-84 bpm). The reliability and robustness of the proposed method in longterm measurement are proved. In order to verify the robustness of our proposed method, we conduct a long time flow experiment to verify the spectral estimation performance of our algorithm. During the experiment, the sampling rate was 20 Hz, and the stream data of 5000 frames (250 s) were analyzed with a 200-frame distance window (10 s) and a 20-frame interval time (1 s). The time-varying spectrum of heartbeat IMF is shown in Figure 14. It can be seen that in the whole-time frequency spectrum, heartbeat changes are within the range of 1.2-1.4 Hz (72-84 bpm). The reliability and robustness of the proposed method in long-term measurement are proved.
We use root mean square to evaluate the error between our proposed method and the contact test method, and we define root mean square as: where is the heartbeat frequency measured using the method proposed in this paper, is the heartbeat rate measured using MI3, is the number of tests. During the experiment, three adult males and three adult females were selected to test the heartbeat rate at the same radar range, and 150-s stream data were recorded to calculate the root mean square error between the method in this paper and the MI3 brace- We use root mean square to evaluate the error between our proposed method and the contact test method, and we define root mean square as: where f EEMD is the heartbeat frequency measured using the method proposed in this paper, f WATCH is the heartbeat rate measured using MI3, N is the number of tests. During the experiment, three adult males and three adult females were selected to test the heartbeat rate at the same radar range, and 150-s stream data were recorded to calculate the root mean square error between the method in this paper and the MI3 bracelet test results, as shown in Table 5. The details are shown in Table 5. It can be seen that the root mean square error between the heart rate measured by the method in this paper and the results measured by the MI3 bracelet is less than 3. For the same subject, the longer the detection time, the more data frames, and the smaller the root mean square error. When the detection time is greater than or equal to 150 s, the root mean square error is basically close to 2, that is, the heart rate error tested in this paper and the heart rate error tested by MI3 bracelet is below 4 bpm. As shown in Table 6, the results of the proposed method and other methods are compared. Heart rate accuracy improved by about 5%. Experimental results show that this method has good adaptability and reliability. Male1 85 89 82 92 87 87 85 Male2 76 84 74 87 81 79 77 Male3 70 82 72 78 75 72 69 Female1 73 81 78 81 79 72 70 Female2 64 77 69 75 71 67 65 Female3 81 94 90 90 87 85 83
Discussion
The extended DACM allows us to obtain valid heartbeat and respiration-related phase information after correcting for DC offset using the center of the circle dynamic tracking algorithm. The static clutter filtering algorithm, as shown in Figure 9, has great advantages in the extraction of periodic motion signals, reducing irregular wave phenomenon and effectively reducing noise, while retaining the time-frequency characteristics of the original signal.
EEMD algorithm is an improved algorithm of EMD, which solves the problems of modal mixing and endpoint effect in EMD algorithm. However, after the decomposition of the original EEMD algorithm, the residual noise of each IMF component will affect the effective signal. Therefore, this paper proposes an improved EEMD algorithm to extract and distinguish heartbeat and respiratory signals. Firstly, positive and negative white noise were added to the original signal, and the residual noise was reduced after decomposition. Then, according to the characteristics of ambient noise, we added a-distributed noise to the original signal. Finally, according to the characteristics of the signal itself, the number of iterations is determined adaptively, which improves the accuracy and efficiency of EEMD algorithm. In Figure 13, we use the EMD algorithm, EEMD algorithm and improved EEMD algorithm, respectively, to decompose the same signal, which is consistent with our simulation results and verifies the superiority of the improved EEMD algorithm. In order to evaluate the practical value of our algorithm, we analyzed the results obtained by contactor MI3 and the improved EEMD algorithm, as shown in Table 4. The results show that the error between the algorithm and contactor MI3 is less than 4 bpm, which has practical value, and also shows the potential and prospect of contactless detection in the future.
Conclusions
In this paper, a 77 GHz FMCW radar is used to obtain respiratory and heartbeat signals by extracting radar intermediate frequency phase information. This paper systematically introduces radar data signal processing flow and parameter configuration. Some methods are proposed to ensure the accuracy and reliability of life signal extraction, and the results are compared with those of contact equipment. A static clutter filtering method is proposed to eliminate the interference of moving and stationary objects in the target range. An adaptive EEMD method based on symmetric α stable distribution is used to extract human vital signs signals. Experimental results show that the static clutter filtering method can effectively eliminate the interference of non-periodic motion and stationary objects within the target range. The improved EEMD algorithm can effectively distinguish heartbeat and respiration components and accurately estimate heart rate with a root mean square error less than 4 bpm, which proved the feasibility and effectiveness of FMCW radar in remote vital signs monitoring. | 10,918.6 | 2022-08-25T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Parametric gain and wavelength conversion via third order nonlinear optics a CMOS compatible waveguide
We demonstrate sub-picosecond wavelength conversion in the C-band via four wave mixing in a 45cm long high index doped silica spiral waveguide. We achieve an on/off conversion efficiency (signal to idler) of +16.5dB as well as a parametric gain of +15dB for a peak pump power of 38W over a wavelength range of 100nm. Furthermore, we demonstrated a minimum gain of +5dB over a wavelength range as large as 200nm.
A key reason for this success has been the development of highly nonlinear waveguides, in silicon [7][8][9][10][11][12] and nonlinear glasses such as heavy metal oxides [6] and chalcogenide glasses [13][14][15][16][17].In particular, the first report of gain on a chip was in dispersion engineered silicon nanowires [8], where a net on-chip parametric gain of +1.8dB over 60nm was first reported.Since then, a net on-chip gain of over +30dB was obtained in chalcogenide glass waveguides over a 180nm bandwidth [15].However, despite this progress, there is still a strong motivation to explore new material platforms in order to achieve the ultimate objective of high nonlinearity together with extremely low linear and nonlinear losses, as well as manufacturability, material reliability and ultimately, CMOS compatibility.
Recently [18][19][20][21][22] we have demonstrated efficient low-power nonlinear optics in very low loss waveguides and ring resonators, in a high index doped silica glass platform.The key advantages of this system are extremely low linear and nonlinear losses together with high reliability and CMOS compatibility.In this paper, we exploit this same platform to demonstrate net parametric gain via degenerate four wave mixing (D-FWM) in a 45cm long spiral waveguide, obtained with sub-picosecond pump and probe pulses.Uniform nonlinear waveguides offer many advantages over resonant structures like ring resonators, such as much wider spectral bandwidths, while at the same time posing different challenges such as requiring larger pump powers -on the order of watts in our case [18], versus milliwatts for ring resonators [19][20][21][22].
We achieve a signal to idler conversion efficiency of +16.5dB, as well as parametric gain for the signal of +15dB, with 38W of pump power .While the pumping levels are larger than that for chalcogenide waveguides (our threshold power for 0dB gain is 17W, versus 2W for chalcogenides), our platform has a comparable tuning range (for the same level of gain), and, most importantly, shows no intensity-dependent saturation.Our results are a consequence of extremely low linear (< 0.06dB/cm) and nonlinear losses (absent up to 25GW/cm 2 , corresponding to 500W peak power over an effective area of 2μm 2 in our device [21]), a high effective waveguide nonlinearity (220W -1 km -1 ), and near optimum dispersion characteristics (small and anomalous) exhibited by our device in the C-band.The low dispersion also results in a remarkably large bandwidth of almost 200nm (signal to idler), while the high material stability, manufacturability and CMOS compatible fabrication of this integrated platform are attractive features for developing practical devices for systems applications.
Device
The device under investigation is a 45cm long spiral waveguide with a rectangular cross section core of 1.45 μm x 1.50 μm composed of high index doped silica glass [19][20][21][22] (ncore= 1.7 @ 1550nm) surrounded by silica, on a silicon wafer.The layers were deposited by chemical vapor deposition and the spiral was patterned with high resolution optical lithography followed by reactive ion etching.The 45cm long spiral waveguide is contained within a square area as small as 2.5mm x 2.5mm and it is pigtailed to single mode fibers, with a pigtail coupling loss of 1.5dB/facet.The properties of the material and waveguide dimensions have been engineered to reduce the material dispersion near =1550nm, with an anomalous group dispersion ( β2 ) over the wavelength range studied here.Measurements of dispersion in this device [21,23] show that for a TE polarization, β2 is anomalous for wavelengths shorter than 1600nm, varying from 0 to -20 ps 2 /km at 1480nm, and so it is ideal (small and anomalous) over most of the L-band, the C-band and indeed well into the S-band.For a TM polarization, the dispersion is also small and anomalous below 1560nm (most of the C-band).From [21,23] we estimate the third order dispersion, β3 , to be small and to have the same sign as β2 at -0.3ps 3 /km.This very wide anomalous dispersion wavelength range enables a very large FWM phase matching tuning range [8,15,24,25].
Theory
The model used to fit the experimental data is the standard (1+1) nonlinear Schrödinger equation for dissipative-dispersive systems [24]: where A(z,t) is the optical envelope, z is the propagation coordinate, T is a moving time reference defined as T=t-z/vG (here t is the temporal coordinate and vG the group velocity).
The parameters andrepresent the second and third order dispersion, respectively, while is the effective nonlinearity and is the attenuation in the spiral waveguide.We note that the losses of the input and output fiber pigtails were also accounted for when solving Eq. ( 1), which was integrated via a standard pseudo spectral approach.The dispersion coefficients were obtained as a best fit of the group velocity dispersion, reported in [21].Gaussian pulses were assumed for both the input pump and signal envelopes.
Experiment
Sub-picosecond pulses for the pump and the signal were obtained from an OPO system (OPAL, Spectra Physics Inc,) generating 180fs long pulses at a repetition rate of 80MHz.The broadband pulse source (bandwidth = 30nm) was split and filtered by two tunable Gaussian bandpass filters operating in transmission, each with a -3dB bandwidth of 5nm, in order to obtain synchronized and coherent pump and signal pulses at two different center wavelengths, each with a pulsewidth of ~700fs.The pump and signal pulses were then combined into a standard SMF using a (90/10)% beam splitter and then coupled into the spiral waveguide.Pulse synchronization was adjusted by means of an optical delay line, while power and polarization were controlled with a polarizer and a λ/2 plate.Both pump and probe polarizations were aligned to the quasi-TE mode of the device.
Results and Discussion
Fig. 1 shows the measured spectral power densities at the output of the waveguide for a pump wavelength pump=1525nm and three signal wavelengths at signal=1480nm; 1490nm; and 1500nm, respectively.The pump peak power coupled inside the waveguide was varied between 3 and 38W, while the signal peak power was kept constant at 3mW.It is clear that the signal was efficiently converted and amplified into an idler in the C band at wavelengths of idler=1578nm; 1565nm; and 1547nm for all three signal wavelengths.The small discrepancy between theory and experiment in the spectra arises from the non-ideal Gaussian pulses used in the experiments.This was due to a number of factors including the deviation from a Gaussian spectral profiles in the low intensity wings of the spectra as well as to residual chirp on the pulses (see below).
For pump powers larger than 25W, despite the fact that we found the idler and signal pulses to be comparable in power, we observed cascaded DFWM only on the idler side at 1625nm.This indicates that the cascaded interaction between the pump and signal pulses at 1525nm -1578nm (producing a cascaded signal at 1625nm) is phase matched, while the other interaction between the pump and the idler at 1425nm and 1480nm (generating light at 1525nm) is not phase matched.Hence, the dispersion of the refractive index relative to the center wavelength (1525nm) must be asymmetric, implying a significant contribution from , assisted by the low absolute value of [21].As previously addressed, the variation of the experimentally measured GVD dispersion [21] in our device indicates a < 20ps 2 /km over the wavelength range considered here, with a on the order of -0.3ps 3 /km (with the same sign as ).Our numerical analysis confirms that this value is consistent with the experiments (Fig. 1).Fig. 2 compares theory with the experiments for a 1480nm signal and a 1525nm pump.Note that all of the theoretical curves in Fig. 2, except for Fig. 2b), include pulse walkoff effects.Fig. 2a shows the output spectra for a 40W pump, and clearly shows a good agreement with theory based on ideal transform limited Gaussian input pump and signal pulses with a spectral width of 5nm.As was the case for Fig. 1, the slight deviation between theory and experiment is the result of a deviation of the pump spectrum from an ideal Gaussian profile.For comparison, we also show the theoretical CW gain spectrum in Fig. 2b, which is noticeably higher than the pulsed case, and which represents an upper limit for the pulsed FWM where walk-off between the pump and signal pulses reduces the conversion efficiency.Figs.2c and 2d show the peak FWM gain for the idler and signal respectivelyie., the wavelength conversion efficiency (to the idler) and the parametric gain of the signal.The experimental results (black dots) show good agreement with theory (solid red line) that includes walk-off, and both of these are reduced somewhat from the theoretical CW curve (dashed red line) where pump/signal pulse walk-off is absent.Here again, the slight deviation between theory and experiment below pump powers of 25W, observed in Fig. 2(c), is the result of a deviation of the pump spectrum from the ideal Gaussian profile, as well as of a small residual chirp that induces an asymmetric self phase modulation of the pump, visible in both Figs. 1 and 3 (below).The experiments (Figs. 1 and 2) show that the FWM bandwidth becomes wider and flattens, as the pump power increases.For pump powers < 22W, there is quite a sharp roll-off in conversion efficiency as a function of the pump-signal separation, whereas for higher powers >30W, the gain becomes much flatter (and larger).This is also observed in the theoretical CW gain (Fig. 2b) for a 38W pump, where the two gain peaks are separated by more than 100nm, with a -10dB bandwidth for each lobe of > 50nm.
We define the "on/off" conversion efficiency as the ratio of the transmitted pulse energy, rather than peak power, of the idler (signal) to the transmitted signal without the pump [8].This allows us to account for the spectral broadening due to XPM, which lowers the spectral intensity.The net, or "on-chip", gain is then the "on/off" gain minus propagation losses.The experimental on/off efficiency and parametric gain vs. pump peak power are shown in Fig. 2(c-d), for signal=1480nm, along with theoretical calculations for both a CW and a pulsed pump.For a 38W pump power, we measured a maximum on/off FWM conversion efficiency of +16.5dB from signal to idler, and a parametric gain of the signal of +15dB.This translates into a net on-chip conversion efficiency of +13.7dB and a gain of +12.3dB, when the overall propagation loss of 2.7dB is included.It is important to note that even at the highest pump powers used in these experiments we do not observe any sign of saturation.Fig. 3 shows the results of experiments with very wide signal to pump wavelength spacing, for a pump wavelength of 1550nm and a signal wavelength of 1475nm, with a signal peak power of 15mW and a pump peak power varied up to a maximum value of 46W.The FWM idler spectrum is larger than 30nm for pump powers above 45W, showing a +5dB gain even when seeded at the edge of the FWM gain spectrum.The large tuning range of the FWM gain process allows for the generation of an idler with a wide spectral width, as well as a clear separation of the idler and pump spectra.Numerical analysis shows that the small 3 rd order dispersion term ( β3 ) contributes to a red shift of the idler, as well as to the asymmetric nature of the cascaded FWM discussed previously.The interplay between FWM and XPM broadens the spectral bandwidth when the signal is near the edge of the FWM gain curve.While small, this red-shift could be used to help separate the idler and pump, with transform limited pulses (< 100fs) being recovered by subsequently propagating the idler in a negatively dispersive element, as demonstrated recently [26] for SPM.Finally, we performed experiments with two pump pulses at 1525nm and 1550nm in order to study nonlinear parametric processing in strongly cascaded FWM conditions (Fig. 4).When seeded with two pulses of comparable intensity, FWM can produce a cascade of optical pulses with a well defined relation in frequency and phase [27,28], which is potentially interesting in wavelength division multiplexing optical communication (WDM) and in optical metrology.The two pump configuration has also been proposed and demonstrated to achieve parametric gain of a weak signal over very wide and flat bandwidths [5,29], including a recent demonstration in a silicon nanowire [30], by pumping near the zero dispersion point.In our case, however, we could not demonstrate this since we did not have access to a separate signal pulse.Nonetheless, with dual pump pulses we managed to achieve cascaded FWM (Fig. 4) generation of secondary and even tertiary idlers with pump powers of only 1.2W and 2.4W at 1525nm and 1550nm, respectively.The theoretical output spectrum (red dashed line) shows a weaker short wavelength secondary idler than what we observed in the experiment (black line), and we found that by including a fifth order dispersion coefficient of +10 -2 ps 5 /km the agreement between theory and experiment is improved (red solid line) significantly.This method of estimating is novel and useful, since this parameter is normally not easy to measureparticularly in integrated waveguides.
Conclusions
We have demonstrated net parametric gain and a high wavelength conversion efficiency for four-wave mixing in a 45cm high index doped silica glass spiral waveguide.We achieve wavelength conversion over a > 100nm wavelength range using sub-picosecond optical pump and probe pulses, for pump peak powers of ~ tens of watts.We achieve an on/off parametric gain of +15dB and a wavelength conversion efficiency of +16.5dB for a pump peak power of 38W.We explored the generation of large bandwidth optical pulses for ultrafast all-optical applications.Cascaded FWM via dual pump excitation (with peak powers near was also investigated, and the generation of secondary idlers was achieved.This first demonstration of low power parametric gain via FWM in a CMOS compatible doped silica glass waveguide is promising for all-optical ultrafast signal processing applications, such as frequency conversion, optical regeneration, and ultrafast pulse generation.
Fig. 1 :
Fig.1: Experimental (top) and theoretical (bottom) signal intensity spectra for a 1525nm pump and a 1480 nm (a), a 1490 nm (b), and a 1500nm (c) signal.The legend lists pump peak powers (in pseudocolors from blue to red for increasing powers), while the signal peak power was kept to a constant value of 3mW.
Fig. 2 .
Fig. 2. Gain for a 1480nm signal: (a) Experimental Spectra for a 3mW peak power signal alone (blue) and with a 40W pump (red, thin line), and thick line: numerical model including pulse walk-off effects.(b): Theoretical CW gain.(c-d) FWM gain for idler and signal respectively: measurement (black dots), model in the experimental conditions, pulsed regime (red continuous line) and for a CW regimered dashed line (note, the CW represents the maximum achievable gain for the pulsed case).
Fig. 3 :
Fig.3: Ultra-short pulse generation, seeding the FWM interaction at the edge of the CW gain.Top: Experimental output spectrum for a 1550nm pump and a 1475nm signal.Middle: Theoretical output spectrum.Bottom: theoretical CW gain.The legend lists pump peak powers, while the signal peak power is fixed at 15mW.
Fig. 4 :
Fig. 4: Cascaded FWM for dual pump pulses at powers and wavelengths of 1.2W at 1525nm and 2.4W at 1550nm, respectively.Black solid line: experiment, Red dashed line: theory without and Red solid line: theory with an effective of the order of 10 -2 ps 5 /km. | 3,631.6 | 2010-04-12T00:00:00.000 | [
"Physics",
"Engineering"
] |
Nonconceptualism as the key to a new physicalist strategy in the knowledge argument
This paper presents and defends an alternative version to the so-called strategy of phenomenal concepts (aka PCS) in defense of type B materialism in Jackson’s knowledge argument. Endorsing Ball and Tye’s criticism, I argue in favor of the following claims. First: Mary’s newly acquired content is nonconceptual in the light of all available criteria. Second: Mary’s acquisition of such content is precisely what allows us to explain, at least in part, both her epistemic progress (once released from her confinement) and the increase in her expertize regarding her old PHENOMENAL RED. However, although the acquisition of such nonconceptual content is indispensable, it is sufficient to explain Mary’s epistemic progress. Third: assuming that concepts are mental files, after undergoing the visual experiences of red for the first time, such newly acquired nonconceptual content goes through a process of “digitization” so that it can be stored in the mental file PHENOMENAL RED. Fourth and final claim: it is based on this concept of PHENOMENAL RED, now phenomenally enriched by the newly acquired nonconceptual content, that Mary is able to identify introspectively the phenomenal red of her new experience.
Introduction
I imagine that everyone knows Jackson's tale about Mary. She is a super-scientist that has exhaustive knowledge about color and color vision, but who is trapped in a black-and-white room. One day she is released and contemplates the color red of a ripe tomato for the first time. "Oh, this is what it is like to experience red!" she thinks to herself. According to Jackson's anti-physicalism, the assumption that Mary already possesses a complete set of all physical facts about color and color vision forces the physicalist to confront a problem. If Mary already knows everything about color and color vision in a physical sense and if she learns something new by undergoing the visual experience of red for the first time, the anti-physicalist conclusion is that Mary learns something non-physical about color and color vision, or so Jackson argued in the Eighties. 1 The most popular reaction to the knowledge argument is the assumption that, on her release, Mary acquires new special phenomenal concepts of some physical property or fact she already recognized as a physical concept in her confinement. Following Stoljar, we may call this the phenomenal concept strategy (aka PCS). 2 This paper presents and defends an alternative version to the PCS of B-type materialism in Jackson's knowledge argument. I resume Tye and Ball's claim that there are no phenomenal concepts in the special sense that the PCS requires, namely, concepts that one could only acquire by undergoing the relevant experience of red. By Evan's generality requirement, imprisoned Mary already possesses the concept PHENOMENAL RED. Yet, on her release, Mary acquires a new visual representation of the color red, a content that she could only acquire by undergoing the relevant visual experience of the color red.
I argue in favor of the following claims. First: Mary's newly acquired content is nonconceptual in the light of all available criteria. Second: Mary's acquisition of such content is precisely what allows us to explain, at least in part, both her epistemic progress (once released from her confinement) and the increase in her expertize regarding her old PHENOMENAL RED. However, although the acquisition of such nonconceptual content is indispensable, it is sufficient to explain Mary's epistemic progress. Third: assuming that concepts are mental files, after undergoing the visual experiences of red for the first time, such newly acquired nonconceptual content goes through a process of "digitization" so that it can be stored in the mental file PHE-NOMENAL RED. Fourth and final claim: it is based on this concept PHENOMENAL RED, now phenomenally enriched by the newly acquired nonconceptual content of red, that Mary is able to identify introspectively the phenomenal red of her new experience.
This solution to the Puzzle of Mary seems to be so obvious and so natural that it comes to me as a surprise that it has not occurred to anyone before. It is the only solution that really does justice to the thought, expressed above, that when Mary experiences red for the first time she comes to know something she did not know before. Yet, the defense of my claims here is abductive: the inference to the best explanation. For a question of space, I cannot consider the rival positions.
The remainder of this paper is structured as follows. In the next section, I present the PCS and resume Tye and Ball's key criticisms. In the following section, I present and defend my first two claims: (i) what imprisoned Mary lacks is a nonconceptual representation of phenomenal red; (ii) the acquisition of such nonconceptual content is what explains, at least in part, Mary's epistemic progress, and the increase in her expertize in regard to her old physical concept PHENOMENAL RED. The fourth and last thesis is the corollary of the previous section: it is based on this concept of PHENOMENAL RED, phenomenally enriched by the new nonconceptual content, that Mary becomes able to identify introspectively the phenomenal red of her experience of red.
The PCS and its failure The simplest way of regimenting Jackson's knowledge argument, making it easier to understand the recent criticism of the PCS, is as follows: 3 1. Imprisoned Mary knows everything about the physics of color and color vision. 2. On release, Mary comes to know something new. Therefore, 3. She comes to know something non-physical. Therefore, 4. Physicalism is false. 4 I present the general structure of the PCS as briefly as possible. Proponents of the PCS argue that phenomenal concepts have a special nature. They are not just ordinary concepts used introspectively to pick out the phenomenal character of one's experience; they are special concepts in the precise sense that one can only acquire them when one undergoes the relevant experience and attends to the phenomenal character of that very experience. Thus, on her release, Mary stares at the red color of a ripe tomato for the first time and by attending introspectively to the phenomenal red of her new experience she forms a new concept PHENOMENAL RED. This concept enables her to pick out the phenomenal red of her new experience and hence to know what it is like to experience red.
The rationale that supports the PCS assumes that the strategy accomplishes two tasks. First, it is supposed to make sense of an explanatory gap between physical and phenomenal properties. To be sure, the physicalist cannot accept Chalmers's requirement that from the knowledge of all physical and indexical truths, the physicalist should derive a priori his knowledge of phenomenal facts. Still, the physicalist must provide an a posteriori account for the fact that phenomenal properties are physical properties or at least supervene on physical properties.
The second task the PCS accomplishes is to close the putative ontological gap between those same properties: there is no ontological distinction between phenomenal and physical properties. They are one and the same properties considered from different viewpoints: by physical and phenomenal concepts. Given this, red quale ("what it is like to experience red") is just a physical property that is represented by a newly acquired phenomenal concept.
The PCS faces serious objections. Here, I focus on the objection that I consider to be the most relevant. 5 Phenomenal concepts we apply via introspection to pick out the phenomenal character of our experiences are deferential: they can be possessed even if they are only partially understood. As Tye puts it: "[M]aybe fully understanding a general phenomenal concept requires having had the relevant experience; but if such concepts are like most other concepts, possessing them does not require full understanding" (2009, p. 63). Pace Burge, 6 the color concepts are deferential and can be possessed even when they are only partially understood. 7 The same can be said of phenomenal concepts.
How can we ensure that imprisoned Mary possesses the relevant concept PHENOMENAL RED to pick out the red-quale by introspection? 8 The answer is quite obvious. In her confinement, Mary is able to talk and think about the color red and about phenomenal red just as anyone else who has seen the color red. For example, imprisoned Mary may wonder whether phenomenal green makes people calmer than phenomenal red, or whether phenomenal red makes people more tense and agitated. The key point is that imprisoned Mary's use of the concept PHENOM-ENAL RED easily meets Evans's Generality Constraint. 9 Being able to entertain the thought that visual experiences of ripe apples possess the property of being PHENOMENAL RED, imprisoned Mary is also able to employ the very same concept PHENOM-ENAL RED of any other particular of which she possesses a singular concept.
But this raises a question. Since demonstrative concepts are certainly not deferential, could imprisoned Mary possess a demonstrative concept to pick out the phenomenal character of some experience of red? According to Tye, she could also possess a demonstrative concept. 10 Under the qualia realist assumption that the phenomenal character of experience is an intrinsic property of the brain, Mary could possess such a demonstrative concept of what it is like to experience red just by pointing via a cerebroscope to a brain image of someone experiencing red. This means not only that she already possessed a demonstrative concept, but also that this concept is not a phenomenal concept in the relevant sense of being a concept whose acquisition hinges crucially on the subject having the relevant experience.
The moral is that there are no phenomenal concepts in the special and required sense, namely concepts that one could only possess by undergoing the relevant experience. Yet, it seems quite intuitive that, on her release, Mary learns something. The question is: what accounts for Mary's epistemic progress?
Nonconceptual content
Assuming that PHENOMENAL RED is a deferential concept that Mary already possesses in her confinement and that she learns something by contemplating a ripe tomato for the first time, her epistemic progress takes the form of an increase of her expertize with regard to the concept PHENOMENAL RED. In Burge's famous case the patient learns from his doctor that arthritis only occurs in joints. Yet, if imprisoned Mary already possesses exhaustive knowledge of colors and color vision, how could her expertize with regard to the concept PHENOMENAL RED increase? There is only one reasonable explanation. The only ability that imprisoned Mary could lack is the key ability to discriminate the color red from its surroundings and from its background. Given this, it is this discriminatory ability that must account, at least in part, for Mary's epistemic progress, for that assumption that Mary learns something on her release. Now, this newly acquired discriminatory ability is what Dretske famously called "non-epistemic seeing" in opposition to "epistemic seeing". 11 Epistemic seeing is what he later called "factawareness", that is, a perceptual propositional attitude: I see that something is the case. In contrast, "non-epistemic seeing" is what he later called "object-awareness", that is, a perception of things rather than facts. 12 Be that as it may. What matters for us now is the fact that this non-epistemic seeing takes the form of a visual representation of what Mary encounters on her release, namely the visual representation of the color red of a ripe tomato. Again, if I am right, it is this newly acquired visual representation of the color red that must account, at least in part, for Mary's epistemic progress on her release, that is, the assumption that Mary learns something, learns what it is like to experience red.
The question now is about the nature of this newly acquired visual representational content. According to Dretske's quoted book, this seeing is neither propositional nor conceptual. On her release, by staring at the red color of a ripe tomato, Mary starts to represent the color red non-conceptually for the first time. Likewise, by attending to the phenomenal character of her visual experience, Mary starts to represent the red-quale of her new visual experience. Indeed, that seems to be the missing piece in the whole puzzle of Jackson's knowledge argument. However ingenious imprisoned Mary might have been, possessing exhaustive physical knowledge of the red-quale of experience of red, her mastering of the concept PHENOMENAL RED is not really complete. She misses something quite important, namely the nonconceptual representation of the color red and the nonconceptual representation of the red-qualia. If I am right, what imprisoned Mary lacks is not a new special phenomenal concept (which the PCS requires), which she supposedly could not possess before, but rather a new nonconceptual representation of phenomenal red. But why is content nonconceptual?
First, let me remind the reader of a few main features of the notion of nonconceptual content. It is worth noting, though, that while some features are consensual, others are disputable. To start with, it is agreed that, in general lines, a representational content is nonconceptual when its canonical specification does not require from its bearer any concepts whatsoever involved in this specification. 13 Now, if imprisoned Mary possesses exhaustive physical knowledge of phenomenal red, her newly acquired representation of phenomenal red is independent from her previous concept; indeed, it is independent from whatever concepts she might possess. Given this, the reasonable assumption is that Mary's newly acquired representation of phenomenal red is essentially of a nonconceptual nature. 14 Second, following Dretske, 15 it seems to me to be reasonable to assume that nonconceptual states carry information in the socalled analogical form, as opposed to the conceptual content of propositional attitudes that is more plausibly viewed as digital. The distinction between analog and digital representations was clearly presented by Dretske. Take a certain fact, say the fact or state of affairs that some object S has the property of F. A representation conveying the information that S is F is in digital form iff it contains no additional information about S besides F that is not already nested in S's being F. Contrary to this, if the signal carries additional information about S (which is not nested in S's being F) then the signal carries this information in analog form. 16 Now, when imprisoned Mary thinks that phenomenal green is more relaxing than phenomenal red, her thought carries no additional information that is not already nested information that phenomenal green is more relaxing than phenomenal red. In contrast, after undergoing the visual experience of ripe and unripe tomatoes, Mary's mental state carries additional information that is not already nested information that phenomenal green is more relaxing than phenomenal red.
The third feature is disputable: nonconceptual contents are more fine-grained than conceptual contents. Arguably, I can perceptually discriminate many more colors and shapes than I currently have concepts for. For example, I may be capable of discriminating between two colored chips of very similar shades of red, red1281 and red1282. Yet, even if I am an expert on colors I will probably not have the corresponding concepts. Let's suppose that on her release Mary stares at a quite specific shade of red, say red1297, and immediately attends to the correspondent phenomenal red1297. holds the opposite view according to which nonconceptual contents are less fine-grained than conceptual ones. For one thing, according to him, nonconceptual contents are best modeled as Russellian rather than Fregean propositions, namely as a sequence of particulars and properties. McDowell also disputes this claim since he famously holds that demonstrative conceptual contents can be conceived as being as fine-grained as the putative nonconceptual ones. Be that as it may, I cannot engage in this debate here. For one thing, it will lead me far afield. For another, nothing important hinges on it: if the reader is not convinced, he can leave this constraint as a distinguishing feature of nonconceptual contents aside. Given this, I assume without argument the third feature: nonconceptual contents are more fine-grained than conceptual contents.
My point is as follows. However wide Mary's conceptual repertoire might be, however accurate Mary's vision might be, she will never be able to conceptualize that quite specific shade of phenomenal red1297 that she introspectively represents on her release. Why is this so? Well, she will not be able to retain in her memory the newly acquired representation of red in the first place.
Fourth, nonconceptual contents are essentially involuntary and independent of any judgment or of any other sort of propositional attitude. McDowell also disputes this claim as a distinguishing feature of putative nonconceptual contents since he famously claims that the putative representational content of experience is conceptual, albeit not spontaneous. The question is: can we conceptualize something without holding judgments or beliefs? Again, I cannot engage in this debate here for reasons of space. I assume without argument that conceptual concepts always involve a propositional attitude such as judgments or beliefs. But, as before, nothing important hinges on this: if the reader is not convinced, he can leave this constraint as a distinguishing feature of nonconceptual contents aside.
My point is as follows. A mental state presents the world in one way or other (veridicality conditions) non-conceptually when that representation is independent from the person's will or judgment.
That is exactly what happens to Mary when she stares at the ripe tomato for the first time and attends to the correspondent phenomenal red of her experience. Regardless of her will or judgment, the thing appears to her as red, that is, she represents what appears straight ahead of her as being red. The reasonable assumption, again, is that the newly acquired representation of red is nonconceptual.
The fifth feature is a direct consequence of the fourth. As my nonconceptual states represent the world independently of my will and my judgment, such content is not under the control of my propositional attitudes (the so-called thesis of "cognitive penetration of perception"). According to Pylyshyn (1999), a perceptual system is cognitively penetrable if "the function it computes is sensitive, in a semantically coherent way, to the organism's goals and beliefs, that is, it can be altered in a way that bears some logical relation to what the person knows" (1999, p. 343). To be sure, Pylyshyn's position is far from being consensual. 17 Still, it seems reasonable to me to endorse Pylyshyn's view that "the early vision system does its job without the intervention of knowledge, beliefs or expectations, even when using that knowledge would prevent it from making errors" (1999, p. 414). Let's suppose that on her release Mary sees a ripe tomato, however covered up by a yellow light coming from behind her. Knowing that she has a ripe tomato straight ahead (by smelling it and touching it) and knowing that ripe tomatoes are red will not prevent her from seeing it and representing it by sight as yellow. Now, I want to consubstantiate my claim by considering the famous case of Marianna suggested by Nida-Rümelin (1996). Like Jackson's original Mary, Marianna is kept captive in a black-andwhite room. Unlike Mary, however, when Marianna leaves the black-and-white room, she is led into a technicolored vestibule in which there are various patches of different colors on the walls. At this point, she will have experiences she has not had before of red, yellow, blue, and so forth. Yet, because she sees no apples or tomatoes or hydrants, there is no hint for Marianna as to which is which when she stares at any colored patch on the wall of the room for the first time. As she already possesses the concept PHENOMENAL RED, but is unable to recognize introspectively phenomenal red when she attends to the phenomenal character of her visual experience of a red patch on the wall, the reasonable assumption is that Marianna must be representing the color red and phenomenal red non-conceptually. But why is this so?
First of all, in the technicolored vestibule, Marianna's conceptual expertize about color and color vision is of no use. Again, there is no hint for her that what she is contemplating is a red patch, when she stares at one. Secondly, in the technicolored vestibule, there is no cross-modal processing (Cross-modal processing occurs when two sensory systems interact. For example, information processed in one modality, say hearing, might affect information processed in another modality, say vision, or an experience had in one modality, say vision, might affect the experience had in another modality, say touch). Red patches on the wall have no smell, no texture, no taste, etc. Moreover, conceptual recognition of phenomenal red depends not only on the concept PHENOMENAL RED, but also on the knowledge that what she is seeing is a ripe tomato and on her background knowledge that ripe tomatoes are red. But let us consider Marianna's case in light of the enlisted features one by one.
First, nonconceptual contents are those whose subject does not necessarily possess the required concept to specify the content. Now, while we specify as red the content of Marianna's representation in the technicolored vestibule, she herself is unable to recognize as red what she is visually representing and as phenomenal red when she attends to the phenomenal character of her visual experience of a red patch. Thus, by the very general standard definition of nonconceptual contents, Marianna is introspectively representing phenomenal red independently of her old concept PHENOMENAL RED. Things change when she is finally released from the technicolored vestibule and stares at a ripe tomato in an open space in daylight.
Second, imprisoned Marianna represents phenomenal color red by means of a mental state carrying information in digital form rather than in analog form. For one thing, whenever she thinks of red, her mental state carries no additional information about red that is not already nested in her thought. In contrast, in her technicolored vestibule, when she introspectively attends to the red-quale of her visual experience of red, her mental state carries additional information about the luminosity, about the saturation, about the shape of the patch and so on. So, Marianna's newly acquired representation of phenomenal red is nonconceptual.
Third, despite Marianna's fine-grained conceptual ability to think about several shades of red, by staring at several shades of red on the wall of the vestibule, she is representing quite specific shades of red, say red 2345, 2346, 2347, etc. That is something that outstrips Mary's conceptual ability. Moreover, when Marianna stares at a red patch on the wall of her technicolored vestibule, say of the quite specific red 12354, and introspectively attends to the correspondent phenomenal red 12354, she will probably never be able to conceptualize that quite specific shade of red because she will never be able to memorize that specific shade in the first place. Again, by all accounts, the reasonable assumption is that Mary's newly acquired representation of phenomenal red is nonconceptual.
Mental files
Now, even if that nonconceptual representation of phenomenal red is necessary, it is certainly not enough to account for Mary's epistemic progress. Something else is still missing, namely concepts that Mary should use to pick out the phenomenal red she non-conceptually represents. First, the great majority of our nonconceptual representations are not processed further. They are simply discarded. For one thing, they have no cognitive relevance. For another, if they are not discarded, they end up overloading the cognitive system. Second, and most importantly, they can only contribute to cognition when they are properly conceptualized. In Dretske-based informational semantics, information coded analogically must be digitalized, by "abstracting" from all non-relevant details.
When I resume Tye and Ball's criticism, I assume that there are no phenomenal concepts in the special sense that the PCS requires, namely concepts that pick out the phenomenal character of experience introspectively but that could be acquired by undergoing the relevant experience. Even so, I believe that the vast literature on the PCS can give us a clue as to the proper understanding of how nonconceptual contents could be "brought under concepts" to provide cognition.
Let me start by briefly reconsidering the nature of phenomenal concepts. The locus classicus for the PCS is Loar's paper "Phenomenal States". in which he claims that phenomenal concepts are recognitional concepts. 18 A recognitional concept, unlike a theoretical concept, is applied directly on the basis of perceptual acquaintance with its instances, that is, when we recognize an object "as being one of those". without relying on theoretical knowledge or other background knowledge. Carruthers, Tye, and Levine have endorsed similar accounts in the recent past. 19 In contrast, according to another trend, phenomenal concepts are indexical by nature. 20 They are demonstrative-like concepts that pick out brain states in a demonstrative mode of presentation. The suggestion here is that the epistemic gap between physical and phenomenal properties is similar to the familiar gaps between objective and demonstrative concepts. Phenomenal concepts are thought of here as flexible inner demonstratives that pick out the phenomenal character in introspection in the same way that demonstratives pick out objects in space. A further group of philosophers that are worth mentioning define phenomenal concepts by their conceptual role. Phenomenal concepts and physical concepts are associated with distinct faculties and modes of reasoning. 21 However, according to by far the most popular view phenomenal concepts are quotational concepts. 22 That is, they are concepts that somehow contain the very or phenomenal states or images thereof to which they refer introspectively. A similarly interesting suggestion comes from Papineau (2002;2007). Phenomenal concepts are sensory templates whose function is to accumulate information about the relevant referents by storing copies of experience. The idea is that phenomenal concepts use the copies of experience housed in the file in order to mention the experience. But what is a mental file in the first place here?
The basic idea of mental files is not new in philosophy; it was introduced by several authors. 23 The most influential philosophical elaborations of the idea of mental files are certainly due to Perry and, after him, Recanati. Mental files are mental particulars created in someone's mind with the function of representing objects by storing information about the object's properties. They are meant to be singular concepts or concepts of objects, but their distinguishing feature is their de re character: even though they are opened in the individual's mind to store information about an object's properties, they do not present the object as the item that satisfies those identifying properties but as the object that stands in some relation to the individual herself and, a fortiori, in relation to the file itself. For instance, when a predator sees a prey, a perceptual file opens in its mind to represent the prey by storing information about the prey's salient features. Even though the file hosts information about the prey in the form of the prey's salient features, it does not present the prey as the object that possesses those salient features, but rather as the object that stands in a particular perceptual relation to the predator and, a fortiori, as the object that stands in a demonstrative relation to the perceptual file itself.
The simplest files are the perceptual ones (Perry calls them "buffers"). Even though they are retained in the longer-term memory, they are essentially short-term files whose distinguishing feature is that they are currently attached to the perception of the object they are about. They last only as long as the perceptual relations last. When those relations cease, either the perceptual file disappears or it gets linked to other detached, stable files about the same entity. The information temporarily hosted in the perceptual file about the object's properties is either lost or transferred to other permanent files. Thus, if the predator loses track of the prey, either the information concerning its salient properties is lost or it is transferred to a non-perceptual permanent file that it has on that kind of prey. In the case in picture mental files are what Papineau calls "sensory templates" that house digitalized copies or replicas of the experience in question. All non-relevant information is discarded; e.g., if Mary contemplates a quite specific shade of red124568, her memory retains and houses only something approximately like red12.
Is there still a reason to assume that on her release Mary acquires a new concept? Let us take stock. First, as we have seen, there is no reason to assume that there are special phenomenal concepts in the relevant sense of concepts that could only be acquired on the basis of the experience in question. However, one way of distinguishing concepts is by appealing to the criterion of the cognitive significance. So, according to the Fregean classical example, one can only make sense of someone believing that Hesperus is beautiful while, at the same time, believing that Phosphorus is not, under the assumption that "Hesperus" and "Phosphorus" are different concepts of the same planet, Venus. The (rhetorical) question is: is there any possibility for imprisoned Mary to believe that a visual experience of a ripe tomato is phenomenally red while, on her release, not believing that her present visual experience of a ripe tomato is phenomenally red? The only reasonable assumption here is that Mary simply reuses her old physical concept PHENOMENAL RED to pick out the phenomenal red of her new experience of the color of the ripe tomato. In other words, at least in the case of colors, we use the same concept PHENOMENAL RED to conceptualize what it is like to undergo the relevant experience of red.
So, I come to the second claim. Assuming that concepts are like mental files (or sensory templates), after Mary attends to the phenomenal red of her new visual experience of the color red of a ripe tomato for the first time, a digitalized mental picture is housed in her old concept PHENOMENAL RED. That is not a phenomenal concept in the special sense required by the PCS, but rather a concept enriched by phenomenal pictures. We can call it a phenomenally enriched concept. It is only after the incorporation of the digitalized mental image of red into her old file PHENOMENAL RED that Mary comes to know what it is like to experience red. By means of this enriched concept, attending to the phenomenal red of her experience, she picks it out: "Oh, that is what it is like to experience the color red".
Conclusion
In the light of Evan's Generality Constraint, even color concepts and phenomenal concepts are deferential in the sense that one can possess them without full expertize about them. Resuming Tye and Ball's criticism, that is the case of imprisoned Mary. Given this, what imprisoned Mary lacks is the not the concept PHENOM-ENAL RED, but rather a nonconceptual representation of the color red and of the phenomenal red that she acquires on her release. Mary's epistemic progress is explained, at least in part, by the enhancement of her expertize regarding her concept PHENOM-ENAL RED: even if she does not acquire a new concept, she acquires a new nonconceptual representation of phenomenal red.
Yet, that nonconceptual content without concepts produces no cognition. Thus, the nonconceptual content must be digitalized and so be incorporated into a mental file PHENOMENAL RED. That said, Mary only comes to know what it is like to experience red when her newly acquired nonconceptual representation of phenomenal red is digitalized and housed in her old concept PHENOMENAL RED. Corollary: Mary's discovery is not about a new, unknown fact or property. It is the enrichment of her old concept PHENOMENAL RED. | 7,337.4 | 2020-12-01T00:00:00.000 | [
"Philosophy"
] |
Energy positivity, non-renormalization, and holomorphy in Lorentz-violating supersymmetric theories
This paper shows that the positive-energy and non-renormalization theorems of traditional supersymmetry survive the addition of Lorentz violating interactions. The Lorentz-violating coupling constants in theories using the construction of Berger and Kostelecky must obey certain constraints in order to preserve the positive energy theorem. Seiberg’s holomorphic arguments are used to prove that the superpotential remains non-renormalized (perturbatively) in the presence of Lorentz-violating interactions of the Berger-Kostelecky type. We briefly comment on Lorentz-violating theories of the type constructed by Nibbelink and Pospelov to note that holomorphy arguments offer elegant proofs of many non-renormalization results, some known by other arguments, some new.
JHEP01(2014)134
An alternate view on the positive energy constraints 15 1 Introduction By employing the holomorphic arguments of Intriligator, Leigh, and Seiberg [3], one can show that the full non-renormalization theorems of N = 1 supersymmetry apply unaltered to theories with Lorentz violating (LV) interactions of either the Berger-Kostelecky (BK) type [1] or the Nibbelink-Pospelov (NP) type [2]. The essential point of the proof is that Lorentz symmetry plays no direct role in the holomorphy argument. As long as the normal rules of N = 1 SUSY are followed when constructing the model, and as long as the LV interaction creates no new anomalies or other surprises, then the superpotential will be protected against perturbative quantum corrections, and under appropriate conditions an exact expression for the quantum effective superpotential can be obtained, using nowstandard arguments from [3].
JHEP01(2014)134
The rest of the paper is organized as follows: first, we review the general holomorphy arguments for non-renormalization in supersymmetric theories. Next we examine BK-type theories, demonstrating that they satisfy the conditions of Seiberg's holomorphy argument. Third, we show that BK-type theories require additional constraints on the values of the LV coupling constant in order for the positive energy theorem to hold. Next, we comment briefly on NP-type theories, explaining how holomorphy arguments more or less automatically prove that superpotential LV couplings and potentially divergent FI terms are protected against perturbative corrections. Holomorphy arguments go one step farther, and prove that the NSVZ β-function (in holomorphic coupling) remains subject only to one-loop renormalization and that NP-type LV couplings that enter into the gauge-kinetic function are immune to perturbative renormalization (but still subject to wavefunction renormalization). Finally, we summarize and conclude.
Review of Seiberg's proof by holomorphy in standard supersymmetric theories
The arguments of Seiberg et al. [3] hinge on three key points: 1) respect of symmetries, 2) holomorphy of the superpotential, and 3) the fact that holomorphic functions are completely determined by their singularities and asymptotic behavior [9]. All tree-level couplings in the superpotential are treated as auxiliary fields, or fully-fledged chiral superfields that just happen to be non-dynamical. A coupling that explicitly breaks a global symmetry of the rest of the theory in turn provides a selection rule constraining quantum corrections: since symmetry-breaking terms in the quantum effective potential must ultimately descend from tree-level breaking terms, we can employ the usual "that which is not forbidden is compulsory" algorithm simply by pretending that the coupling itself transforms in just the right way to preserve the broken symmetry. This provides a simple check on whether symmetry-breaking terms in the effective superpotential are consistent with the tree-level breaking terms. This is how Seiberg's prescription respects all symmetries, even the broken ones [3,9]. Lorentz-violating theories themselves almost invariably employ that technique for the LV couplings [1,5]. In much of the Lorentz-violating literature, these transformation properties of the LV couplings are dubbed "observer Lorentz invariance." See [4,5] for detailed discussions. In the recent work of [27], native to the AdS/CFT correspondence, this phenomenon is referred to more simply as diffeomorphism invariance. Holomorphy of the superpotential is a proxy condition for invariance under supersymmetry, given that one is constructing a theory using the formalism of superfields. In some sense, this is just another symmetry to respect, but this symmetry is powerful enough to deserve special mention. Supersymmetry is so restrictive that it enables divergence cancellations in 1-loop diagrams in the traditional, pre-Seiberg proofs of non-renormalization. Part of Seiberg's great insight was that holomorphicity could be taken literally and was every bit as restrictive mathematically as supersymmetry invariance is physically. This leads to point 3, which is the punchline: respect of symmetries makes it possible to write JHEP01(2014)134 down the most general holomorphic function of couplings and superfields for the superpotential. Many coefficients are fixed outright by the requirement of holomorphy. Still more coefficients can be obtained by analyzing the theory in some appropriate limit, since holomorphic functions are completely determined by their singularities and their asymptotics. Often these constraints will completely determine the superpotential [3,9].
Berger-Kostelecky Lorentz violation
Even spacetime symmetries could be viewed by a model-builder as "just another set of symmetries." In the BK-type theories, spacetime symmetries are altered by broken Lorentz invariance, and the superalgebra is modified [1,6]. In NP-type theories, spacetime symmetries are altered, but the superalgebra is not [2,7]. In both cases, the theory can still be described in terms of superfields, and the superpotential is a holomorphic function of said superfields. In the BK-construction these superfields are not necessarily the same as the chiral and vector superfields used in traditional SUSY. Whether or not the superalgebra is modified, invariance under the (possibly modified) supersymmetry is still encoded in the holomorphy of the superpotential.
Berger and Kostelecky begin with an ordinary Wess-Zumino model, then add Lorentzviolating interactions to the Kähler potential. 1 They then show that the resulting Lagrangian is almost invariant under ordinary supersymmetry but becomes completely invariant (up to total derivative terms) under slightly modified SUSY transformations [1]. Fermion and boson propagators are modified in the Lorentz-violating theories, but they retain the parallel structure which is essential for brute-force proofs of divergence cancellation in traditional SUSY theories, leading [1] to very plausibly assert that those divergences should still cancel. Berger and Kostelecky construct modified chiral superfields for their LV SUSY theories, which we will exploit to concisely prove that Berger and Kostelecky were correct about the non-renormalization theorem and divergence cancellation.
Berger and Kostelecky construct LV theories using Majorana spinors following the notation conventions of Wess and Bagger's seminal work [14]. We begin by first rewriting in the slightly more modern notation of [15] and writing an LV Wess-Zumino model for a chiral multiplet with Weyl spinors rather than Majorana. Our chiral superfield for normal SUSY theories is The usual Wess-Zumino Langragian in superfield form is given by More general theories could be constructed by promoting W to an arbitrary holomorphic function of Φ and replacing Φ * Φ with a more general Kähler potential. To facilitate contact with the work of Berger and Kostelecky, we expand the basic Wess-Zumino lagrangian as In the conventional picture (i.e. without using superfields), Lorentz-Violating interactions are added in the form of the following term, L LV [1] which can also be obtained from the original Lagrangian by replacing the derivative operator with a so-called "twisted" derivative operator [6]: This operator is also denoted by ∇ m in [18]. Indeed, many quantities in conventional theories can be extended to BK theories by the replacement ∂ µ →∂ µ and "twisting" all vector indices by the δ α µ + k α µ operator used in (2.6) [6]. This "folk theorem" extends to superfields, as we see when looking at the LV version of the chiral superfield [1]: Building the LV interaction terms into a change of the superfield itself obfuscates the nature of the LV interaction as belonging to the superpotential or the Kähler potential. In [1] it is noted in passing that the LV interaction does not affect the superpotential. To understand this, note that the LV coupling k µν appears only in terms including both θ and θ † ; therefore, since the superpotential will only be integrated d 2 θ or d 2 θ † , k µν will never appear in the action in a term born of the superpotential. Thus, the LV interactions are best thought of as part of the Kähler potential in the BK-construction.
When the full Lagrangian for the Lorentz-violating Wess-Zumino model with one chiral multiplet is written by adding up the various pieces of the Lagrangian (equations (2.5) and (2.4)) in conventional notation or by using the normal superfield Lagrangian (2.2) but with the modified LV superfields, the resulting theory is not quite invariant under normal SUSY transformations [1]. If one modifies the superalgebra and SUSY transformations by the same prescription of "twisting" the derivative operator, then the modified Lagrangian
JHEP01(2014)134
is invariant (up to total derivative) under the modified SUSY transformations [1]. In summary, the Lagrangian L = L W Z + L LV is invariant under SUSY generators Q and Q † with superspace representations and anti-commutation relation: where σ 0 andσ 0 are each the 2 × 2 identity matrix, σ i is the ith Pauli spin matrix, andσ i = −σ i . We will strive to avoid the need for tracking spinor indices as much as possible, but when unavoidable we follow [15]. In brief, undotted Greek indices from the beginning of the alphabet (α, β, . . .) denote left-handed Weyl spinor indices while their dotted counterparts denote right-handed Weyl spinor indices (α,β, . . .). Spinor indices are implicitly raised and lowered as needed with the two-index Levi-Civita ε. Our only exception to leaving spinor indices implicit is the gauge superfield strength, W α , which we write out to distinguish from the superpotential, W . There are some trivial but potentially confusing differences in notation. Berger and Kostelecky use θ andθ where we use θ † and θ, respectively. Invariance under the modified SUSY transformations proceeds the same with Majorana or with Weyl spinors, so we do not repeat the proof of invariance from [1]. Similar constructions exist for supersymmetric gauge theories, and we will quote results from these theories only as needed. The main difference between the spinor conventions of [15] and [1] is that the former removes the need for awkward-looking left-and right-handed projection operators involving γ 5 by working with Weyl-spinors so that undaggered spinors are implicitly left-handed and daggered spinors right-handed.
The BK-construction for SUSY gauge theories is constructed similarly [6]. When writing out the vector superfield in terms of component fields, simply "twist" each spacetime index on a field or derivative operator with the δ α µ + k α µ operator. Recasting the results of [6] with Weyl-spinors instead of Dirac we get where the supercovariant derivatives are also twisted by the δ + k operator: The pure gauge Lagrangian is then the usual superspace integral of W α W α . This can be generalized to the non-Abelian case in the usual way. We emphasize that in this construction, the LV interactions live entirely in the gauge-kinetic function, in contrast to the original BK-model with only chiral multiplets, where the LV interaction was implicitly part of the Kähler potential.
Non-renormalization in Berger-Kostelecky theories
As discussed above, supersymmetric BK theories can be constructed out of modified superfields with a superpotential which is an arbitrary holomorphic function of those modified superfields [1], as with ordinary SUSY. Holomorphy of the superpotential now encodes invariance under the modified superalgebra. Seiberg's holomorphy arguments [3,9] then apply in full, as they don't reference a specific form of SUSY but more generally whatever (super)symmetry is "proxied" by holomorphy. Non-supersymmetric Lorentz-violating theories have been shown to be renormalizable in cases of pure gauge [12], in QCD [11], and in the electroweak sector [10]. Additionally, the renormalization of LV φ 4 theory has been worked out to all orders, and renormalization of LV Yukawa theories has been solved to one-loop order [17]. We conclude from this litany of examples that nothing intrinsic to LV interactions impedes the standard program of renormalization. Furthermore, BK-type LV interactions are not chiral in nature and do not introduce any additional fermions, so they are not expected to produce new anomalies. We therefore conclude that the results of [3] apply to supersymmetric BK theories. It is worth noting that a brute force calculation using supergraphs has been carried out in [13] for BK theories with diagonal k µν , confirming the original suspicions of [1] and proving non-renormalization in the special case of diagonal k µν .
Our holomorphy argument goes further and shows that all the non/renormalization results of traditional SUSY apply to all supersymmetric BK theories: the superpotential is not renormalized at any order in perturbation theory, although it may be subject to renormalization through instantons or other non-perturbative effects. Additionally, such non-perturbative renormalization can often be computed using the methods of [3]. With Wess-Zumino models, such as studied in [1], it is quite well known that Seiberg's arguments prove the tree-level superpotential is exact. We have shown that this continues in the presence of LV interactions, and this proof opens the door to further Seiberg-style analysis of BK-type LV extensions to the MSSM.
The non-renormalization theorem goes beyond the LV Wess-Zumino model. Vector superfields for BK-type theories were constructed in [6]. As with chiral superfields in [1], the prescription was to "twist" the derivative operator and all space-time indices. Also as with chiral superfields, the LV coupling appears only in terms with both θ and θ † , so the LV interaction is most properly thought of as part of the Kähler potential. The holomorphy argument is identical to the chiral superfield case. Furthermore, since practically any N = 1 SUSY theory can be built with a collection of vector and chiral superfields with various interactions, our proof of non-renormalization for BK-type theories extends quite broadly. It is important to note, however, that the LV coupling, as part of the Kähler potential in BK-type theories, is not protected against renormalization.
Robustness against coordinate transformations
A cautionary note has been pointed out numerous times [2,6] that the BK-type LV interactions can be absorbed into the metric by the coordinate transformation x µ′ = x µ − k µ ν x ν . It is argued in [6] that this coordinate transformation causes Lorentz-violation to manifest JHEP01(2014)134 itself in peculiarities of the coordinate system, namely non-orthogonality. Nevertheless, BK-type LV interactions could be realized outside the metric in a different setting. Any theory with extended SUSY and multiple sectors, one with BK-type LV and one respecting Lorentz symmetry, would be immune to complete removal of the LV interaction. In such a setup, the coordinate transformation to undo the LV interaction in one sector would reintroduce it in the other sector. A simple demonstration is N = 2 gauge theory with one hypermultiplet where BK-type LV interactions exist for only one of the two N = 1 chiral multiplets that comprise hypermultiplet. The BK-type LV interaction would partially break the SUSY down to N = 1, but attempting to undo the LV interaction with a coordinate transformation would then swap the roles of the two multiplets. Similar constructions have been outlined for N = 1 supersymmetry in [6] and in [13] where the two sectors interact only via soft SUSY-breaking terms. Analogous constructions could be used to partially break N = 4 to either N = 2 or N = 1.
We emphasize that using BK-type LV interactions to partially break extended SUSY results in a theory with manifest Lorentz violation. Furthermore, the details of both nonrenormalization and energy positivity are largely unchanged in the extended SUSY scenario. Thus, we can view the original N = 1 Berger-Kostelecký construction as a laboratory for exploring universal features of this class of Lorentz violating supersymmetric theories.
We speculate that Seiberg's seminal results [20] for N = 2 and N = 4 theories 2 will also continue to hold (in a sense) for BK-type theories. These theories with Lorentz violation in extended SUSY were first constructed in [6]. There are countless examples in the literature of theories that break N = 4 → N = 2, N = 4 → N = 1, or N = 2 → N = 1 where the broken theory inherits many useful properties from the unbroken theory, so partial breaking BK-type LV interactions should likewise inherit many features from the unbroken theory. Our reasons are twofold: first, analyticity/holomorphy is the centerpiece of Seiberg's arguments, and we have shown that these arguments are unchanged by BKtype Lorentz violation. Second, as discussed above, uniform BK-type Lorentz violation is equivalent to a change of coordinates, and it does not seem credible that a change of coordinates, however peculiar and non-orthogonal, could introduce running couplings into a theory well known to be exactly conformal. This would be tantamount to an anomaly in the rescaling symmetry, which does not exist.
One might expect that an LV theory could develop unusual behavior rendering the powerful methods of [3] inapplicable, but such concerns prove groundless. For example, LV theories generically exhibit some form of instability at Planck-scale energies. Fortunately, these are reasonably well understood in the LV literature and can usually be dealt simply by taking the LV theory to be an effective theory with a UV-completion where Lorentz symmetry is restored at some sub-Planckian scale [5]. As long as the cutoff scale for the effective theory is sufficiently below the scale where instabilities develop, Lorentz symmetry is restored long before any instability can develop, as has been thoroughly explained in [5], for example. A second possibility is that modifying the superalgebra will render it inconsistent. For BK-type theories, this is not the case, but care must be taken lest the energy positivity theorem be destroyed.
Extended supersymmetry and Berger-Kostelecký Lorentz violation
We begin with a very brief review of N = 2 SUSY. For more detailed development, the reader is directed to one of the many excellent review articles available on the subject, such as [25,26]. This version of supersymmetry has 4 fermionic generators, Q a α , where α is a spinor index, and a simply labels the SUSY generators. After appropriate unitary transformations have been made to skew-diagonalize the central charges, the algebra of the supercharges is An N = 2 vector multiplet can be thought of as a standard N = 1 vector multiplet and a standard N = 1 chiral multiplet in the same representation of the gauge group. The full set of supersymmetry transformations can be deduced from the superalgebra (2.13) above, but an oversimplified heuristic is that the extra supersymmetry generators mix fields between the two N = 1 multiplets. We will use the same notation for N = 2 as we do for N = 1, with Φ denoting the N = 1 chiral superfield, V denoting the N = 1 vector superfield, and components denoted φ(x) for the complex scalar field, ψ(x) for the Weyl fermion, F (x) for the chiral auxiliary field, A µ for the real vector field, λ for the Weyl fermion, and D for the vector auxiliary field.
The N = 2 vector multiplet Lagrangian can be similarly extended from N = 1 Lagrangians. A general (not necessarily renormalizable) Lagrangian for supersymmetric gauge theories can be written as where K Φ,Φ is a general function of the chiral superfield, Φ, and its complex conjugate, W (Φ) is a holomorphic function of Φ, and W α is the gauge field-strength chiral superfield, given by If we rescale all the fields so that the vector kinetic term is 1 After eliminating the auxiliary fields in favor of their equations of motion, the N = 2 Lagrangian has the following form, expanded out in component fields [26]:
JHEP01(2014)134
where D µ is the (not super) gauge covariant derivative. To add LV interactions, we follow the prescription of [1] and "twist" any derivative that acts on φ or ψ: where we have organized the equation to emphasize the N = 1 supersymmetries. The first line of (2.16) contains all the terms for the Lagrangian of an N = 1 vector multiplet, the second an N = 1 chiral multiplet with LV interactions, and the third line contains the terms needed to combine the two multiplets into N = 2 SUSY if LV were not present. It is easy to see that this preserves gauge invariance by writing the twisted derivative as a product: As in [1] this almost preserves ordinary SUSY. In the N = 1 [1] and the unbroken N = 4 theories [6], we modify the superalgebra (suppressing spinor indices): To implement partial SUSY breaking by LV, we promote k ν µ to an operator that simply multiplies fields φ and ψ (the N = 1 chiral multiplet) but annihilates A µ and λ (the N = 1 vector multiplet). Invariance of the first two lines of (2.16) is obvious. Invariance of the third line is more subtle but can quickly be shown by using the fact that the LV coupling only appears in the variation of ψ and is imaginary, so it will show up with opposite sign in the two terms.
An alternative way to see that (2.16) preserves N = 1 SUSY is to write the Lagrangian using superfield notation: as demonstrated in, for instance, [26]. The LV interaction is hidden within the superfields themselves using the construction of [1], so the Lagrangian appears the same as the non-LV version. However, this obscures the fact that the full N = 2 SUSY of (2.18) is broken down to N = 1. This is manifest in the on-shell component form (2.16), where the kinetic term of ψ is modified by the LV interaction while that of λ is not. A useful heuristic from [26] for identifying the extra SUSY transformations of N = 2 is to make the switch λ → ψ, ψ → −λ in the SUSY transformation relations. It is clear from this that (2.16) does not satisfy full N = 2.
A manifestly non-trivial theory with BK-type Lorentz violation
We wish to reemphasize the most salient features of the theory described by (2.16). It is an N = 1 SUSY gauge theory with an adjoint chiral multiplet where LV interactions affect only the chiral sector. The coordinate transformation that would normally absorb the LV
JHEP01(2014)134
interaction into the metric in a theory with only a vector multiplet [6] or with only chiral multiplets [1] will here have the effect of moving the LV interaction from the chiral sector to the vector sector. Similar constructions are possible in N = 4 super Yang-Mills or in N = 2 theories with hypermultiplets.
Since the bulk of this paper addresses non-renormalization and energy positivity, we emphasize that the extra structure of theories with extended supersymmetry does not impair any of the N = 1 arguments. In fact, spurion analysis and constraints such as holomorphy and R-symmetry will likely introduce additional constraints on a theory using BK-type LV interactions to partially break extended SUSY. However, those constraints will depend on the particulars of the model in question. In this paper we focus only on model-independent results that apply to any model in the BK class. As such, we will work in the N = 1 theory, even though the theory is likely trivial, so as to avoid introducing any model-specific features into our results.
Energy positivity in the BK construction
Examination of the modified superalgebra relation (2.10) reveals the concern at once. The operator Q, Q † is positive definite by construction. In traditional SUSY this guarantees energy positivity by well known arguments. With the modified superalgebra of BK-type Lorentz violation, positive definiteness of Q, Q † can actually require negative energy if the components of k µν are too negative. By inspection one can see that the choice k 00 < −1, for example, will require negative energy. 3 Clearly the components of k µν must be subject to additional constraints if the positive energy theorem is to survive.
It is worth noting that ambiguities arise when defining the Hamiltonian for the Dirac equation in the presence of Lorentz violation, and that it is necessary to perform a spinor-field redefinition in order to have a hermitian Hamiltonian for Dirac particles [5]. Fortunately, the redefinition of what is meant by the "Hamiltonian" and "energy" does not impact this discussion, since the questions here relate to p 0 , the space-like p i , and the LV coupling k µν . The phrase "energy positivity" describes the p 0 ≥ 0 condition, and even after redefining spinor fields, it remains true that p 0 is equal to the Hamiltonian.
In this section we take the expectation value of Q, Q † for various generic spin-0 and spin-1/2 states and explore the constraints on k µν necessary to preserve the positive energy theorem.
Constraint from spin 1/2 particles at rest
Taking the expectation value of Q, Q † for a generic spin 1/2 state, |ψ , yields the following modified positive energy condition: We are interested in constraints on k µν such that (3.1) guarantees p 0 ≥ 0, i.e. energy positivity. We will evaluate this with the assumption that |ψ is a generic but normalized
JHEP01(2014)134
two-component spinor, parametrized as |ψ = a b . This yields When evaluated in the rest frame of the particle, the inequality becomes This expression does not lend itself easily to analysis and completely obscures the rotational symmetry of our theory (when k ν µ is taken to transform appropriately). To simplify this expression, we note that the terms of (3.3) involving the k 0 i have the structure of a dot product of two 3-vectors. Define k = k 0 1 , k 0 2 , k 0 3 and a = 2Re(a * b), 2Im(a * b), |a| 2 − |b| 2 . The vector a has unit norm since spinor |ψ is normalized. With this replacement, equation (3.3) becomes manifestly invariant under rotations: We can now more easily explore different scenarios by considering the orientation of the vector a relative to k. The case a ⊥ k gives us a constraint on k 0 0 (also mentioned above, obtained by inspection of (2.10)): where we have chosen strict inequality, since any value of p 0 would still satisfy the inequality if we chose k 0 0 = −1. Once we have fixed 1+k 0 0 to be positive, the worst case scenario arises when a is chosen to be anti-parallel to k. Satisfying (3.4) with positive p 0 then requires In other words, if k 0 0 or k violate the bounds set by (3.5) and (3.6), then there exists some spinor |ψ such that p 0 < 0 for that state in order to satisfy equation (3.1). Thus a BK theory violating either of those equations is unstable in a manner that cannot be rectified with a UV completion. Similar constraints were explored via the dispersion relation in [18], with the restriction to the case k µν = αu µ u ν . They found that |α| ≪ 1 together with u µ u µ = ±1, 0 were sufficient to ensure consistency and that the LV terms could be treated as "small corrections". We go beyond the "small correction" case here to explore more detailed constraints for future model builders that may succeed in finding additional SUSY-scale suppression of LV coupling constants.
Constraints from scalar particles
Let us now evaluate (3.1) with scalar states instead of fermions. The equation becomes A simple starting expression is obtained by evaluating this in the rest frame of the state φ, we see that p 0 ≥ 0 is guaranteed only if If the µ0 components of k violate this inequality, then any state with scalar particles necessarily has negative energy, even when the particles are at rest. Equation (3.7) can be used to obtain more general constraints by plugging boosted values of 4-momentum. This is discussed below in section 3.3.
As mentioned earlier, stringent phenomenological limits on the size of Lorentz violating couplings exist. For the non-supersymmetric Standard Model Extension, the results of recent literature are nicely summarized and tabulated in [8]. The supersymmetric LV parameter k µν is related to the non-SUSY c and k F coefficients from [8]. The most forgiving of these constraints is O 10 −10 , so consistency constraints (3.5) and (3.6) are more or less automatically satisfied in any phenomenologically interesting theory. However, should a means be found to give Berger-Kostelecky twisted SUSY-LV couplings additional suppression of order the SUSY-breaking scale (as has been done with non-twisted SUSY-LV in [2]), such a theory would need to respect these O(1) constraints.
General form of constraint
If we allow any p i to be non-zero, then the bound of (3.6) no longer applies. We must re-examine the constraint condition (3.2). We first consider, for simplicity, a particle moving in the 1-direction. Instead of (3.3), we now find (3.9)
JHEP01(2014)134
This can be simplified by use of the previously introduced vector k and a new vector that captures information about the space-space components of the second row of k ν µ . Let Then, mirroring the procedure that led to (3.4) we can reorganize (3.9) as This makes it easy to generalize to the case of arbitrary 3-momentum by introducing one such R for each space direction. Define a more general construction of R i as Then the general SUSY constraint equation for arbitrary 3-momentum is (3.13) It will simplify computations to rearrange this expression into a term which is constant for all choices of spinor and a dot product term which varies from spinor to spinor as follows: (3.14) The worst case scenario occurs when p 0 k + i p i R i is anti-parallel to a, so the strictest constraints from (3.14) are Similarly, equation (3.7) now applies in full when we consider moving scalar particles. There are two ways to further think about constraints (3.7) and (3.15): first, we can obtain the momentum by applying a particle boost (i.e. a boost that does not affect k µν ); second, we can impose a mass-shell condition on the momentum.
Boosted particles
Consider first a boost in the 1-direction, such that p ′ 0 = γp 0 and p ′ 1 = −vγ, where γ = 1/ √ 1 − v 2 as usual. Under this boost, equation (3.15) becomes The generalization to arbitrary boosts is straightforward but unilluminating. Consistency of the twisted superalgebra then demands the components of k µν are chosen so that no choice of boost speed v violates inequality (3.16) and its generalizations unless v is high enough that the UV completion of the LV theory should be used, i.e. if γp 0 is greater than the cutoff scale. We find it convenient to think about this in the following way: k 00 sets a scale for the upper limit of the absolute value of the other components of k µν
Enforcing the mass shell or dispersion relation
Another important feature of BK-type LV theories is the modification of the dispersion relation of particles due to Lorentz violation [5] . Instead of p 2 = −m 2 , the appropriate relation is as found by looking at the propagator of the fields in [5]. Lorentz violating terms in the Lagrangian change the pole of the propagator in precisely the same way as simply applying the "twisting" rule of thumb to the traditional relation. We may profitably think about this as a modification to the mass shell condition. Where we normally think of the on-shell condition as −m 2 = −p 2 0 + p 2 (with p denoting the space-like components of momentum), the relationship is now much more complicated, with direction-dependent corrections to the old terms (arising from the diagonal elements of k µν ) and the addition of cross-terms (arising from off-diagonal elements of k µν ).
Consider, for simplicity, a particle moving purely in the 1-direction. In a Lorentz invariant theory the mass shell condition would require p 1 = ± p 2 0 − m 2 . In a BK-type LV theory, that relation becomes − 4k 01 + 2 k 2 01 ± 4k 01 + 2 k 2 01 2 + 4 1 + 2k 11 + k 2 11 × (3.18) where k 2 µν = k µα k ν α . Note that similar but less general constraints from the dispersion relation have been obtained in [18] for the form of k µν considered there.
For a particle moving in a fixed direction, the mass shell condition can be used as a constraint to eliminate one of the space-like components of momentum in favor of an expression similar to (3.18).
A general boost would be parameterized by the three components of boost velocity, v i , subject to the constraint v 2 1 + v 2 2 + v 2 3 ≤ 1. The mass shell condition could be used to eliminate one of these degrees of freedom in favor of a constraint of the form of (3.18). The resulting expression is complicated and unilluminating. A better use of this constraint in model-building would be to first propose a choice of k µν then check to see whether (3.15) and (3.7) can be violated for some on-shell choice of momentum.
An alternate view on the positive energy constraints
The results from section 2.2 on the non-renormalization theorem had a reassuring interpretation when viewed from the complementary perspective of the transformation that "undoes" the LV interaction in a single-sector non-extended SUSY theory: x µ′ = x µ − k µ ν x ν . It would be very disturbing indeed if a simple linear coordinate transformation invalidated the non-renormalization theorem or the positive energy theorem, unless the coordinate transformation was singular or otherwise illegal. An obviously illegal choice of k 00 = −1 marks a theory that transparently violates SUSY's positive energy theorem. Viewed as a coordinate transformation, it is equally obvious that the transformation is singular if any diagonal element of k equals −1. However, k 00 < −1 continues to violate the positive energy theorem, whereas the coordinate transformation is no longer singular, but would change the signature of the metric. A natural first guess is that legal choices of k µν correspond to coordinate transformations that preserve the signature of the metric, or even the signs of all the diagonal entries. Enforcing this condition requires, for example, which is not obviously related to the other constraints on energy positivity from this section. We conjecture that some appropriate condition exists for k µν when viewed as a coordinate transformation that captures both the rest frame constraints as well as the boosted particle constraints.
Review of Nibbelink-Pospelov construction
The approach of Nibbelink and Pospelov (NP) does not alter the superalgebra. Rather, they construct LV operators that explicitly break the boost part of the superalgebra but preserve the subalgebra generated by translations and supercharges only [2]. Their construction is native to superspace and follows the usual convention of a holomorphic superpotential and non-holomorphic Kähler potential. As with the BK construction, Nibbelink and Pospelov work with 4-component Dirac spinors in the language of Wess & Bagger [14]. We apply the same translations to the conventions of [15] as we did with the BK-construction. Nibbelink and Pospelov classify the possible types of LV operators consistent with exact SUSY up to dimension 5. We list here for reference those LV operators relevant for SUSY gauge theories. Charged chiral superfields have only a single Kähler potential term at dimension 5 (and none at lower dimensions) [2]: The gauge sector has one dimension 4 Kähler term [2]: and three superpotential or gauge-kinetic terms [2]:
JHEP01(2014)134
where parentheses denote symmetrization of indices, and the gauge super field strength W α is given by We take pains to distinguish the holomorphic superpotential W from the gauge superfieldstrength W α by always including the spinor index of the latter, even when contracted. To accommodate this convention, we have used the notation of [7] for this operator but made the spinor indices explicit. The operators of (4.3) all represent modifications to the gauge-kinetic function. As [2] explains, only the last term of (4.3) is non-vanishing for SQED or SQCD, and even that is only true for SQED. The gauge super field strength W α is gauge invariant only for a U(1) group, but replacing the ordinary spacetime derivative with a covariant derivative destroys the chirality condition, making the term not supersymmetric.
Non-renormalization in NP-type theories
As in the BK-construction, holomorphy is key. With the NP-construction, the superalgebra is unmodified, so holomorphy of the superpotential encodes invariance under traditional SUSY. Thus, even with NP-type LV interactions, the superpotential is immune to perturbative renormalization, and even non-perturbative renormalization is subject to tight controls. None of the well-known LV operators in the NP-construction can be added to the superpotential of SQCD, so we do not exhibit an exact superpotential calculation. An SQCD model including NP-type LV interactions in the gauge superpotential or in the Kähler potential would at most alter the running of the gauge coupling, changing Seiberg's results only by altering the coefficient of the beta function.
Weinberg [16] extends Seiberg's proof in three important ways: first, he extends the SUSY non-renormalization theorems to non-renormalizable theories, so one need not worry that higher dimension LV operators ruin these familiar results. Second, he clearly demonstrates that superpotential terms dependent on the gauge superfield strength W α are also protected against perturbative renormalization. Third, he proves that FI terms in U(1) theories are also non-renormalized, as long as the U(1) charges of all the chiral superfields add to zero. This condition is already a well-known necessity for anomaly cancellation, and is included in the SQED model considered by [2] as well as the richer models of [7].
Weinberg's argument about the FI term has to do with gauge invariance. After promoting the FI coupling to a superfield, the FI term would not be gauge invariant if the coupling depended on any other superfield. The only gauge-invariant correction to the FI coupling arises from a diagram that vanishes when the charges are chosen as above [16].
These conditions do not change in the presence of NP-type Lorentz violation. This provides a very elegant alternative proof of the result from [7] that NP-type LV interactions do not induce a potentially divergent FI term. If the FI term is not present in the bare Lagrangian, it will not be induced in the effective Lagrangian, even by LV interactions. Conversely, if the bare Lagrangian includes an LV coupling in an FI term, that coupling will be protected against perturbative renormalization. This can be seen simply via holomorphy, without the need for computing divergent loop diagrams.
JHEP01(2014)134
Additionally, SUSY gauge theories are subject to powerful restrictions on the renormalization of the gauge coupling. When the fields are normalized with "holomorphic" coupling, the famous NSVZ β-function is only renormalized at one-loop order. When the fields are rescaled to canonical normalization, the β-function has a slightly different form but is exact. The original results were obtained in [21,22]. An alternative derivation of the same results was obtained in [23]. An illuminating discussion of the alternative computation can be found in [24]. However, Weinberg offers an interesting proof of the one-loop-only renormalization result that holds for arbitrary superpotential interactions and arbitrary gauge-kinetic function couplings [16]. We briefly summarize his technique here and extend it to Lorentz-violating theories.
Weinberg begins by using Seiberg's spurion prescription, treating new coupling constants as background superfields with appropriate transformation properties for maintaining all symmetries of the lagrangian. Of particular importance will be the R-charge of the coupling and the nature of the coupling as either a chiral or vector superfield. New interactions in the Kähler potential must have vector superfield couplings, and as such these new coupling constants can only appear in non-perturbative corrections to the chiral pieces of the action, namely the superpotential and the gauge-kinetic function. So the Kähler LV interactions of (4.1) and (4.2) cannot contribute to perturbative renormalization of the effective superpotential or gauge-kinetic function. Weinberg then counts the number of graphs of different types that could contribute to a term in the effective superpotential and/or gauge-kinetic function. He considers graphs with E V external gaugino lines and an arbitrary number of external Φ lines, Φ being any component field of the chiral superfield(s) charged under the gauge group, and I V internal V-lines, V denoting any component of the vector superfield. Let A m denote the number of pure gauge vertices with m ≥ 3 V-lines, which will bring factors of the holomorphic coupling, τ . Let B mr denote the number of vertices with m ≥ r V-lines and any number of Φ lines which arise from extra terms in the gauge-kinetic function with r factors of the vector superfield strength, W α . In the tree-level Lagrangian, the coefficient of such interactions is denoted f r , so each of these diagrams will bring a factor of the appropriate f r . Finally, let C m denote the number of vertices with 2 Φ lines and m ≥ 1 V-lines, arising from the traditional Kähler potential term, Φ † e −V Φ. Matching gauge lines with the various types of vertices yields Weinberg's relation: (4.5) Diagrams corresponding to the A m and C m terms arise from standard terms in SUSY gauge theories. Those corresponding to B mr terms come from new interactions in the gauge kinetic function. Since W α has R-charge +1, the couplings f r must have R-charge 2 − r for the new term to have the requisite R-charge of +2 for gauge-kinetic terms. Since the gaugino component of the vector superfield has R-charge +1, the focus on graphs with E V external gauginos enforces a relationship between E V and the coefficients B mr appearing in the diagram, which in turn allows one to compute the number of factors of the gauge coupling in terms of the A, B, C coefficients. Enumerating the possibilities shows that only five distinct choices of the A, B, C coefficients are legal, all of which contribute graphs independent of the gauge coupling, and all of which allow only a single non-zero JHEP01(2014)134 coefficient. One of the five choices allows for a single B mr = 1, which is just the tree-level contribution; the other choices only turn on an A or C. Exhausting this enumeration shows that the gauge coupling only receives perturbative corrections at the one-loop level and that all coefficients f r in the gauge-kinetic function receive no perturbative corrections, apart from wavefunction renormalization which is not addressed by Weinberg's argument. Thus all LV coupling constants for superpotential or gauge-kinetic function terms in the NP-construction are protected against perturbative renormalization. In [7], β−functions were computed to first-order in LV for all LV couplings relevant to N = 1 SQED. Our results are in perfect agreement with their findings for the beta function T λ µν from (4.3). We take this one step further, showing that to any order in the LV couplings, this term is only subject to wavefunction renormalization.
Berger-Kostelecký models with charged matter
Now that we have fleshed out Weinberg's argument, we can apply it also to the BK-construction for pure gauge theories. Since the LV coupling enters into redefinitions of the vector superfield, it will be part of the gauge-kinetic function, and thus protected against perturbative corrections. It is an interesting puzzle whether the BK-construction can accommodate charged matter, as the gauge-and matter-sector LV couplings appear to have such wildly different renormalization properties. It may simply not be possible. Another, more tantalizing possibility is that quantum effects might force the LV couplings to differ in the two sectors, thereby breaking supersymmetry. A third possibility is that gauge-chiral interactions will cancel against pure chiral interactions and ultimately protect the LV coupling, k µν , despite its presence in the Kähler potential.
A comment on the possibility of SUSY-scale suppression of LV couplings
There is some discrepancy in the literature over whether SUSY breaking effects can lead to additional suppression of Lorentz violating couplings. When Lorentz violation in a Wess-Zumino model occurs via the cutoff regularization procedure as studied in [19], it is found quite generally that quantum effects rescale Lorentz violating couplings by a term proportional to (M/Λ) 2 log(M/Λ), where M is the SUSY scale and Λ is the Lorentz-breaking scale. On the other hand, the results of [13] indicate that SUSY-scale suppression of LV couplings is incompatible with gauge theories and can only occur with neutral chiral superfields.
While each of these works looks at different models of Lorentz violation, the "no-go" results of [13] for LV SUSY gauge theories are compatible with our results. Generically, we find that LV interactions in the superpotential or the gauge-kinetic function are protected against perturbative renormalization by an extension of Seiberg's holomorphy arguments. Those concerned with fine-tuning problems will need to consider more exotic models than the original BK-and NP-constructions. The model we put forward in (2.16) is one such candidate. In that theory, the LV interactions affect only the adjoint chiral multiplet and not the gauge multiplet itself, and since the LV interaction lives in the Kähler potential for chiral superfields, it will not be protected against running. This toy model serves as "proof of concept" both that charged fields can exhibit LV interactions and that LV couplings could be brought within phenomenological limits by additional scale suppression from SUSY breaking effects.
JHEP01(2014)134 5 Conclusion
Lorentz symmetry is not a necessary ingredient in Seiberg's holomorphy arguments. Thus, Lorentz-violating SUSY theories of both Berger-Kostelecky and Nibbelink-Pospelov type preserve all the divergence-cancellation and non-renormalization aspects of traditional SUSY theories.
NP-type theories always preserve SUSY's positive energy theorem since they do not alter the superalgebra, and LV couplings in superpotential terms are protected against perturbative renormalization. While the LV couplings are still subject to non-perturbative effects, this is limited to wave-function renormalization, and Seiberg's techniques for obtaining exact quantum superpotentials continue to apply. Kähler potential LV interactions are not protected. Kähler potential as well as gauge field-strength superpotential LV interaction terms may alter the gauge-coupling beta function and in turn change some of the constants in the exponents of Seiberg's exact formulas, but in the absence of matter LV terms in the superpotential, Seiberg's exact results are altered only in a trivial way [3]. The NP construction of LV superpotential terms does not appear compatible with gauge invariance for any but abelian gauge theories, so LV terms that might have a more dramatic impact on Seiberg's results are disallowed [2,7].
In BK-type theories, the single LV interaction is built into a redefinition of the superfields and the superalgebra itself. The construction is such that the LV coupling constant will only survive Grassman integration in the Kähler potential, so the superpotential remains unchanged. Seiberg's holomorphy arguments guarantee that the superpotential in BK-type theories remains non-renormalized (perturbatively), but this offers no protection to the LV coupling constant itself in Wess-Zumino models.
The positive energy theorem only continues to hold if the LV coupling, k µν obeys constraints (3.5), (3.6), (3.8), in addition to (3.7) and (3.15), which must hold for arbitrary on-shell momentum below the cutoff scale of the effective theory. While these constraints are many orders of magnitude less stringent than current phenomenological limits, they become important in models with O(1) LV couplings that are suppressed as we run to lower energies. As noted above, such suppression requires different LV couplings for gauge and matter multiplets, which typically requires some level of SUSY breaking itself.
We have laid out such an example whereby BK-type LV interactions can be used to partially break extended SUSY, rendering a theory that possesses both (some) unbroken SUSY and BK-type LV interactions that are robust against coordinate transformations. We currently know of no such model that has been fully fleshed out in the literature. We save detailed investigation of such models for the future, noting for the time being that the positive energy theorem has essentially the same form regardless of the degree of supersymmetry. Our results here for the possibly trivial N = 1 BK-type models serve as a model-independent baseline set of constraints for non-trivial BK-type models using LV to partially break extended SUSY. Each specific non-trivial realization of will likely carry additional, model-specific constraints.
This work opens the door to the application of powerful modern techniques in supersymmetry, such as Seiberg's holomorphy arguments, to theories with Lorentz-violation. To our knowledge, the main body of the Lorentz violation literature has not yet employed JHEP01(2014)134 these techniques. 4 It would be interesting to extend the "mixed sector" BK-type approach described in (2.16) to N = 4 as well as extending NP-type theories to that degree, in order to compare with the general AdS/CFT computations of [27]. It will also be particularly interesting to consider BK-type Lorentz violation in the context of N = 2 gauge theory with matter, where the Lorentz violation affects only the matter multiplets. The machinery of Seiberg-Witten theory should apply, with SUSY breaking originating from the LV couplings rather than mass terms. | 11,452 | 2014-01-01T00:00:00.000 | [
"Physics"
] |
Hierarchical micro/nanostructured silver hollow fiber boosts electroreduction of carbon dioxide
Efficient conversion of CO2 to commodity chemicals by sustainable way is of great significance for achieving carbon neutrality. Although considerable progress has been made in CO2 utilization, highly efficient CO2 conversion with high space velocity under mild conditions remains a challenge. Here, we report a hierarchical micro/nanostructured silver hollow fiber electrode that reduces CO2 to CO with a faradaic efficiency of 93% and a current density of 1.26 A · cm−2 at a potential of −0.83 V vs. RHE. Exceeding 50% conversions of as high as 31,000 mL · gcat−1 · h−1 CO2 are achieved at ambient temperature and pressure. Electrochemical results and time-resolved operando Raman spectra demonstrate that enhanced three-phase interface reactions and oriented mass transfers synergistically boost CO production.
S3
In addition, the Ag HF electrode was also treated with other different electrochemical oxidation times (30 s, 60 s, 120 s, 180 s, 300 s) at the fixed potential of 2.0 V (vs. Ag/AgCl), followed by the same electrochemical reduction at the fixed potential of -0.5 V (vs. Ag/AgCl) with the fixed reduction time (600 s) to obtain a series of activated Ag HF electrodes. After an overall comparison of the CO2 electroreduction performances, the activated Ag HF electrode obtained from Ag HF-redox-240s had the best electrocatalytic activity . Therefore, the activated Ag HF electrode in the main text and Supplementary Information refers to the electrode that underwent 240 s of oxidation and 600 s of reduction unless otherwise stated.
Synthesis of activated Ag foil
Ag foil and activated Ag foil working electrodes were used as references. A piece of Ag foil was ultrasonically cleaned in acetone and ethanol, and after drying in air, the side and back of the Ag foil were sealed with epoxy to obtain a Ag foil electrode with an exposure geometric area of 2 cm × 2 cm. And the synthesis procedure for activated Ag foil was the same as that of activated Ag HF. Thus, the activated Ag foil electrode also possessed the same exposed geometric area of 4 cm 2 .
Gas Permeation Tests
Gas permeation tests were performed with a custom gas permeability device that could record the permeability of H2, He, CH4, N2 and CO2 through the hollow fiber under different transmembrane pressure drops. According to the Yasuda-Tsai equations 1,2 , the permeability coefficient K of porous hollow fiber can be expressed as follows: = + − Eq.
(3) where K0 is the Knudsen permeability coefficient, B0 is the geometric factor of hollow fiber wall, P is the mean pressure on both sides of the fiber, and η is the viscosity of N2 gas. The values of K0 and B0 can be calculated from the slope and intercept of the plot of K to P. The effective porosity ( /q 2 ) can also be estimated by the Knudsen permeability coefficient K0 in Equation (4): 2 = ( . ) ( ) Eq. (4) where is the porosity, q is the tortuosity factor, R is the gas constant, T is the temperature, and M is the molecular weight of the gas.
CO2 Electroreduction and Product Quantifications
The potentiostatic electroreductions of CO2 over all electrodes were performed at ambient temperature and pressure on the Biologic VMP3 potentiostat using the gas-tight electrolysis cell, which comprised two symmetrical compartments made of quartz glass with an inner height of 5.0 cm, an inner length of 5.0 cm and an inner width of 1.5 cm (Supplementary Figs. 11b-d).
The cathodic and anodic compartments were separated by a Nafion 117 membrane, and the electrolysis cell was equipped with a KCl-saturated Ag/AgCl reference electrode in the cathodic compartment and a platinum mesh counter electrode in the anodic compartment.
CO2-saturated KHCO3 aqueous solutions with different concentrations were used as the electrolyte solutions, which were cycled in both the cathodic and anodic compartments at a fixed flow rate of 20 mL·min -1 by using two identical peristaltic pumps (Jihpump BT-50EA 153YX). Prior to the experiments, the electrolysis cell was vacuumized and then purged with CO2 for 30 min.
S4
Under the similar electrolysis conditions, CO2 flow rate of lower than 10 mL•min -1 resulted in very low CO faradaic efficiencies and CO2 conversion rates. While both the CO faradaic efficiency and CO2 conversion rate increased rapidly when CO2 flow rate was larger than 10 mL•min -1 , and up to 60 mL•min -1 . Further increasing the CO2 flow rate to more than 60 mL•min -1 led to the slow increase of CO faradaic efficiencies and the rapid decrease of CO2 conversion rate. In order to obtain both appropriate CO faradaic efficiency and CO2 conversion rate, the CO2 flow rate was fixed at 60 mL•min -1 during CO2 electroreduction unless otherwise stated ( Supplementary Fig. 13). In the situations with very large currents (>400 mA), the Biologic VMP3 potentiostat was connected to a VMP3 booster chassis with an option of 10 A current.
The retention time values of CO2 through the different electrodes have been estimated obeying the equations below based on their structure and porosity. That is the CO2 retention times through Ag HF and activated Ag HF are 31.6 and 30.5 ms, respectively. The retention time of the electrodes was calculated as follows: Eq. (6) = π ( ) Eq. (7) where τ is the retention time, Vwall is the pore volume of hollow fiber wall, Vin is the volume of hollow fiber inner channel, νCO2 is the flow rate of CO2, n is the number of hollow fiber tubes, is the porosity, q is the tortuosity factor, Dout is the outer diameter of hollow fiber, Din is the inner diameter of hollow fiber, and L is the length of hollow fiber.
The theoretical limits of CO partial current density, i.e., jCO,lim(gas) and jCO,lim(sol) were calculated by the following two Equations (8) and (9), respectively. The former jCO,lim(gas) is the theoretical limit of CO partial current density with all gas-phase CO2 molecules input into the electrolysis cell were electroreduced to CO. The latter jCO,lim(sol) is the theoretical limit of CO partial current density with all CO2 molecules dissolved in the electrolyte solution were electroreduced to CO 3-5 . , ( ) = Eq. (8) , ( ) = Eq. (9) where α is the number of transferred electrons for producing CO, F is the Faraday constant (96485 C•mol -1 ), S is the electrode area (4 cm 2 ), km is the mass transfer coefficient (km =1 to obtain the value of jCO,lim(gas)), νCO2 is the flow rate of CO2, Vm is the gas mole volume (24.5 L•mol -1 at 25 °C , 101.325 kPa), D is the diffusion coefficient of CO2 (2.02 × 10 -9 m 2 •s -1 ), c is the saturated bulk concentration of CO2 ( 34 mol•m -3 at 25 °C , 101.325 kPa), δ is the diffusion layer thickness, which is estimated to be 14.0 μm using the rotating disk electrode model with the Levich equation 3 .
The experimental CO2 conversion rate was determined in accordance with the following equation: = × % Eq. (10) The theoretical limit of CO2 conversion rate was calculated using Equation (11) below: For the long-term performance test of CO2 electroreduction, the fixed potential of -0.83 V (vs. RHE) was applied to the activated Ag HF electrode. The electrolyte was CO2-saturated 1.5 M KHCO3 and the CO2 flow rate was kept at 60 mL•min -1 . The catholyte and anolyte were cycled at a flow rate of 20 mL•min -1 , accompanied by the supplement of ultrapure water to maintain a constant concentration of 1.5 M KHCO3. The exhaust from the cathodic compartment was measured by the online gas chromatography (GC) during the whole 170-hour test.
All the current densities in the main text and Supplementary Information were based on the electrode geometric area.
Gas-phase products from the cathodic compartment were directly vented into a gas chromatograph (GC-2014, Shimadzu) equipped with a Shincarbon ST80/100 column and a Porapak-Q80/100 column using a flame ionization detector (FID) and a thermal conductivity detector (TCD) during the electroreduction tests and online analysis. A GC run was initiated every 15 min. To ensure the accuracy of the gas-phase products, when the CO concentration in the exhaust was lower than 10%, the FID detector was used for CO quantification; when the CO concentration in the exhaust was higher than 10%, the TCD was used as the main detector of CO, and the FID was used as the auxiliary detector. The TCD quantification was used for H2 quantification. All faradaic efficiencies reported were based on at least five different runs. High purity argon (99.999%) was used as the carrier gas of GC. In all the potentiostat electrolysis tests, H2 and CO were the only gas-phase products, and their faradaic efficiencies were calculated as follows: where Cproduct is the concentration of the gas-phase products (ppm), νCO2 is the flow rate of CO2 (60 mL·min -1 ), t is the reaction time, α is the number of transferred electrons for producing CO or H2, F is the Faraday constant, Vm is the gas mole volume, and Q is the total quantity of electric charge. The possible liquid-phase products from the cathodic compartment after potentiostatic electrolysis for 1 h were analyzed using an offline GC-2014 (Shimadzu) equipped with a headspace injector and an OVI-G43 capillary column (Supelco, USA). There were no liquidphase products detected by offline GC. The postreaction catholyte solution was also further analyzed by using a 600 MHz nuclear magnetic resonance (NMR) spectrometer (Bruker). After an hour of electrolysis, an aliquot of catholyte solution (0.5 mL) was mixed with 0.1 mL of DSS (6 mM) and 0.1 mL of D2O, which were used as internal standards. No liquid-phase product was detected by NMR. The diagram for the detailed fabrication procedures of Ag HF is shown in Supplementary Fig. 1, and the related experimental descriptions can be found in the Materials and Methods section. The whole fabrication of Ag HF involved only basic laboratory apparatuses under relatively mild conditions. Notably, the above fabrication process produced one batch of Ag HF with a total length of more than 180 meters, demonstrating its high potential for scalable applications.
Supplementary Figure 2 | Electrochemical redox activation treatments of the Ag HF electrode to obtain different activated Ag HF electrodes, and their CO2 electroreduction performance. a, Electrochemical oxidation and reduction current density curves during different oxidation treatments (from 30 s to 300 s) at 2.0 V (vs. Ag/AgCl) in 0.5 M KHCO3, and the subsequent respective reduction treatments for 600 s at -0.5 V (vs. Ag/AgCl) in the same electrolyte solution. b, Comparison of the CO partial current densities of the Ag HF electrode and the different activated Ag HF electrodes. The CO and H2 faradaic efficiencies and total current densities over c, the Ag HF electrode and d-i, the different activated Ag HF electrodes in CO2-saturated 1.5 M KHCO3. Error bars in b-i were obtained from the average of six individual tests.
The Ag HF electrode was subjected to different oxidation treatments (30 s, 60 s, 120 s, 180 s, 240 s and 300 s) and subsequent respective reduction treatments for 600 s to obtain a series of activated Ag HF electrodes (see the aforementioned Preparations section for details), denoted as Ag HF-redox-30s, Ag HF-redox-60s, Ag HF-redox-120s, Ag HF-redox-180s, Ag HF-redox-240s,
S8
and Ag HF-redox-300s, respectively. From the oxidation and reduction current density-time curves of these activated Ag HF electrodes ( Supplementary Fig. 2a), the amounts of accumulated charge at different oxidation times were proportional to those in the corresponding reduction stages, implying the redox reactions highly obeyed aforementioned Equations (1) and (2), respectively.
The comparison of CO2 electroreduction performance over the Ag HF electrode and all the activated Ag HF electrodes is shown in Supplementary Fig. 2b, and their detailed CO and H2 faradaic efficiencies as well as the total current densities are also presented in Supplementary Figs. 2c-i. One can see that the CO partial current density showed obvious superiority at the more negative potentials with increasing oxidation time. And the Ag HF-redox-240s electrode delivered the highest jCO among all the activated Ag HF electrodes. Therefore, the activated Ag HF electrode in the main text and Supplementary Information referred to the Ag HF-redox-240s electrode unless otherwise stated. The electrochemical oxidation and reduction treatments of Ag HF electrode to obtain the activated Ag HF electrode (Ag HF-redox-240s) were monitored by the time-resolved operando Raman spectroscopy. As shown in Supplementary Fig. 3, the Raman spectrum of Ag HF (at 0 s of the oxidation stage) showed the peaks at approximately 1012, 1360, 1603 and 1660 cm -1 , which were assigned to bicarbonate ions (HCO3 -) adsorbed at the electrode surface as the νHO-COO-, νsHOCOO-, νasHOCOO-and δHO-H (in H2O) modes, respectively, according to previous reports 6,7 . Once the oxidation reaction occurred (as short as 1 s of the oxidation stage), new Raman peaks appeared at 682, 1047, 1296 and 1517 cm -1 , which could be assigned to the as-formed Ag2CO3 species as βO-C-O, νCO3 2-, νsO-C-O and νasO-C-O, respectively, 8,9 besides bicarbonate ion related peaks. With increasing oxidation time (2 s to 7 s), the intensities of the Ag2CO3-ralted peaks increased rapidly, and reached the maximum at 8 s during the oxidation stage. Further increasing the oxidation time (8 s to 240 s), the intensities of the Ag2CO3-related peaks remained constant. Combining the electrochemical oxidation current density curve ( Supplementary Fig. 2) and the Raman observations, it was found that the oxidation reaction of Ag to Ag2CO3 occurred very quickly on Ag HF surface at the initial stage, and then expanded to the subsurface or substrate to some degree, which was responsible for the constant peak intensities while keeping the oxidation current densities of 40-120 mA· cm -2 after 8 s.
As for the subsequent electrochemical reduction process, the intensities of the characteristic Ag2CO3 peaks faded rapidly, and became very weak at 21 s of the reduction stage. The Ag2CO3related peaks were almost negligible at 24 s of the reduction stage and disappeared in the following reduction stage. Interestingly, the Raman observations on the electrochemical reduction were in consistence with the variation of the reduction current density curve ( Supplementary Fig. 2). That is the reduction current density of Ag HF-redox-240s decreased to zero after 24 s at the potential of -0.50 V (vs. Ag/AgCl) ( Supplementary Fig. 2). These timeresolved operando Raman results confirmed the transitions between Ag and Ag2CO3 compositions obeying Equations (1) and (2) during the electrochemical redox activation treatments of Ag HF to obtain activated Ag HF. Figure 4 | SEM images of the outer surface of a, Ag HF and b-g, different activated Ag HF electrodes with (a1-g1) low, (a2-g2) medium and (a3-g3) high magnifications, respectively. a, Ag HF, b, Ag HF-redox-30s, c, Ag HF-redox-60s, d, Ag HFredox-120s, e, Ag HF-redox-180s, f, Ag HF-redox-240s, and g, Ag HF-redox-300s.
S12
The outer surface morphologies of Ag HF and activated Ag HF with different redox pretreatments were investigated by SEM observations. As shown in the Supplementary Fig. 4a, the outer surface of Ag HF exhibited the abundant micrometer-sized pores with relatively smooth substrate. Once the electrochemical redox activation treatment even a slight oxidation as short as 30 s was applied, the outer surface morphology of the Ag HF-redox-30s changed greatly ( Supplementary Fig. 4b). Numerous nanorods covered the outer surface of Ag HF-redox-30s, making the pores indistinct. With increasing oxidation time, the outer surfaces of activated Ag HF electrodes exhibited increasing surface coarseness and decreasing diameter of the as-formed nanorods (Supplementary Figs. 4c-g). Note that these nanorods partly ordered and gathered at the outer surface of Ag HF-redox-240s ( Supplementary Fig. 4f). These hierarchical micro/nanostructures comprising partly ordered nanorods on the surface and micrometer-sized pores beneath the surface may maximized the three-phase reaction interfaces, resulting in the best electrocatalytic activity ( Supplementary Fig. 2). The morphologies of pristine Ag powder and as-prepared Ag HF were investigated by SEM observations, as shown in Supplementary Fig. 5. The particles in the Ag powder were spherical with a relatively even particle size (~ 60 nm), but they appeared to aggregate ( Supplementary Fig. 5a). In contrast, both the inner and outer surfaces of Ag HF showed a well-integrated substrate without spherical or granular particles ( Supplementary Fig. 5b), implying that the silver particles were completely sintered and fused to form an integral hollow-fiber base during the fabrication process, thereby benefiting mechanical strength reinforcement and electron transfer. Both the outer and inner surfaces of Ag HF possessed abundant irregular micrometer-sized pores with a pore size of 5-20 µm. TEM was used to further investigate the morphologies of Ag powder and Ag HF, as shown in Supplementary Fig. 6. From the low-magnification TEM image, the particles in the Ag powder were spherical with a particle size range of 20-120 nm ( Supplementary Fig. 6a), in agreement with the SEM observation ( Supplementary Fig. 5a), while the fused nanorod-like particles obtained by scraping off the outer surface of Ag HF were presented ( Supplementary Fig. 6b). Furthermore, the high-magnification TEM image showed that the lattice spacing of the Ag powder were 2.36 and 2.04 Å, corresponding to the (111) and (200) planes of metallic Ag, respectively ( Supplementary Fig. 6a). Ag HF also presented a lattice spacing of 2.36 Å, corresponding to the Ag (111) plane ( Supplementary Fig. 6b). These results indicate that Ag HF had the same metallic Ag phase as the pristine Ag powder. The cross-section morphologies and pore structures of Ag HF and activated Ag HF were studied by SEM, as shown in Supplementary Fig. 7. Ag HF and activated Ag HF possessed similar wall thicknesses of ~50 µm and outer diameters of ~425 µm; additionally, their pores in the wall were interconnected (Supplementary Figs. 7a1, b1). Different from the symmetrical outer and inner regions of Ag HF ( Supplementary Fig. 7a2), partly ordered nanorods gathered at the outer region of activated Ag HF, presenting a distinct configuration of hierarchical micro/nanostructures ( Supplementary Fig. 7b2) derived from the electrochemical redox activation treatments. That is the CO2 flow rate was kept at 10 mL•min -1 during the activation treatments and the redox reactions (referring to the aforementioned Equations (1) and (2)) occurred only at the outer region of the hollow fiber wall. Gas permeation was used to study the structural features of Ag HF and activated Ag HF before and after the reaction, as shown in Supplementary Fig. 8. All the gas permeances of H2, He, CH4, N2 and CO2 remained almost constant at different pressure drops, and the large permeance values indicated the high permeabilities of Ag HF and activated Ag HF before and after reaction. Moreover, the gas permeances were inversely proportional to the square roots of their molecular weight (the insets in Supplementary Fig. 8), implying that the gas transport mechanisms through all the hollow fibers were dominated by Knudsen diffusion 1,2 . Furthermore, the effective porosities of Ag HF and activated Ag HF, calculated by using the nitrogen permeance data according to Equation (4), were 38% and 32%, respectively. On the basis of the porosity and structure, the CO2 retention times through Ag HF and activated Ag HF were 31.6 and 30.5 ms, respectively. In addition, the postreaction activated Ag HF possessed an effective porosity of 29% and a CO2 retention time of 30.0 ms, close to those of activated Ag HF before the reaction, implying the structural stability due to the tough framework of activated Ag HF. The phase compositions of all involved silver samples were studied by XRD. As shown in Supplementary Fig. 9, Ag foil, Ag powder and Ag HF showed three diffraction peaks at 38.1°, 44.3° and 64.4°, corresponding to the (111), (200), and (220) planes of metallic Ag (JCPDS no.04-0783), respectively. After electrochemical oxidation treatment, in addition to metallic Ag peaks, many new peaks appeared in electrooxidized Ag HF, which were assigned to the various planes of Ag2CO3 (JCPDS no. 26-0339). This result indicated that the electrochemical oxidation reaction obeyed Equation (1). By the subsequent electrochemical reduction treatment, all Ag2CO3 peaks converted back to metallic Ag peaks in activated Ag HF. In addition, the postreaction activated Ag HF also presented the same phase compositions as the activated Ag HF before the reaction. These XRD results indicated that all involved silver samples had only a metallic Ag phase with the same crystal form except for the electrooxidized Ag HF intermediate. The surface compositions of all involved silver samples were studied by XPS. As shown in Supplementary Fig. 10, the Ag 3d spectra in Ag foil, Ag powder and Ag HF showed the main Ag 3d5/2 and Ag 3d3/2 core peaks at binding energies of 368.3 and 374.3 eV, respectively, indicating metallic Ag 0 characteristics. Regarding electrooxidized Ag HF, the Ag 3d5/2 and Ag 3d3/2 peaks were at 367.8 and 373.8 eV, respectively, corresponding to the characteristic peaks of Ag2CO3 (referring to the standard spectrum of silver carbonate). This result implied that the surface of electrooxidized Ag HF was covered with Ag2CO3. Furthermore, the XPS spectra of activated Ag HF before and after the reaction suggested the same metallic Ag 0 surfaces, indicating the stable metallic Ag 0 active component during CO2 electroreduction. Supplementary Figure 11 shows optical images of the Ag HF electrode and electrolysis cell in different views and states. The electrolysis cell comprised two symmetrical compartments made of quartz glass with an inner height of 5.0 cm, an inner length of 5.0 cm and an inner width of 1.5 cm. The Ag HF working electrode consisted of ten Ag HF tubes (i.e., Ag HF array), and each tube had an exposed length of 3 cm (Supplementary Fig. 11a). The working electrode and the Ag/AgCl reference electrode were in the cathodic compartment, and the Pt mesh counter electrode was in the anodic compartment ( Supplementary Fig. 11b). The cathodic and anodic compartments were separated by a Nafion 117 membrane ( Supplementary Fig. 11c). During CO2 electroreduction, CO2 penetrated through the wall of the activated Ag HF tubes via the copper tube, forming a large amount of bubbles ( Supplementary Fig. 11d). The detailed faradaic efficiencies of CO and H2 as well as the total current densities of activated Ag HF in different KHCO3 solutions are presented in Supplementary Figs. 12a-e. As the applied potential negatively shifted, the CO faradaic efficiencies decreased, while the H2 faradaic efficiencies and the total current densities rapidly increased, especially at more negative potentials. Moreover, CO faradaic efficiencies in low concentration KHCO3 solutions were higher than those in high concentration solutions at similar potentials. Furthermore, the CO partial current density showed superior in the relatively concentrated solutions with the best performance in 1.5 M KHCO3 (Supplementary Fig. 12f). Therefore, CO2-saturated 1.5 M KHCO3 aqueous solution was chosen as the electrolyte solution for CO2 electroreduction unless otherwise stated. By varying the CO2 flow rates, which further affect the retention time and mass transfer that are related to the main structural factors, the variations of electrocatalytic performance over the activated Ag HF electrode including CO2 reduction and HER processes have been clearly presented. As shown in Supplementary Fig. 13a below, the CO2 flow rate significantly influenced the product faradaic efficiency and CO2 conversion rate over activated Ag HF electrode under the constant current density of 1.2 A•cm -2 . Only H2 was detected when no CO2 flowed through the porous electrode, indicating the dominant HER. With increasing CO2 flow rates, the H2 faradaic efficiencies monotonically decreased, and the CO faradaic efficiencies correspondingly increased resulting in a total faradaic efficiency of 100%. This implies that high local CO2 concentration generated by the sufficient CO2 flow suppressing HER while facilitating CO2 reduction. However, compared to the CO faradaic efficiency, the CO2 conversion rate exhibited different variations with respect to CO2 flow rate. That is the CO2 conversion rate increased rapidly at first with the gradually increasing CO2 flow rates, and a maximum conversion of 68% was yielded at 40 mL•min -1 with a CO faradaic efficiency of 81%. Interestingly, the CO2 conversion rate faded with further increasing CO2 flow rates, even down to 32% at 100 mL•min -1 . These results imply that CO2 reduction kinetics may also be affected by the electrode intrinsic structures besides the competitive HER.
Supplementary
Furthermore, variations on the faradaic efficiency ratio of FECO/FEH2 and the CO2 conversion rate with respect to the retention time (obtained from Equations (5), (6) and (7) on basis of the electrode intrinsic structure characteristics) under the constant current density of 1.2 A•cm -2 can be clearly seen in Supplementary Fig. 13b. The FECO/FEH2 ratio increased when the retention time decreased. Regarding the theoretical limit of CO2 conversion rate, i.e., ConCO2,lim (referring to Equation (11)), it remained at 100% in the retention time range from 183 to 46 ms (corresponding to CO2 flow rates from 10 to 40 mL•min -1 ), and then decreased rapidly with the further decreasing retention time. In fact, the experimental results of CO2 conversion rates were quite low at long retention time situations and then close to the theoretical values at short retention time situations. In order to obtain both appropriate CO faradaic efficiency and CO2 conversion rate, the CO2 flow rate was fixed at 60 mL•min -1 during CO2 electroreduction unless otherwise stated.
In addition, the theoretical limit and experimental values of CO partial current density under different CO2 flow rates were further studied. Actually, there are two kinds of theoretical limits of CO partial current density: (1) all gas-phase CO2 molecules input into the electrolysis cell are reduced to CO with a 100% conversion rate, i.e., jCO,lim(gas), which is calculated using above Equation (8); (2) all CO2 molecules dissolved in the electrolyte solution are reduced to CO, i.e., jCO,lim(sol), which is calculated using above Equation (9). As shown in Supplementary Fig. 13c, the experimental jCO values over activated Ag HF were far larger than those of jCO,lim(sol). Although the experiment results were still lower than those of jCO,lim(gas) as an extremely ideal case, the activated Ag HF delivered a maximum jCO of 1.40 A•cm -2 at 60 mL•min -1 , superior to the previous reports (Supplementary Table 1). In contrast, all jCO values over activated Ag foil (Fig. 4b) were lower than those of the theoretical jCO,lim(sol). Moreover, on basis of Equation (8), the mass transfer coefficients were 0.4, 0.6, 0.7 and 0.7 over activated Ag HF, corresponding to the CO2 flow rates of 10, 20, 40 and 60 mL•min -1 , respectively, which were much larger than those over activated Ag foil. Typical GC curves from the FID and TCD over activated Ag HF are shown in Supplementary Fig. 14. When the CO concentration in the exhaust was lower than 10%, the FID was used for CO quantification (Supplementary Fig. 14a). When the CO concentration in the exhaust was higher than 10%, the TCD was used as the main detector of CO, and the FID was used as the auxiliary detector ( Supplementary Fig. 14b). The TCD was always used for H2 quantification. Moreover, H2 and CO were confirmed to be the only gas-phase products. The 1 H-NMR spectrum of the postreaction catholyte solution after 1 h of CO2 electroreduction over activated Ag HF at -0.83 V further verified that no liquid-phase product could be detected ( Supplementary Fig. 14c). As shown in Supplementary Fig. 15, the CO2 conversion rates of activated Ag HF were comparable to those over prominent catalysts reported in electrocatalysis in the potential range of -0.35 to -0.70 V. With negative-shifting potentials, the CO2 conversion rates over the activated Ag HF electrode further increased rapidly and reached 28%, 37%, 54% and 65% at -0.72 V, -0.75 V, -0.83 V and -0.89 V, respectively ( Supplementary Fig. 15), far outperforming the previously reported electrocatalysts (Supplementary Table 1).
Supplementary Figure 16 | ECSA measurement results.
Cyclic voltammetry curves of a, Ag foil, b, activated Ag foil, c, Ag HF, and d, activated Ag HF in CO2-saturated 1.5 M KHCO3. e, Plot of Δj (the difference of cathodic and anodic current densities, jc-ja) against the scan rates from cyclic voltammetry curves. The plots in Supplementary Fig. 16e, same as Fig. 4a in the main text. All the current densities in the main text and Supplementary Information were based on the electrode geometric area.
The ECSAs of Ag foil, activated Ag foil, Ag HF and activated Ag HF were determined by measuring their double-layer capacitance (Cdl) values via their cyclic voltammetry curves, as S26 shown in Supplementary Figs. 16a-d. The Cdl, which was proportional to the ECSA, was obtained by linearly fitting the absolute value of the slope of Δj (the difference of cathodic and anodic current densities of the cyclic voltammetry curves) against the scan rates. Activated Ag HF possessed the largest ECSA with a Cdl value of 30.9 mF•cm -2 , and this value was 2.7, 2.7 and 10.3 times those of activated Ag foil (11.4 mF•cm -2 ), Ag HF (11.3 mF•cm -2 ) and Ag foil (3.0 mF•cm -2 ), respectively ( Supplementary Fig. 16e).
Supplementary Figure 17 | Electrocatalytic performance of activated Ag HF and other
counterparts. CO and H2 faradaic efficiencies, and total current densities over the electrodes a, Ag foil, b, activated Ag foil, c, Ag HF, and d, activated Ag HF in the potential range of -0.35 to -1.15 V in CO2-saturated 1.5 KHCO3. e, Comparison of the CO partial current densities over these electrodes. The plots in Supplementary Fig. 17e, same as Fig. 4b in the main text. Error bars in a-e were obtained from the average of six individual tests.
Ag foil showed very low CO2 electroreduction activity, and the CO faradaic efficiencies at all potentials were less than 22% (Supplementary Fig. 17a). After electrochemical redox treatments, the CO faradaic efficiencies of activated Ag foil improved to some degree ( Supplementary Fig. 17b). At -0.50 V, the CO faradaic efficiency of activated Ag foil reached a maximum of 57%. While activated Ag foil also delivered CO faradaic efficiencies that were far less than 50% at other potentials ( Supplementary Fig. 17b). The results implied that the hydrogen evolution reaction was still dominant with the activated Ag foil. As shown in Supplementary Fig. 17c, Ag HF showed slightly better CO2 electroreduction activity than activated Ag foil, i.e., higher faradaic efficiencies and total current densities at similar potentials. With respect to activated Ag HF, all of the CO faradaic efficiencies and total current densities increased greatly ( Supplementary Fig. 17d). The comparison of the CO partial current densities indicated the obvious superiority over the activated Ag HF electrode compared with the other electrodes ( Supplementary Fig. 17e).
S28
Supplementary Figure 18 | CO2 electroreduction path. Reaction steps for the electroreduction of CO2 to CO on silver catalysts. Supplementary Figure 18 shows a possible mechanism for CO2 electroreduction on silver catalysts, in which the reaction paths proposed were consistent with previous reports 10, 11 . The initial step (Step 1) in the overall two-electron reduction of CO2 to CO on the silver surface was a one-electron transfer step forming adsorbed *COO -. Then, Step 2 was a chemical step involving the protonation of *COOto form a *COOH intermediate. Subsequently, Step 3 was an electrochemical path coupled to a chemical reaction involving the proton-electron transfer and instantaneous dehydration to form an adsorbed *CO intermediate. Finally, Step 4 was the desorption of *CO from the silver surface to obtain the CO product. The Tafel slope of activated Ag foil was 113 mV•dec -1 , close to that (108 mV•dec -1 ) of activated Ag HF with the non-CO2disperser mode (Fig. 5c), implying that Step 1 with the theoretical value of 118 mV•dec -1 12 was the rate-determining step for both electrodes. In contrast, activated Ag HF with the CO2disperser mode showed a Tafel slope as low as 63 mV•dec -1 , suggesting Step 1 was not the ratedetermining step. In principle, each one of Step 2, Step 3 and Step 4 was probably to be the ratedetermining step of activated Ag HF with the CO2-disperser mode. According to the previous report 12,13 , if Step 3 was the rate-determining step, the Tafel slope will be generally less than 40 mV•dec -1 . In addition, if Step 4 was the rate-determining step, the Tafel slope will be ∞ (infinity) 13,14 . Thus, the Tafel slope value (63 mV•dec -1 ) of activated Ag HF with the CO2disperser mode ruled out the situations of Step 3 and Step 4 as the possible rate-determining steps. Consequently, Step 2 with the theoretical value of 59 mV•dec -1 was the rate-determining step 13,14 for activated Ag HF with the CO2-disperser mode, in agreement with many reports [15][16][17] . The result meant that the CO2-disperser mode of activated Ag HF played a crucial role in CO2 electroreduction, which might induce the synergistic effects to alter the route of CO2 reduction. Isotopic trace experiments were conducted to study the mass migrations involved in CO2 electroreduction over activated Ag HF. For comparison, the feedstocks were supplied into the cathodic and anodic compartments of the electrolysis cell according to the below four situations, respectively, which were subjected to the potentiostatic electrolysis under the same reaction conditions (see the Materials and Methods section for details).
For both activated Ag HF and activated Ag foil, typical operando Raman spectra in a wide range of 300-2000 cm -1 showed two kinds of Raman peaks above and below 1000 cm -1 , respectively, as shown in Supplementary Fig. 20. The Raman peaks above 1000 cm -1 included four peaks centered approximately at 1012, 1360, 1603 and 1660 cm -1 , which were assigned to bicarbonate ions (HCO3 -) adsorbed on the electrode surface as νHO-COO-, νsHOCOO-, νasHOCOO-and δHO-H (in H2O) modes, respectively, according to previous reports 6,7 . The other Raman peaks below 1000 cm -1 showed only two Raman bands at 532 and 390-410 cm -1 , which could be assigned to the adsorbed intermediate vibrations, i.e., ν*COO-and νAg-*COOH, in consistence with previous reports 18,19 . The lower νAg-*COOH frequency (393 cm -1 ) of activated Ag HF compared with that (408 cm -1 ) of activated Ag foil suggested a weaker bonding strength between *COOH and the activated Ag HF surface. Furthermore, the *COOand *COOH intermediates appeared in chronological order in the time-resolved operando continuous Raman spectra ( Fig. 6b and Supplementary Fig. 21c), implying the step-by-step reduction of CO2, i.e., the initial step to form *COOand the second step to form *COOH, in agreement with the proposed mechanism ( Supplementary Fig. 18). Considering its higher sensitivity and intensity, the *COOintermediate was given more attention in the following.
For comparison of the relative intensity of *COOin these two electrodes, namely, activated Ag HF and activated Ag foil, we estimated the relative ratio of the integral peak areas of adsorbed *COOand aqueous HCO3 -(i.e., νasHOCOO -+ δHO-H), which are marked with shadows in Supplementary Fig. 20. The *COO -/(νasHOCOO-+ δHO-H) ratio was 6.7 for activated Ag HF after power-on for 2720 ms, and this ratio value did not change in the following stable state. In contrast, this ratio in the stable state was 3.3 for activated Ag foil, which was only half of that of activated Ag HF. This result implied that more *COOintermediates were formed and adsorbed on the surface of activated Ag HF. The formation and evolution of key intermediates over activated Ag HF and activated Ag foil were monitored by time-resolved operando Raman spectroscopy during the power-on and power-off stages, respectively ( Supplementary Fig. 21). After power-on for 720 ms (t1), a new Raman peak appeared at 532 cm -1 over activated Ag HF, corresponding to the adsorbed *COOintermediate ( Supplementary Fig. 20). Then, the peak intensity increased quickly and reached the maximum at 2720 ms (t2) (Supplementary Fig. 21a and Fig. 6d). Regarding activated Ag foil, the *COO -Raman peak appeared at 660 ms (t1'), and the peak intensity reached a maximum at 3080 ms (t2') ( Supplementary Fig. 21c and Fig. 6d). Notably, the normalized *COOpeak intensity of activated Ag HF was almost double that of activated Ag foil in the stable state. These results indicated that more *COOintermediates were formed and adsorbed over activated Ag HF in a shorter time, implying the superior capability of CO2 activation, which probably profited from the reduced CO2 diffusion distance in the CO2-disperser mode.
Subsequently, we investigated the variation of adsorbed *COOover activated Ag HF and activated Ag foil during the power-off stages (Supplementary Figs. 21b, d. As soon as the power was turned off, the 532 cm -1 ν*COO-peak quickly redshifted for both electrodes due to the Stark effect [20][21][22] , indicating the distinct impact of electric field on the adsorption of intermediates ( Supplementary Fig. 22). Then, the intensity of the *COO -Raman peak decreased gradually. The *COOpeak vanished over activated Ag HF after power-off for 1050 s (t3) (Supplementary Fig. 21b and Fig. 6d), whereas over activated Ag foil after power-off for 1400 s (t3') (Supplementary Fig. 21d and Fig. 6d), indicating a faster dissipation of adsorbed *COOover activated Ag HF. This result implied that the one-way CO2 flow manner of activated Ag HF facilitated the desorption of adsorbed intermediates or species on its surface (vide infra).
The above time-resolved operando Raman results suggested that the oriented mass transfers induced by the CO2-disperser mode of activated Ag HF could not only favor the diffusion of CO2 to active sites but also facilitate the desorption of adsorbed species from the electrode surface, thereby resulting in the improved overall kinetics of CO2 reduction. Consequently, activated Ag HF also demonstrated the promotion in mass transfers of CO2 electroreduction in addition to enhanced three-phase interface reactions. Figure 22 | Electric field impact on Raman spectra. Raman spectra (300−600 cm −1 ) of activated Ag HF and activated Ag foil during frequent switching between power-on and power-off.
Supplementary
To study the Stark effect of electric field on the adsorbed intermediates, Raman spectra over activated Ag HF and activated Ag foil were recorded during frequent switching between power-on and power-off, as shown in Supplementary Fig. 22. In power-on situations from power on-1st to power on-5th, all *COO -Raman peaks were located at 532 cm -1 for both activated Ag HF and activated Ag foil, while the *COOpeak quickly and consistently redshifted for both electrodes when switched to power-off, indicating the reproducible occurrences of the Stark effect [20][21][22] . This result indicated that electric field could play a crucial impact on the adsorption of surface intermediates. In detail, ν*COO-redshifted to 512 cm -1 over activated Ag HF, whereas to 519 cm -1 over activated Ag foil. Note that there was a 7 cm -1 shift in *COOvibration peak between activated Ag HF and activated Ag foil during all power-off situations. The lower frequency suggested the weaker interaction of *COOwith the surface of activated Ag HF, which was another sign of its easier desorption of adsorbed intermediates or species when compared to activated Ag foil. These results implied that activated Ag HF was intrinsically favorable for the desorption of adsorbed surface species. The desorption of *COOover activated Ag HF with non-CO2-disperser mode was also monitored by time-resolved Raman spectra. As shown in Supplementary Fig. 23, the *COOpeak vanished over activated Ag HF with non-CO2-disperser mode after power-off for 1380 s (t3''), which was close to the dissipation time of 1400 s (t3') over activated Ag foil ( Supplementary Fig. 21d). This result also demonstrate that the CO2-disperser mode played a key role for the desorption of adsorbed species. | 9,241.4 | 2022-06-02T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Needle-Type Imager Sensor With Band-Pass Composite Emission Filter and Parallel Fiber-Coupled Laser Excitation
This paper presents an implantable fluorescence system with a composite emission filter and fiber-coupled laser excitation. The composite structure of the short-pass interference filter and absorption filters exhibited a band-pass spectrum between 510 and 570 nm, which is close to green fluorescent protein (GFP) emission. A high-quality excitation light was achieved by utilizing a blue laser through a low-numerical-aperture optical fiber. This coupling method is beneficial in delivering a narrow spectrum and controllable irradiation light in a specific area to minimize auto-fluorescence from the tissue. The fabricated lensless system performance is experimentally validated by imaging emission. The proposed device is capable of perceiving fluorescence emission from microspheres and GFP noticeably.
detection in different brain areas. For example, the fiber optic fluorescence imaging system based on micro-fabricated components enables highly stable long-term optical imaging and manipulation of neuronal activity in deep brain regions [5], [6] and a multimode fiber can be used for a high-resolution endoscopic application [7]- [9]. Another approach is miniaturized fluorescence microscopy, which integrates micro-optic and semiconductor optoelectronics in a compendious package. This imaging system coincides with mechanical flexibility features as it can be plainly mounted in a rodent's head and inherently conserve high-quality images, resembling the conventional lens-based microscopy [10]- [12]. However, the behavior experiment, which requires intensely freely moving, either optical fiber fluorescence or midget microscopy is too rigid and exorbitant in both size and weight [13].
In an effort of resembling a freely moving condition, a lensless fluorescence system is a favorable imaging modality because of its measly physical embodiments [14]. The absence of optical components in this system has significantly diminished the system. For the implanted device, an inevitable trade-off exists between spatial resolution and invasiveness features. Practically, it is difficult to establish a prover value of the resolution and low invasiveness simultaneously. Therefore, the image sensor is designed according to the variety and area of target detections. The examples include planar-type implanted devices placed on the brain surface for blood observation [15] and hemodynamic response [16], [17]. This implanted version can typically accommodate a broad configuration with several excitation light sources and pixels. In contrast, a needle-type and finer structure are required for devices implanted in the deep brain region, such as for amygdala [13] and optical theranostic applications in a deep tissue [18].
However, the resolution of the existing lensless fluorescence system is much lower than that of the conventional lens-based microscopy. The undirected fluorescence emission, which declines faster as a distance function from the target than its excitation counterpart, leads to low signal-to-background levels. In addition, the fluorescence emission is incoherent, and thus, incompatible for any type of image processing method based on the source-shifting technique, such as holographic digital reconstruction and related super-resolution techniques. In the hardware approach, the insufficient rejection performance of the emission filter contributes to the modest resolution of the lensless system.
A recently developed hybrid emission filter, composed of an interference filter and an absorption filter via a fiber optic plate (FOP), exhibits a high-performance excitation light rejection ratio of approximately 10 8 :1 at a wavelength of 450 nm, even in a lensless setup [19], [20]. Notwithstanding this accomplishment, the FOP that is utilized as a substrate for the interference filter, as well as absorption filter-surface protection, has escalated the device thickness into millimeter range. In view of invasiveness, this hybrid filter structure is not suitable for an implantable imaging system, especially for deep brain observation. This paper proposes a thin composite emission filter, integrated with a fiber-coupled laser, for a high-excitationrejection fluorescence implantable imager. The sandwich-like structure of the short-pass interference filter, as well as yellow and green absorption filters, exhibits a high-quality band-pass transmission between 510 and 570 nm, which is close to green fluorescent protein (GFP) emission, in an amiable thickness for the implanted device. In addition, laser coupling is beneficial in providing a narrow spectrum and controllable irradiation in a specific area. Moreover, as a remote delivery of the light, the sample temperature increases due to the light source illumination can be suppressed. As a result, the proposed imaging system enables us to perceive the fluorescence emission from microspheres, as well as GFP, in the brain slice noticeably.
The rest of this paper is organized as follows. Section II presents the overview of an implantable imager with a composite emission filter and fiber-coupled laser excitation. Section III describes the device fabrication process, including a multilayer filter-stacking stage, device assembly, and CMOS image sensor specifications. Section IV presents the filter surface examination, excitation light source characterization, pixel sensitivity, fluorescent microsphere detection for spatial resolution examination, and an in vitro experiment using GFPmodified mice brain slice results.
II. IMPLANTABLE IMAGER OVERVIEW
As illustrated in Fig. 1(a), the structure of an implantable imager composed of a CMOS image sensor, multilayer filters, and a fiber-coupled laser. A sandwich-like filter, which is accomplished by directly stacking multilayer filters, plays an important role in preserving the implanted device thickness; meanwhile, the complementary structure of the interference filter and absorption filters enhances the excitation light rejection ratio by generating a composite band-pass transmission close to the GFP emission. This band-pass spectrum resulted from the coalescence absorption and reflection mechanism of all the filters. Fig. 1(b) shows the transmission spectrum of a 550-nm short-pass interference filter (SPF550) and yellow and green absorption filters are relatively close to the GFP emission region. Meanwhile, the narrow spectrum of blue laser excitation light is outside of the detection region. Fig. 1 (c) shows the future application of fabricated fluorescence implantable devices for brain activity observation. This implanted device is a wire-based type that utilizes the wires for electrical support function and data communication.
The electrical wire can be bundled with the optical fiber of excitation light. Practically, only the imaging area, which is represented by an image sensor integrated with a filter, is implanted in the brain. The PCB and optical fiber will be fixed on the cement over the rodent's head.
An interference filter, which is typically formed by the periodical structure of dielectric materials with different refractive indices, reflects light in its rejection band so that it is almost free from the auto-fluorescence, as that occurring in the absorption filter. A 550-nm short-pass interference filter reflects red fluorescence from the tissue and passes a GFP emission that is shorter than the interference cut-off region. The interference filter is selected to realize a selective highquality band-pass transmission spectrum, as its short-pass rejection band can be designed by selecting several dielectric layers. By contrast, it is almost impossible to prepare a shortpass absorption filter with sufficient color selectivity. However, as the interference filter transmission spectrum is angledependent, a high-angled fiber-coupled laser and scattered components from the observation targets that are out of its rejection band will pass through the filter. This excitation light infiltration, particularly for targets with both excitation and emission peaks close to each other, such as GFP, will significantly reduce the excitation rejection performance [19].
An absorption filter that resolves this high-angled excitation problem as its transmission spectrum is independent of the incident angle of light. For instance, a yellow filter was employed to entirely absorb the blue excitation light. Yet, under the intense excitation light condition, a radiative process frequently occurred and some energy was emitted as fluorescent light out of the absorption band, reducing the observation target emission detection [19], [20]. Consequently, an effective limitation exists while improving the excitation rejection level by increasing the absorption thickness.
Regarding the filter structure, the green absorption filter will absorb a part of auto-fluorescence from the yellow filter and ensures that the emission light that reaches the image sensor comes from GFP. As the green filter transmission spectrum started from 440 nm, the amount of blue excitation light will pass through the filter and reach the image sensor. This infiltration light will alleviate the excitation rejection ratio. Thus, the green and yellow absorption filters are working complementarily to enhance the detection selectivity. Fig. 2. shows the transmission spectrum of the yellow filter and integrated yellow and green filters.
To manage the excitation direction, a blue laser that is coupled on a low-numerical-aperture (NA) optical fiber placed on the edge and almost parallel with the sensor is required. A controllable blue light from the optical fiber minimizes the auto-fluorescence from the tissue due to the abundance of the pigmented substance with an absorption maximum in the blue-green region of the spectrum [21]. In addition, the blue laser narrow spectrum is able to circumvent the undesirable detection of green light which is inherently emitted from a blue light-emitting diode (LED). Furthermore, by assigning the optical fiber parallel to the image sensor makes possible decoupling the illumination and detection of optical pathways; the excitation and emission light travel in a different path through the sample. As a result, this configuration can improve detection selectivity and rejection ratio and avoiding the overexposure of the sensor because of overwhelming excitation light as well.
A. Imaging Device Assembly
Generally, there are two approaches for integrating an emission filter, typically an interference filter, with a CMOS image sensor: on-chip deposition method and transferred filter method. The on-chip deposition method has advantages such as reduced module thickness, the potential for implementation of different filters on pixels, and the elimination of costly external glass substrates with multiple filters [22], [23]. Conversely, the transferred filter method utilizes a prefabricated interference filter that is either commercially available or specifically designed by the user. This method is much simpler and cheaper, as the fabrication process is not as complicated as the on-chip deposition, which requires fabrication process adjustment to compensate for the material mismatches between the CMOS die and the optical filter [24]. Traditionally, a highquality interference filter can be fabricated on a firm and stable glass substrate, which is quite difficult to grow in a polymerbased substrate.
Our composite filters were fabricated using the transferred method, combined with a spin-coating technique, for absorption filter deposition prior to the assembly process. The laser lift-off (LLO) method was employed to separate the fabricated filter from its substrate. A commercial 550-nm short-pass interference filter (49-826, Edmund Optics, USA) with an ultraviolet (UV)-grade fused silica substrate, which allows high-energy laser treatment for interference filter separation, was used. The yellow and green dye-based absorption filters were deposited to the interference filter, respectively. First, Valifast Yellow 3150 (Orient Chemical, Japan), cyclopentanone (Wako, Japan), and NOA63 (Norland Product, USA) were mixed in a weight ratio of 1:2:1. This mixture was spincoated onto the interference filter at 1000 rpm, then cured by UV irradiation for 30 s, and finally, heated to 150 • C for 45 min. After that, a green absorption filter was directly spincoated onto the yellow filter layer at 1000 rpm. Finally, this multilayer filter was cured by UV irradiation and heated to 120 • C for 2 min and 200 • C for 20 min. The fabricated filter was cured at room temperature for 24 h before being used in the assembly process. The device assembly process is shown in Fig. 3 (a), and is described as follows: -A CMOS image sensor chip was fixed onto the filter layers using epoxy resin (Z-1; Nissin resin, Japan) by heating at 120 • C for 25 min. The pixel area of the image sensor directly contacted the filter. -The filter layer was cut by a high-precision laser (Q-switched Nd: YAG laser, λ = 266 nm) according to the image sensor size. After that, the filter and substrate were separated using the LLO method. This method is traditionally used to separate semiconductor structures such as thin-film GaN from their substrate by utilizing a high-power pulsed laser from the backside of the substrate for selective laser ablation and thermal decomposition of the interfacial layer [25]- [28]. In our fabrication setup, the fourth harmonic pulses of a Q-switched Nd: YAG laser (λ = 266 nm) irradiated the interfacial area between the fused silica substrates and the interference filters. For a fast and large-yield separation area, rectangular laser fields were stitched using the step-and-repeat technique [29]. After the laser irradiation, the image sensor and filters were manually removed from the substrate. For the electrical wiring connection, the filter at the chip pad region then removed by a low-energy Nd: YAG laser. -Next, a fabricated device was bonded on the designated printed circuit board (PCB) using epoxy resin (Z-1; Nissin resin, Japan) by heating at 120 • C for 30 min for immobilization. -The electrical bonding pads of the CMOS image sensor were connected to the PCB via aluminum (Al) wires using a wire bonder (7700CP, West Bond Inc., Anaheim, CA, USA). Then, epoxy resin (Z-1; Nissin resin, Japan) was added to the wires and heated at 120 • C for 25 min for wire protection. -For waterproofing and biocompatibility features, the imaging device was coated with a Parylene-C film using a Parylene coating chamber (PDS2010 Specialty Coating Systems, NIST, USA). The fabricated device thickness, as shown in Fig. 3 (b), is 166 μm whereby the CMOS sensor and composite filter thickness are 150 μm and 16 μm, respectively. This thickness level is acceptable to measure the fluorescence emission in the deep brain region as it is thin enough to limit damage to surrounding tissue as it is inserted in the brain [13]. However, in a practical application, increasing device thickness due to the presence of optical fiber should be considered. For instance, it will append about 100 μm for a multimode optical fiber, which will literally escalate the damage of the brain during the insertion process. One way to resolve this thickness issue is by utilizing an on-chip waveguide instead of a bulky-optical fiber for delivering the excitation light. This fine waveguide thickness can be flexibly designed according to the light intensity requirement. Yet, the implementation of the on-chip waveguide on the fluorescence imager is beyond the scope of this article. The total weight of the fabricated device, including PCB, is about 0.05 g.
B. CMOS Image Sensor Circuitry
A needle-type image sensor, which contains 40 × 400 pixels, was designed in our lab and fabricated by the foundry using 0.35-μm 2-poly 4-metal standard CMOS technology (AMS). The chip dimensions include a width of 500 μm, length of 5100 μm, and thickness of 150 μm. Fig. 4 (a) depicts the schematic circuit of a needle CMOS sensor. It consists of 40 × 400 active pixel array, control circuitry for selecting the pixels via a Y-scanner (row) at the left-side and column amplifier and X-scanner at the bottom of the pixel array. Bias circuit and power-on-reset are supporting circuitry for the imaging process.
For the imaging function, a pixel array with a size of 7.5 μ m × 7.5 μm/pixels was selected to provide sufficient spatial resolution for imaging the colony of the brain neural cell. This pixel uses a three-transistor active pixel sensor (3-Tr APS) for transducing the optical to an electrical voltage. As can be seen in the pixel sensor schematic, 3-Tr APS consists of a photodetector and three transistors for switching and a source follower.
The operation of an APS as follows. At the beginning of detection, the photodiode (PD) is reset by sending the Y RST command activating the switch. This action forced the PD to a specific voltage level. Next, the integration period started by turned off the Y RST transistor, which makes PD is electrically floated. During this period, the incident light produces carriers and accumulated in the PD junction capacitance; the voltage of PD decreases proportionally to the input light intensity. The total amount of the incident light can be obtained by measuring the voltage drop. After an accumulation time, the Y S E L turned on the switch and then transferred the PD levels of selected pixels to be read out in the vertical output line, Pix_out. After the voltage reading process is finished, the Y RST is turned off and repeat a similar process for the next pixel.
The operational sequence of the pixel selectivity relies on the Y-scanner for selecting the light-sensing rows one-by-one. Every pixel in the selected rows is connected to the column circuitry, which comprises the source follower circuit and column selecting transistor switch, X S E L , via each column signal line. The column amplifier output is then connected to the buffer circuit for signal maintaining, before sending it out to the next data processing stage. For this data communication and electrical function, the image sensor has four pads connectors, V DD , GND, CLK, and V OUT as can be seen in Fig. 4 (b). The image sensor specifications are shown in Table I.
A. Filter Surface Examination
We investigated surface morphologies and optical properties of the interference filter after the lift-off process. The surface filter profiling is important for determining laser ablation effects on the interference filter structure. As the high-intensity laser beam hits the interface area between the substrate and the filter, its temperature rises and at some level it will force the material to undergo a phase change from the solid to gaseous state. This phase transition results in a mass loss from the lowermost layer. The decreasing number of layers in the interference filter structure may predispose its optical properties. Fig. 5 (a) shows the difference in height of the irradiated filter captured by a surface profiler (ET200, Kosaka Laboratory, Japan). As can be seen, the inhomogeneous surface varies about ± 0.02 μm from its initial position. This number is closely associated with a single-layer thickness, which means the laser ablation of the lift-off process has eradicated a single layer, yet did not affect another layer. Furthermore, we compared the transmission spectrum of the interference filter before and after the lift-off process. As can be seen in Fig. 5 (b), both filters show similar transmission patterns. This means, as the interference has many periodical layers, diminishing a single layer does not change the operational rejection band. The percentage of transmission after lift-off is lower about 10 % than that of the initial filter, which may contribute to reflection performance deterioration. Fig. 6 (a) shows the fiber-coupled laser profile examination setup, which the assembled optical fiber and an image sensor was immersed in 100 μM of Uranine-filled glass cuvette. The tapered optical fiber was placed onto PCB on the topside of the cuvette. A blue laser (λ = 473 nm) illuminated the samples and generated an observable light path as can be seen in Fig. 6 (b).
B. Laser Profile Characterization
The low-NA multimode optical fiber (ϕ core = 25 μm, NA = 0.1) delivers a narrow beam along the image sensor area. As we utilized a commercialized laser beam without any optical treatment, the laser intensity did not uniformly propagate throughout the imaging areas. The intensity profile as denoted with the yellow line was measured to obtain the spatial distribution of laser excitation. Fig. 6 (c) shows the light intensity fitted with Gaussian distribution and the full half-width maximum (FWHM) is 35.52 μm. With this spatial distribution profile, the disperse observation targets (i.e GFP) will receive different excitation light intensity and then produce unequal fluorescence emission levels. In addition, as the laser beam could not cover the pixel areas entirely, it is difficult to perceive all the emission in a single-mode detection.
To resolve this drawback, image processing can be used to concatenate images from different light source positions.
C. Pixel Sensitivity Characteristic
The composite filter response of incident light was examined by pixel sensitivity measurement. The fabricated device integrated with a composite filter was irradiated with a light source ranging from 400 to 675 nm (MicroHR Spectrometer, Horiba, Japan). To imitate the light path that comes to the image sensor in the real applications, which contains many scattering components, we did not use an objective lens instead of employing an optical aperture for shaping the light source beam. We measured the pixel sensitivity spectrum of the fabricated device at a normal incident angle, which is the optimum angle for the interference filter to operate in its rejection band. However, in the brain observation experiment, incident light does not always come from this normal angle; Mostly, the light travels in a random path with a different angle. Therefore, we varied the incidence angle of excitation light to examine the transmission spectrum of fabricated composite filters.
As shown in Fig. 7, particularly in a normal incident angle, the composite filter exhibited a band-pass transmission spectrum in the range 510-570 nm, which is relatively associated with GFP emission, and all lights out of the transmission band were significantly reduced. This band-pass transmission profile is a result of a complement filter mechanism. A wavelength longer than 570 nm was reflected by the short-pass interference filter, whereas a light shorter than 510 nm was absorbed by the yellow and green absorption filter. From this bandpass transmission profile, the composite filter is intended to suppress auto-fluorescence from the tissue.
The interference filter spectrum shifted to a shorter wavelength by increasing the incident angle, while the absorption filter spectrum is fixed for all incident angles. Thus, the transmission band is narrowed by an increase in the incident angle and leads to a lower sensitivity profile. In addition, the incident angle inclination results in sensitivity degradation. This effect reduces the effective NA of the sensor, and thus, the resolution degradation ratio with the distance is decreased.
The interference filter has a rejection band. Light with a wavelength longer than the rejection band edge of the shortpass filter is reflected. Due to the spectrum shift with an incident angle, the highly tilted excitation light and the autofluorescence from the tissue are transmitted. However, in the proposed filter, these components are reduced by the yellow and green absorption filters. The thin absorption filters work well for low-incident-angle light.
We confirmed that the transmission shifting of the interference filter due to the incidence angle variation will not affect the composite filter bandwidth characteristic. The filter transmission spectrum operates close to the GPF region in all angle variation. In the proposed device the optical fiber was placed parallel to the image sensor so that the direct excitation light on the image sensor can be sufficiently rejected.
D. Fluorescent Image From Microspheres
We first verified the fluorescence detection capability of our device using imaging fluorescent microspheres. We observed emission from green-yellow fluorescent microspheres of 15-μm diameter (F8844, ThermoFisher Scientific, Massachusetts, US), for mimicking fluorescently labeled cells. The excitation and emission peaks of the microspheres were 505 and 515 nm, respectively. These microspheres were double the pixel size used in the device and directly contacted the image sensor surface to avoid resolution degradation.
The fluorescence imaging of the fabricated device was performed with a fiber-coupled blue laser (λ = 473 nm) placed at the edge and almost parallel with the image sensor on the opposite side of PCB. Due to the presence of wire connection from PCB to image sensor, it was difficult getting the almost perfectly parallel illumination if the fiber was placed on the PCB side. This new setup will not much affect the filter selectivity as discussed in subsection IV.C. The position of excitation light was fixed during the microsphere emission observation. The laser power measured by an optical power meter (PM100USB, Thorlabs, USA) was 100 μW/cm 2 .
As shown in Fig. 8(a), the fluorescent microsphere (ϕ = 15 μm) emission was clearly observed in different intensity profiles, which can be classified into saturated and unsaturated profiles. This classification is due to the relative position of microspheres to the excitation light. Some of the beads received more intensity than other beads and become saturated. This saturated bead produces larger shapes than its real dimension because of the overwhelming emission, which will lead to inaccuracy for the spatial resolution measurement.
A region of interest (ROI), of around 125 μm × 125 μm dimensions, was selected from the single and unsaturated emission, as denoted with a dashed rectangle in Fig. 8 (a). Fig. 8(b) and Fig. 8 (c) shows, respectively, the ROI region with a yellow line alongside the x-pixels and the intensity profile of the fluorescent microsphere fitted with the Gaussian function.
To clarify the spatial resolution, the FWHM was calculated as 22. 3 μm ± 1.21 μm. The discrepancy profile of the microsphere emission from its actual dimension was due to the emission filter thickness, which increased the distance between the fluorescent microspheres and the image sensor so that the emission light spreads over the filter and then degrades the resolution. Nevertheless, the resolution of the fabricated devices is still acceptable for brain activity observation [30]. One way to improve the spatial resolution of the implantable device is by utilizing the incident-angle-selective pixel technique. In this technique, pixels, which normal and angle-selective, detect different incident angles via the designed metal aperture structure. As a result, the image reconstruction process can acquire a spatial resolution close to the pixel pitch [31].
E. In Vitro Experiment
Once the responsivity of fluorescence emission was observed, an in vitro experiment was performed to confirm the detection performance in biological samples. We used 100-μm-thick brain slices obtained from an adult mouse (GAD67), which was genetically modified by GFP. All procedures for preparing the animal tissue were carried out in accordance with the guidelines of the Nara Institute of Science and Technology. The brain slice was directly placed onto the surface of the image sensor, while the edge of the optical fiber was inserted between the samples and image sensor.
In this experiment, we used an identical light source setting with the microsphere experiment (P optical = 100 μW/cm 2 ) whereas the optical light source and placed the fiber at the edge of the imaging area. The positions of both the fiber and the image sensor can be controlled manually to deliver the excitation light at the specific area of the samples, and simultaneously, image sensor position adjustment can be performed for obtaining the best imaging result. In addition, as the laser beam could not reach the entire imaging area at one irradiation position, a different light source position can be tailored to obtain a larger area of detection by employing image processing.
This setup is intended to deliver the light source almost parallel to the image sensor. When we utilized the taper for holding the optical fiber on the PCB's side, it was difficult to get a parallel illumination for the very thin brain slice sample due to the presence of wire connection. Therefore, the optical fiber position was moved to another image sensor's edge, which is on the opposite side of the PCB (Fig. 9 (a)). Both of the excitation light and the image sensor positions were controlled by the vertical movement of the optical fiber holder. The small difference in height of the optical fiber and the image sensor produces a various incident angle of the excitation light path through the brain tissue. Tough this fiber-coupled laser arrangement is obviously difficult for implantable device applications as it needs more space and treatment. Yet, the advanced techniques in an on-chip waveguide are apparently able to resolve the parallel-excitation light irradiation problem with a very low invasiveness feature. Fig. 9 (b) shows the fluorescent images obtained using the fabricated devices at three light source positions by changing the angle of the fiber position. The excitation light travels from the bottom side of the images. In the first position (image 1), the optical fiber is almost parallel to the image sensor so that the laser beam irradiated the farthermost area of the brain slice. Then, increasing the angle beam by tailoring the optical fiber position leads to irradiated area displacement; it becomes closer to the light source (images 2 and 3, respectively). This finite detection area comes from the non-uniform laser beam profile and the gap between the brain slice and image sensor due to the optical fiber thickness. However, all the fluorescent images show that the fabricated device can perceive several identical bright areas, which are identified as GFP emission noticeably. The opacity areas observed for each image were due to the excitation light limitation in both intensity and direction, incorporated with scattering and absorption by the brain tissue. Therefore, some areas did not get enough irradiation to generate an observable emission. As predicted, the direct contact between the image sensor and the brain slice improved the spatial resolution imaging. To overcome the excitation light coverage limitation and expand the detection area, the image processing technique was employed by combining images from different fiber positions. Then, a little contrast adjustment was applied to improve the image quality. Image processing was performed using MATLAB (2018a, Mathworks, MA, USA); its result is shown in Fig. 9 (c). From this image, the bright areas occupied more than half of the image sensor area. In addition, it can be clearly seen that the observed fluorescence emission patterns are identical and did not change with the incident angle of excitation light variation. Thus, it can be stated that the observable pattern is the emission from GFP. Fig. 10 (a) shows the hippocampus area captured by lensbased fluorescence microscopy (BX51W1, Olympus, Japan). The target detection area was indicated with a yellow dashed rectangle. The detection comparison of the brain slice between the fabricated device and lens-based fluorescence microscopy can be seen in Fig. 10 (b), whereas the fluorescent image obtained using the fabricated device (image 2) can observe a similar pattern with the lower resolution compared to fluorescence microscopy (image 1). In addition, some saturated emission was observed close to the side of the device. This saturated pixel resulted from the high intensity of the excitation light and might originate from the leakage light caused by filter defects at the side of the images sensor. This shortcoming can be reduced by controlling the excitation light intensity and applying the black resist on the entire side of the image sensor.
V. CONCLUSIONS
We proposed a new method for stacking composite multilayer filters, combined with fiber-coupled laser excitation, for high-spatial-resolution fluorescence imaging. The fabricated device demonstrated the capability of capturing fluorescence emission from the microspheres, as well as GFP, in the brain slice. We expect this method to open up an entirely highquality fluorescence imaging application with implantable imagers. | 7,068.6 | 2020-02-27T00:00:00.000 | [
"Physics"
] |
E2F1 induces TINCR transcriptional activity and accelerates gastric cancer progression via activation of TINCR/STAU1/CDKN2B signaling axis
Recent evidence indicates that E2F1 transcription factor have pivotal roles in the regulation of cellular processes, and is found to be dysregulated in a variety of cancers. Long non-coding RNAs (lncRNAs) are also reported to exert important effect on tumorigenesis. E2F1 is aberrantly expressed in gastric cancer (GC), and biology functions of E2F1 in GC are controversial. The biological characteristics of E2F1 and correlation between E2F1 and lncRNAs in GC remain to be found. In this study, integrated analysis revealed that E2F1 expression was significantly increased in GC cases and its expression was positively correlated with the poor pathologic stage, large tumor size and poor prognosis. Forced E2F1 expression promotes proliferation, whereas loss of E2F1 function decreased cell proliferation by blocking of cell cycle in GC cells. Mechanistic analyses indicated that E2F1 accelerates GC growth partly through induces TINCR transcription. TINCR could bind to STAU1 (staufen1) protein, and influence CDKN2B mRNA stability and expression, thereby affecting the proliferation of GC cells. Together, our findings suggest that E2F1/TINCR/STAU1/CDKN2B signaling axis contributes to the oncogenic potential of GC and may constitute a potential therapeutic target in this disease.
Gastric cancer (GC) is still one of the most significant health problems in the world with particularly high frequencies in East Asia. 1 The roles of genetic dysregulation, epigenetic changes and signaling pathways involved in cancer have recently been studied intensively. [2][3][4] The use of gene expression data to predict carcinogenesis holds promise in GC diagnosis and prognosis. Thus, novel prognostic and diagnostic factors that are associated with GC progression would be of great clinical relevance.
The E2F transcription factors are key participants in a number of cellular events such as cell cycle, DNA synthesis or nuclear transcription. The E2F family of transcription factors is composed of activator (E2f1-3a) and repressor (E2f3b, [4][5][6][7][8] factors and is predominantly regulated by the Rb family of proteins (Rb, p107 and p130), 5,6 and the activating E2F transcription factors E2F1, E2F2 and E2F3 are central to regulation of the cell cycle genes. 7 E2F1 is the most thoroughly investigated member of the E2F family in human malignancies. E2F1 has pivotal roles in tumor progression by modulation of both coding and non-coding transcripts, 8,9 and was reported to act as oncogenes or tumor suppressors to modulate tumorigenesis depending on different cell context. 8,10,11 Accumulating evidence revealed E2F1 exert important effect on GC progression; however, the biology functions remain argued. [12][13][14] TINCR, a long non-coding RNA (lncRNA) producing a 3.7-kb transcript, was first reported to bind to staufen1 (STAU1) protein and mediate differentiated mRNA stabilization. 15 STAU1 protein is a double-stranded RNA-binding protein, and has various roles in gene expression. STAU1 binds to an STAU1-binding site in the 3′-untranslated region (3′-UTR) of its target mRNAs inducing mRNA degradation, which is termed STAU1-mediated mRNA decay (SMD). 16 SMD is a translation-dependent mechanism that occurs when STAU1, together with the nonsense-mediated mRNA decay factor UPF1, is bound sufficiently downstream of a termination codon. 16 Recently, we found that the expression of TINCR was elevated at the mRNA levels in GC cells and tissues and the upregulation of TINCR is induced by the transcription factor SP1. 17 TINCR regulates cell growth, cell cycle progression by affecting KLF2 mRNA stability via SMD. 17 Here we report a novel pathway involved in E2F1 and TINCR in tumor development and GC cell growth. In this study, we found that: (a) E2F1 could promote GC proliferation and cell cycle progression; (b) patients with high E2F1 expression in their GC cells have a poor prognosis; (c) E2F1 could induce TINCR transcription activation; and (d) TINCR forces cell growth, cell cycle progression by affecting CDKN2B mRNA stability via SMD.
Results
E2F1 is overexpressed in GC tissues and cell lines, and upregulation of E2F1 indicate poor outcome of GC. To investigate the role of E2F1 in the progression of human GC, a human microarray data sets (GSE51575) (26 paired cancer and noncancer tissues) was obtained to analyze E2F1 mRNA expressed between GC and paired non-tumor tissues. The result showed that E2F1 mRNA was 3.34-fold higher in gastric tumor tissues (T) compared with paired adjacent Figure 2 Functional roles of E2F1 in vitro and in vivo. E2F1 knockdown in GC cells transfected with siRNAs against E2F1 or E2F1 upregulation by pmaxGFP-E2F1 vector. E2F1 depletion inhibits GC cell growth, as detected by the (a) MTT assay and (c) colony-formation assay, whereas ectopic expression of E2F1 promotes GC cell growth, as examined by the (b) MTT assay and (d) colony-formation assay. Bars: S.D.; *Po0.05, **Po0.01. (e) Cell cycle analyses in the BGC823 and MGC803 cell lines. Relative to scrambled siRNA-transfected cells, E2F1 knockdown induced significantly increased the number of cells in the G0/G1 phase and reduced the number of cells in the S phase. Relative to empty vector-transfected cells, E2F1 upregulation promotes cell cycle progression. Representative FACS images and statistics based on three independent experiments. Bars: S.D.; *Po0.05, **Po0.01. (f) Representative data showed that overexpression of E2F1 significantly promote tumor growth in nude mice xenograft model. MGC803 cells were transfected with empty vector or E2F1 expression vector and then injected into mouse flanks. Tumor growth was measured every 2 days after injection, and tumors were harvested at day 16 and weighed. (g) Detection of the cell proliferation markers PCNA in xenograft tumors by IHC normal tissues (ANTs) (Figure 1a). We plotted a receiver operating characteristic (ROC) curve with the non-tumorous tissues adjacent to the tumor tissues as a control based on GSE51575 database. The cutoff value for predicting GC tissues from normal tissues was 8.91 (normalized intensity value). The area under the ROC curve (AUC) was 0.922 (95% confidence interval (CI) = 0.813-0.978, Po0.0001), with the sensitivity and specificity were 0.923 and 0.846, respectively ( Figure 1b). We further confirmed E2F1 expression levels between clinical gastric tumors (T) and paired ANTs from 80 cases of GC patients by immunohistochemistry (IHC) in our cohort. Our results showed that E2F1 was predominantly located in the nucleus of GC cells (Figure 1c). E2F1 expression found in GC tissues was significantly higher than in their adjacent tissues (Po0.001, Figure 1d and Supplementary Table S2). We also confirmed that E2F1 expression was significantly increased in larger tumors (P = 0.023) and advanced TNM stages (P = 0.037, Figure 1d). We further evaluated the expression levels of E2F1 in GC cell lines. The results showed that the expression levels of E2F1 were significantly increased in all tumourigenic GC cell lines than that in non-tumourigenic cell lines ( Figure 1e). In addition, E2F1 expression is positively associated with FP (free progression) (hazard ratio (HR) = 2.02; 95% CI, 1.63-2.49; Po0.001) and overall survival (OS) (HR = 1.91; 95% CI, 1.59 − 2.29; Po0.001) in GC, which was supported by Kaplan-Meier plotter analysis (www.kmplot. com), using microarray data from 876 GC patients 18 (Figure 1f).
Functional roles of E2F1 as a tumor activator in vitro and in vivo. To elucidate whether E2F1 could have a role in accelerating GC progression, gain-and loss-of-function approaches were used to evaluate the biological function of E2F1 in GC cell lines. We used chemically synthesized small interfering RNAs (siRNAs) to knockdown endogenous E2F1 in BGC823, which have relative high E2F1 expression. In addition, E2F1 was overexpressed by transfecting the pmaxGFP-E2F1 vector into MGC803 cell lines, which have relative low E2F1 expression. The depletion and ectopic expression of E2F1 in cells was confirmed by western blot (Supplementary Figure S1A). MTT ((3-(4, 5-dimethylthiazol-2yl)-2, 5-diphenyltetrazolium bromide) tetrazolium) and colony formation assays revealed that cells transfected with siRNAs but not scrambled in BGC823, had significantly inhibited growth and proliferation of GC cells (Figures 2a and c). Meanwhile, ectopic overexpression of E2F1 by transfecting the MGC803 cell lines with the pmaxGFP-E2F1 vector, selected by the addition of G418, significantly promoted GC cell proliferation in vitro (Figures 2b and d). We also examined the effects of E2F1 on GC cell cycle progression. As illustrated in Figure 2e, inhibition of E2F1 markedly blocked the cell cycle at the G1-S phase, whereas overexpression of E2F1 promotes cell cycle progression. We extended the study of the E2F1 growth promotion role to in vivo athymic (nu/nu) mouse models, the results showed that E2F1-transfected cells developed significantly larger tumors than empty vectortransfected cells (Figure 2f). IHC staining analyses showed that alteration of E2F1 expression significantly changed the expression of the cell proliferation markers proliferating cell nuclear antigen (PCNA) in gastric cells (Figure 2g). E2F1 upregulate TINCR expression in GC cells. Accumulating data revealed that E2F1 promote cancer progression by activation transcription of downstream oncogene in both coding and non-coding regions of the genome. Our previous study identified a lncRNA, TINCR, promotes GC proliferation and overexpression of TINCR indicates worse prognosis of GC. To unravel whether TINCR was regulated by E2F1 expression in GC, we examined the TINCR core promoter region for transcription factor binding sites, and identified six tandem putative E2F1-binding sites at the regions − 366 to − 355 bp (E1), − 257 to − 239 bp (E2), − 136 to − 124 bp (E3), − 41 to − 30 bp (E4), − 16 to 0 (E5) and +56 to +73 bp (E6) in the TINCR promoter ( Figure 3a). We cloned the human TINCR promoter fragment (nucleotides − 1000 to +163) into pGL3 vector for a luciferase activity assay. TINCR transcriptional activity was induced by E2F1 overexpression (Figure 3a). The results suggested that E2F1 participate in TINCR transcription regulation. To validate this finding, we deleted these binding sites individually and used them repeated as the reporter assay. The results showed that the deletion of the E2F1-binding motif E6 significantly impaired the effect of E2F1 on TINCR transcription activation, suggesting that E2F1 binds to their special binding motifs to regulate TINCR transcription ( Figure 3b). To corroborate this notion, we performed in vivo chromatin immunoprecipitation (ChIP) assays to address whether E2F1 bind to the TINCR promoter region. The ChIP assay revealed that endogenous E2F1 bound to the TINCR promoter ( Figure 3c). We next determined whether the overexpression of TINCR is mediated by E2F1, we applied loss-and gain-of-function approaches. We showed that the ectopic expression or siRNA knockdown, respectively, increased or reduced E2F1 enrichment on the TINCR promoter (Figure 3c), and resulted, respectively, in TINCR upregulation or downregulation in GC cells (Figure 3d). The correlation of E2F1 and TINCR gene transcription were further elucidated in tissues sample, and the result revealed that of TINCR expression is positively correlated with E2F1 mRNA levels in GC (Pearson R = 0.469, Po0.001) (Figure 3e). Hence, these results suggest that E2F1 serve as the transcriptional factors to activate TINCR transcription and upregulate its expression.
Overexpression of TINCR is potentially involved in the tumor promotion function of E2F1. Our previous work found that TINCR could promote GC cell line BGC823 and SGC7901 proliferation. 17 Here, we further confirm the result in MGC803 and AGS cell lines. We used chemically synthesized siRNAs to knockdown endogenous TINCR in MGC803 and AGS cell lines, which both were considered appropriate for TINCR depletion (Supplementary Figure S1B). MTT assays show that siRNA transfectionmediated TINCR knockdown resulted in a significant decrease in cell viability rate in MGC803 and AGS, which tend to exhibit naturally high TINCR expression levels ( Figure 4a). These observations were further confirmed by EDU (red)/DAPI (blue) immunostaining assay (Figure 4b). To investigate whether TINCR was involved in the E2F1-induced increase in GC cell proliferation, we carried out rescue experiments. After transfection with si-TINCR, MGC803 cells were co-transfected with pmaxGFP-E2F1. MTT assays indicated that the co-transfection could partially rescue pmaxGFP-E2F1-promoted proliferation in MGC803 cells. (Figure 4c). Moreover, we found that co-transfection of pmaxGFP-E2F1 could rescue the upregulated expression of CDKN2B protein induced by the depletion of TINCR (Figure 4d). These data indicated that E2F1 promotes GC cell proliferation partly through the upregulation of TINCR expression.
TINCR targets CDKN2B by SMD. Our previous study revealed that most TINCR molecules are located within the cytoplasm, and are bound to STAU1 protein in GC cells, and the results are further confirmed in MGC803 and AGS cell lines (Supplementary Figure S2). KLF2 mRNA was detected as a bona fide SMD target, which was mediated by TINCR in GC cells in our recent publication. We hypothesized that CDKN2B, which was elevated upon TINCR depletion, may also be direct TINCR-STAU1 complex targets. First, we analyzed the RNA interactome analysis data followed by deep sequencing (RIA-sequencing) provided by online GEO data sets (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi? acc = GSE40121), and found that CDKN2B are also bound to TINCR mRNA (Supplementary Table S3). And the binding regions are located at the 3′-UTR region of CDKN2B mRNA (Supplementary Figure S3A). Previous evidence confirmed that TINCR interacts with mRNA through a 25-nucleotide motif that was strongly enriched in Figure S3B). 19 We also speculated the CDKN2B sequence bound to the TINCR box (Supplementary Figure S3C).
In order to confirm the above speculation, we performed in vitro assays in GC cells. First, we knockdown endogenous TINCR and STAU1 in GC cells, which both were considered appropriate depletion (Supplementary Figure S1B and C), and the abundance of CDKN2B mRNA increased upon TINCR and STAU1-depleted GC cells (Figure 5a). Second, RNA immunoprecipitation (RIP) assays showed a remarkable enrichment of CDKN2B by STAU1 antibody compared with IgG control, indicating STAU1 could bind to CDKN2B mRNA (Figure 5b). Third, we determined whether the binding regions are located in 3′-UTRs, and cells were transfected with these test plasmids: pLUC-CDKN2B 3′-UTR, the STAU1-FLAG expression vector, pLUC-ARF1 SBS, and phCMV-MUP reference plasmid, which encodes major urinary protein (MUP) mRNA. The two latter of these served as a positive and a negative control, respectively, for STAU1-FLAG binding. 16 Anti-FLAG could immunopurifiy Rluc-CDKN2B 3′-UTR, endogenous TINCR and Rluc-ARF1 SBS, but not MUP mRNA (Figure 5c). Those results indicate that CDKN2B is a bona fide SMD target in GC cells.
To further determine whether TINCR is required for the co-IP of STAU1 with CDKN2B mRNA, MGC803 cells that transiently transfected with control siRNA or siRNA against TINCR were immunoprecipitated using anti-STAU1 antibody. Compared with control siRNA, siRNA-TINCR reduced bỹ 2-fold the co-IP of STAU1 with CDKN2B mRNA (Figure 5d). Furthermore, the RNA pull-down assay revealed that TINCR interacted with CDKN2B mRNA (Figure 5e), and the depletion of STAU1 significantly reduced the interaction of TINCR with CDKN2B mRNA (Figure 5f), corroborating that STAU1 is required for the association between TINCR and CDKN2B mRNA. More importantly, the CDKN2B mRNA half-life was significantly increased upon downregulation of STAU1 or TINCR, whereas it was decreased after TINCR overexpression (Figure 5g). Our findings suggest that TINCR affects CDKN2B mRNA stability and expression through SMD.
Discussion
Recent findings have suggested that E2F family proteins have important roles in human malignancies. 10 E2F1, a key regulator for the G1/S phase transition in the E2F family, 20 was reported to upregulate in GC. 14 However, the function role in GC progression remains disagreed. In this study, we found that E2F1 expression was significantly upregulated in GC tissues compared with corresponding non-cancerous tissues. Specifically, E2F1 expression levels could be used to discriminate the cancer tissues from non-tumorous tissues. Moreover, patients with higher E2F1 levels appeared to have a greater tumor size, higher tumor stage and shorter survival than the lower group. Our results indicate that E2F1 expression provided a significantly predictive value and prognostic marker for patients with GC.
Our data revealed that silencing E2F1 expression led to significant inhibition of cell proliferation, whereas E2F1 overexpression contributed to cell growth and tumorigenicity. Knockdown of E2F1 expression contributed to G1 phase arrest and an S phase reduction, whereas ectopic overexpression of E2F1 promoted cell cycle progression. Accumulation data revealed that E2F1 exert cell cycle modulation function by regulation of both coding and non-coding transcripts. A novel lncRNA, named TINCR, a potent cell cycle modulator in GC was identified in our recent work. Hence, we speculate that E2F1 and TINCR occurrence of mutual reaction. In this study, we found E2F1 could bind around +56 to +73 bp of TINCR promoter region and specifically activated its transcription. The G1-S transition in the cell cycle in mammalian cells is controlled by cyclins, cyclin-dependent kinases (CDKs), and their inhibitors, and deregulation of CDK inhibitors is a common feature in tumor cells. 21 CDKN2B serve as potent growth inhibitors of cell cycle checkpoints. 21 Notably, consistent with our recent report, CDKN2B was found to be remarkably upregulated upon TINCR or E2F1 knockdown in MGC803 and AGS cells. Taken together, CDKN2B could be crucial TINCR and E2F1 target.
LncRNAs can act together with specific proteins to perform various functions depending on their subcellular location, 22,23 and TINCR is a predominantly cytoplasmic lncRNA in GC cells, indicating its action in post-transcriptional gene regulation. The results of RNA IP and RNA pull-down assays show that TINCR could bind STAU1, which is consistent with our previous data. 17 STAU1 is a cytoplasmic protein and exerts multiple effects as a post-transcriptional regulator. Our teams have identified that TINCR targets KLF2 transcript through TINCR-STAU1 complex formation. Here, this study found that CDKN2B is also a target of STAU1. In addition, CDKN2B mRNA stability and the effects of binding to STAU1 are influenced by TINCR depletion. As evidenced above, TINCR may affect CDKN2B expression through SMD by TINCR-STAU1 complex formation. The pathway via which E2F1 and TINCR regulate cell cycle and cells proliferation has been depicted in Figure 6. The nuclear transcription factor E2F1 induces TINCR overexpression. TINCR recruits STAU1 to the 3′-UTR of CDKN2B mRNA, degrading CDKN2B through the UPF1-dependent mRNA decay mechanism. Subsequently, CDKN2B depletion promotes cell cycle progression and tumorigenicity. Here, we explored a novel pathway involved in E2F1, TINCR and CDKN2B in GC development.
We describe here a novel mechanism underlying GC cell proliferation through a molecular cross talk between E2F1, TINCR, STAU1 and CDKN2B. Further insights into the functional and clinical implications of the pathway may contribute to early GC diagnosis and help with GC treatment. (2) RLuc-CDKN2B 3′-UTR; (3) phCMV-MUP, which encodes MUP mRNA that lacks an SBS and serves as a negative control for STAU1-FLAG binding; and (4) Rluc-ARF1 SBS, which contains an ARF1 SBS downstream of the translation termination codon of C-terminally deleted renilla luciferase and serves as a positive control for STAU1-FLAG binding. After cell lysis, total RNA and protein were purified from the lysate before and after IP using FLAG antibody or nonspecific rabbit (r) IgG. The three leftmost lanes represent two-fold serial dilutions of RNA and demonstrate that the RT-PCR is semiquantitative. Schematic representations of the pLUC-CDKN2B 3′-UTR and pLUC-ARF1 SBS test plasmids (above). RT-PCR analysis demonstrates that CDKN2B 3′-UTRs, endogenous TINCR, and ARF1 SBS bind STAU1-FLAG, whereas MUP mRNA does not (below). Results are representative of three independently performed experiments. (d) Inhibiting CDKN2B mRNA interacting with STAU1 upon TINCR depletion, detected by RIP experiments. MGC803 cells were transfected with control (Scrambled) or si-TINCR, and cellular extract was prepared for RIP assay using SATU1 antibody 24 h after transfection. Error bars represent S.D, n = 3. *Po0.05. (e) Biotinylated TINCR RNA pulls down the full-length CDKN2B mRNA detected by RT-PCR analysis. A nonspecific RNA (GAPDH) is shown as a control. (f) STAU1 depletion reduced the interaction between TINCR with CDKN2B mRNA. MGC803 cells were transfected with control (Scrambled) or si-STAU1, and cell lysates were incubated with biotin-labeled TINCR; after pull-down, mRNAs were extracted and assessed by qRT-PCR. Error bars represent S.D., n = 3.*Po0.05; **Po0.01. (g) TINCR or STAU1 control CDKN2B mRNA stability. RNA stability assays were performed in MGC803 cells using Actinomycin D to disrupt RNA synthesis degradation rates of the mRNA CDKN2B over 12 h. *Po0.05; **Po0.01
For ARF1 SBS mRNA: 5′-cacaagtcgacGTGAACGCGACCCCCCTCCCTCTC ACTC-3′ (sense) and 5′-aaggatccCCAGGTGCCCATGGGCCTACATCCCC-3′ (antisense), where the bold nucleotides specify the SalI site, and the italic nucleotides specify the BamHI site. To construct the luciferase reporter vectors, the core promoter of the TINCR gene (−1000 to +163, relative to the transcription start site of the TINCR gene) and the relative deletion of binding sites were respectively subcloned into the pGL3 basic firefly luciferase reporter. siRNAs for specifically knockdown E2F1, TINCR and STAU1 were chemically synthesized (Invitrogen, Shanghai, China), and the sequences of the oligonucleotides synthesized for RNAi have been listed in Supplementary Table S1. Transfections were carried out using Lipofectamine 2000 reagent according to the manufacturer's instructions (Invitrogen, Shanghai, China).
Cell lines and immunoblot analysis. The human gastric adenocarcinoma cancer cell lines MGC803, BGC823, MKN45, AGS and SGC7901 and the normal gastric epithelium cell line (GES-1) were obtained from the Chinese Academy of Sciences Committee on Type Culture Collection Cell Bank (Shanghai, China). Western blot analysis was conducted according to our previous protocol. 24 Antibodies used in the study were: E2F1 (cat. # ab14768, Abcam, Hong Kong, China), CDKN2B (cat. # sc-271791, Santa Cruz, Dallas, TX, USA), STAU1 (03-116, Millipore, Bedford, MA, USA), FLAG-tagged antibodies (8146 S, Cell Signaling Technology, Boston, MA, USA), and GAPDH antibody was used as control.
Tissue samples and clinical data collection. In this study, 80 patients underwent primary GC resection at the First Affiliated Hospital of Nanjing Medical University and the Affiliated Hospital of Yangzhou University. The study was approved by the ethics committee on Human Research of the First Affiliated Hospital of Nanjing Medical University and the Affiliated Hospital of Yangzhou University. Written informed consent was obtained from all patients. The clinicopathological characteristics of the GC patients have been summarized in Supplementary Table S2.
RNA preparation and quantitative real-time PCR. Total RNAs were extracted with TRIzol reagent (Invitrogen, Grand Island, NY, USA), and quantitative real-time PCR (qRT-PCR) analyses were conducted according to the manufacturer's instructions (Takara, Dalian, China). The primers sequences have been listed in Supplementary Table S1.
Isolation of cytoplasmic, and nuclear RNA. Cytoplasmic and nuclear RNA were isolated and purified using the Cytoplasmic & Nuclear RNA Purification Kit (Norgen, Belmont, CA, USA), according to the manufacturer's instructions.
IHC analysis. To quantify protein expression, both the intensity and extent of immunoreactivity were evaluated and scored. In the present study, staining intensity was scored as follows: 0, negative staining; 1, weak staining; 2, moderate staining; and 3, strong staining. The scores of the extent of immunoreactivity ranged from 0 to 3, and were determined according to the percentage of cells that showed positive staining in each microscopic field of view (0, o25%; 1, 25-50%; 2, 50-75%; 3, 75-100%). A final score ranging from 0 to 9 was achieved by multiplying the scores for intensity and extent. Using this method, the expression of proteins was scored as 0, 1, 2, 3, 4, 6 or 9. In case of disagreement (score discrepancy 0.1), slides were reexamined and a consensus was reached by the experts.
Luciferase reporter assay. Cells were first transfected with appropriate plasmids in 24-well plates. Next, the cells were collected and lysed for luciferase assay 48 h after transfection. The relative luciferase activity was normalized with Renilla luciferase activity.
Cell proliferation assays. Cell proliferation assays and colony formation assays were performed as previously reported. 24 Flow cytometry. Cell cycle and cell apoptosis were analyzed by flow cytometry and detected as previously reported. 24 EDU analysis. 5-Ethynyl-2-deoxyuridine (EDU) labeling/detection kit (Ribobio, Guangzhou, China) was used to assess the cell proliferation. Cells were grown in 96-well plates at 5 × 103 cells per well. Forty-eight hours after transfection, 50 μM EdU labeling media were added to the 96-well plates and they were incubated for 2 h at 37°C under 5% CO 2 . After treatment with 4% paraformaldehyde and 0.5% Triton X-100, cells were stained with anti-EdU working solution. DAPI was used to label cell nuclei. The percentage of EdU-positive cells was calculated after analyses of fluorescent microscopy. Five fields of view were randomly assessed for each treatment group.
Chromatin immunoprecipitation. ChIP assays were performed using the EZ ChIPTM Chromatin Immunoprecipitation Kit (Millipore), according to the manual. The primer sequences were listed in Supplementary Table S1.
RIP and RNA pull-down. We performed RIP experiments using the Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore) according to the manufacturer's instructions. The STAU1 and FLAG-tagged antibodies used for IP were from Millipore (03-116; RIPAb+ STAU1) and Cell Signaling Technology (8146S), respectively. The details of the primers for RT-PCR and qPCR have been provided in Supplementary Table S1.
Biotin-labeled RNAs were transcribed in vitro with the Biotin RNA Labeling Mix (Roche Diagnostics, Shanghai, China) and T7 RNA polymerase (Roche Diagnostics), treated with RNase-free DNase I (Roche Diagnostics) and purified with an RNeasy Mini Kit (Qiagen, Valencia, CA, USA). Next, 1 mg whole-cell lysates from MGC803 cells was incubated with 3 μg of purified biotinylated transcripts for 1 h at 25°C. Complexes were isolated with streptavidin agarose beads (Invitrogen, Grand Island, NY, USA). The beads were washed briefly three times and boiled in sodium dodecyl sulfate buffer, and the retrieved protein was detected using the standard western blot technique. The RNA present in the pull-down material was detected using reverse transcription polymerase chain reaction (RT-PCR) and qPCR analysis. The RT-PCR and qPCR primer pairs were provided in Supplementary Table S1. Figure 6 Summary diagram describes that E2F1 and TINCR regulates GC cell proliferation Activation of TINCR/STAU1/CDKN2B T-P Xu et al RNA stability assay. To analyze RNA stability, GC cells were treated with actinomycin D (1 μg/ml). Cells were collected at different time points, and RNA was extracted using Trizol reagent (Invitrogen, Grand Island, NY, USA). Reverse transcription was performed using oligo (dT) primers and mRNA levels were measured using qRT-PCR.
Bioinformatics analysis and statistical analysis. GC gene expression data was obtained from the NCBI GEO, (http://www.ncbi.nlm.nih.gov/geo/). One data set GSE51575 consisted of 26 paired primary gastric adenocarcinoma tissues and surrounding normal fresh frozen tissues was included. All the tissues were obtained after curative resection and pathologic confirmation at Samsung Medical Center (Korea cohort). The raw CEL files from the Agilent arrays (Agilent, Santa Clara, CA, USA) for GSE51575 were processed and normalized using the Robust Multichip Average as previously described. 25 All statistical analyses were performed using SPSS 20.0 software (IBM, SPSS, Chicago, IL, USA). The significance of differences between groups was estimated using the Student's t-test, χ2 test, Fisher's exact test, Mann-Whitney test, Kruskal-Wallis test or Wilcoxon test, as appropriate. A ROC curve was established to evaluate the diagnostic value for differentiating between GC and benign diseases. FP survival (FPS) and OS rates were calculated by the Kaplan-Meier method with the log-rank test applied for comparison. Pearson correlation analysis was performed to investigate the correlation between TINCR and E2F1 mRNA expression. Two-sided P-values were calculated, and a probability level of 0.05 was chosen for statistical significance.
Conflict of Interest
The authors declare no conflict of interest. | 6,197.8 | 2017-06-01T00:00:00.000 | [
"Biology"
] |
Microsphere assistance in interference microscopy with high numerical aperture objective lenses
Abstract. Various attempts have been discussed to overcome the lateral resolution limit and thus to enlarge the fields of application of optical interference microscopy. Microsphere-assisted microscopy and interferometry have proven that the imaging of structures well below Abbe’s resolution limit through near-field assistance is possible if microspheres are placed on the measured surface and utilized as near-field assisting imaging elements. The enhancement of the numerical aperture (NA) by the microspheres as well as photonic nanojets was identified to explain the resolution enhancement, but also whispering gallery modes and evanescent waves are assumed to have an influence. Up to now, to the best of our knowledge, there is no complete understanding of the underlying mechanisms and no model enabling to examine ideal imaging parameters. This contribution is intended to clarify how much the lateral resolution of an already highly resolving Linnik interferometer equipped with 100 × NA 0.9 objective lenses can be further improved by microspheres. Our simulation model developed so far is based on rigorous near-field calculations combined with the diffraction-limited illumination and imaging process in an interference microscope. Here, we extend the model with respect to microsphere-assisted interference microscopy providing a rigorous simulation of the scattered electric field directly above the sphere. Simulation and experimental results will be compared in the three-dimensional spatial frequency domain and discussed in context with ray-tracing computations to achieve an in-depth understanding of the underlying mechanism of resolution enhancement by the microsphere.
Introduction
Due to the ongoing trend toward miniaturization high-resolution optical imaging and threedimensional (3D) microscopy is highly relevant in certain fields of science and technology. This is particularly true for interference microscopy, one of the most established techniques for micro-and nano-structure measurement. As the lateral resolution Λ min follows the Abbe criterion E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 1 ; 1 1 6 ; 2 5 7 Λ min ¼ λ 2NA : The conventional way of improving the lateral resolution capabilities of microscopes is to increase the numerical aperture NA ¼ n sin θ max of the objective lenses and to reduce the wavelength λ of light. For air as the surrounding medium with the refractive index n ¼ 1, the NA solely depends on the maximum angle θ max , which is the maximum angle of incidence with respect to the optical axis and the maximum scattering or reflection angle that is captured by the microscope objective (MO) lens. In recent years, microsphere assistance has been proposed to improve the lateral resolution capabilities of conventional optical bright-field microscopes. 1 Furthermore, the use of microcylinders instead of microspheres has been reported. [2][3][4] A similar approach demonstrates what is called the super-resolving behavior of liquidimmersed microspheres 5 and points out the advantages compared to other resolution enhancement techniques. 6 Microsphere assistance was also successfully applied in context with confocal microscopy. 7 In addition, the illumination conditions with microsphere support and dark field microscopy were examined. 8,9 More recently, microsphere assistance was also applied in white-light interference microscopy. 10 For phase-shifting interferometry, it was shown that with the support of near-field information provided by microspheres, it is possible to extend the resolution limit for interferometric height profile measurements. 11,12 NAs in the range of 0.3 to 0.85 were used, enabling access to high-frequency image information through the improvement of the optical resolution. Thus, high-spatial-frequency surface height information could be obtained by white-light and phase-shifting interferometers in both, Linnik as well as Mirau configurations. [13][14][15] To explain the effect of the resolution enhancement, photonic nanojets are often referred to. [16][17][18] Their impact on resolution it is also discussed in detail by Darafsheh. 19 Nanojets describe the focus of light on the backside of a microsphere illuminated with a plane wave from the top. On the scale of microspheres, this focus is characterized by its high intensity and narrow waist. 20 Numerous papers have been published, studying the behavior and engineering of photonic nanojets. 21,22 Also, the role of evanescent waves and whispering gallery modes has been considered. 23,24 Detailed studies were also made on the resolution capabilities of Mie particles, which are in contact with the surface under investigation. 25 For further details it is referred to a recent review paper that gives an overview of the state-of-the-art in microsphere-assisted microscopy. 26 Since incoherent Koehler illumination is applied in conventional bright-field and in interference microscopy, the microspheres are illuminated by multiple plane waves incident under various angles. Annular illumination of the outer region of microspheres turned out to affect the achieved resolution enhancement. 27,28 Nevertheless, until now there is no complete and widely accepted explanation of the resolution enhancement by microspheres in conventional microscopy and interferometry. For this reason, further analysis of the underlying physical principles is of predominant interest. Analyzing interference microscopes in the 3D spatial frequency domain by the 3D transfer function (TF) of the imaging system gives physical insight into the relevant transfer characteristics. Sheppard et al. 29 introduced a model, which represents the imaging process of confocal microscopes in the 3D spatial frequency domain. This model was applied in further publications to surface profile reconstruction in confocal microscopy 30,31 and later introduced as the foil model in coherence scanning interferometry (CSI). 32,33 We recently extended this model by treating the reference mirror in the same way as the object's surface 34 and pointed out the analogy with the 3D TF of a bright-field reflection microscope. 35 We further recognized that the transfer characteristics of 3D microscopes strongly depend on the scattering characteristics of the surface (single-point scatterers, mirror-like surfaces, or diffraction gratings) and the spectral characteristics of the light source. 36 Analyzing the interferometric measurement data in the 3D spatial frequency domain, the influence of microspheres on the transfer behavior of the optical system could already be shown. 28,37,38 In the following, the transfer characteristics of a high-resolution Linnik interferometer with and without microsphere assistance are pointed out by 3D spatial frequency domain analysis of 3D image stacks of periodic grating structures. A comparison of the results exhibits that the transfer characteristics of microspheres described in the 3D frequency domain are closely related to the angular ranges of incident, reflected, and diffracted light rays. Thus, ray-tracing computations of light propagation through and inside the microsphere give further insight. Finally, experimental results obtained from gratings of different periods are compared to results of simulations based on rigorous finite element method (FEM) computations.
Experimental Setup
The Linnik interferometer sketched in Fig. 1(a) and displayed in Fig. 1(c) is used to record interference images at certain height positions during a depth scan. This results in the so-called 3D image stack. Since our intention is to compare the resolution enhancement introduced by a microsphere in an already highly resolving microscope, we use two high-resolution MO lenses with 100× magnification and an NA of 0.9, which still provide a working distance of 1 mm. A scientific CMOS camera records the image stack. For illumination, a royalblue light-emitting diode emitting at a center wavelength of λ ¼ 440 nm (Luxeon REBEL Color Line, spectral halfwidth 20 nm) arranged in a Koehler illumination setup and a transverse magnetic (TM) polarizer are utilized. The depth scan is carried out using a precision piezo stage moving the object under investigation axially. Small height steps of typically 20 nm between two consecutive image frames are chosen to obtain a high number of sample points of the interference signals, which allows low-pass filtering to further reduce signal noise. The resulting image stack is analyzed pixel by pixel using envelope and phase evaluation algorithms to reconstruct the 3D topography of the surface. 39 It should be noted that the width of the envelope of the interference signals in this high NA Linnik interferometer configuration is dominated by the longitudinal spatial coherence of the focused light rather than the temporal coherence due to the spectral width of the blue LED. This effect is further described by Abdulhalim. 40 When placing microspheres on the surface to be measured, these spheres are put on the object's surface in a liquid emulsion for practical reasons. After the liquid has evaporated, the measurement can be carried out. Throughout this study, SiO 2 microspheres with a diameter of 7 to 10 μm are being used. With the application of microspheres in the imaging path, an axial shift of the focus occurs and creates a virtual image plane as it is sketched in Fig. 1 Our experiments included different diameters and materials of the spheres. It turned out that for MO lenses of high NA the number parameter combinations of microspheres suitable for microsphere-assisted interferometry is limited. For diameters larger than 20 μm, it was not possible to obtain a sharp image. Similar problems occurred at higher refractive indices. Based on these experimental results, a microsphere made of SiO 2 with relatively small diameters of 5 to 15 μm is a good choice for the high aperture experimental setup. However, there may be further parameter combinations working well too.
Analysis in the Spatial Frequency Domain
We assume the scattering geometry 41 according to Fig. 2(a), where k in is the wave vector of a plane wave incident under an angle θ in , and k r and k s are wave vectors of reflected and scattered waves propagating under the angles θ r and θ s , respectively. For a grating with a period Λ min corresponding to the Abbe limit according to Eq. (1), the situation is outlined in Fig. 2(b). For the incidence angle θ in ¼ θ max , the zeroth-order diffracted wave with wave vector k r propagates under the angle θ r ¼ θ max , whereas the scattering angle of the minus first-order diffracted wave is θ s ¼ −θ max . The scattered electric field U s ðqÞ under the Fraunhofer far-field condition can be calculated using the Kirchhoff formulation 29,41 with respect to a microscope in reflection mode assuming a perfectly reflecting surface sðx; yÞ as xþq y yþq z sðx;yÞÞ dx dy; Aðx; yÞe −iðq x xþq y yþq z sðx;yÞÞ dx dy; where a monochromatic plane wave of wavelength λ and wavenumber k 0 ¼ 2π∕λ is incident under the angle θ in , and the vector q ¼ k s − k in defines a point in the 3D Fourier domain. The scattered far-field U s ðqÞ is normalized in a way that for a smooth surface and perpendicular incidence, i.e., θ in ¼ 0, the amplitude in the specular direction becomes unity. The area Aðx; yÞ of integration in Eq. (2) corresponds to the field of view of the microscope, F f: : : g represents the Fourier transform, and à the convolution symbol. sðx; yÞ ¼ sðxÞ applies for a surface textured in one dimension only. If the area A is large enough, Eq. (2) represents the two-dimensional (2D) Fourier transformation with respect to x and y of the phase object expð−iq z sðx; yÞÞ. If, for simplicity sðx; yÞ ¼ sðxÞ and considering wave vectors k in and k in only in the xz-plane, which then equals the plane of incidence and observation, the incident and scattered wave vectors are given as E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 3 ; 1 1 6 ; 3 2 0 Thus, the vector q results in E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 4 ; 1 1 6 ; 2 5 1 According to Eq. (4), the situation shown in Fig. 2(b) leads to E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 5 ; 1 1 6 ; 1 8 2 The maximum lateral spatial frequency collected by a microscope lens of given NA value. Equation (5) is directly related to the Abbe resolution limit according to Eq. (1). The coordinates in q-space follow the Ewald sphere construction shown in Fig. 3. Due to the NA of the microscope, the possible directions of the wave vectors of incident and scattered light are limited by the angle θ max . According to Fig. 3(a), this results in the two spherical caps, which need to be correlated to calculate the 3D TF HðqÞ of the instrument. 35 The outer boundary of the resulting Ewald-limiting-sphere is plotted in Fig. 3(b), where the contributions of the specular reflection are located on the q z -axis and the outer spherical shell of radius 2k 0 represents the back scattered light. It is worthy to note that the q x -value is related to the spatial frequency component of a surface, whereas the corresponding q z -values represent the spectral range of the corresponding interference signals. The whole construction shows rotational symmetry with respect to the q z axis. Figure 3(c) shows a 2D cross sectional view in the q x q z -plane of HðqÞ, which holds for plane mirror-like surfaces and diffraction gratings. 36 Once U s ðqÞ and HðqÞ are known, the 3D spatial frequency representation of the interference intensity equals the product: E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 6 ; 1 1 6 ; 4 3 9 ΔĨðqÞ ¼ U s ðqÞHðqÞ; (6) and the interference image stack ΔIðx; y; zÞ can be calculated via inverse 3D Fourier transform: E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 7 ; 1 1 6 ; 3 9 5 ΔIðx; y; zÞ ∼ RefF −1 fΔĨðqÞgg: Cross sections of experimentally obtained spatial frequency representations ΔĨðqÞ for different rectangular silicon phase gratings (RS-N standard by SiMETRICS) are shown in Fig. 4. Figure 4(a) shows the result for a rectangular grating of 6-μm period and 192-nm peak-to-valley (PV) amplitude, whereas Figs. 4(b) and 4(c) belong to gratings of 400 nm (b) and 300-nm period (c), both with a PV amplitude of 140 nm. The shape of the Ewald-limiting-sphere can be clearly recognized in Fig. 4(a). Figures 4(b) and 4(c) show only three diffraction orders, the zeroth order at q x ¼ 0 and the 1st and -1st orders, which are located at higher q x values for the shorter period.
Ray-tracing Results for a Microsphere
We already demonstrated experimentally that a phase grating with a minimum period length of 230 nm is resolved by the Linnik interferometer mentioned above in combination with microsphere assistance. 42 This period is slightly below the Abbe resolution limit of 244 nm, which holds for this interferometer without a microsphere. This section is intended to study, how light rays are refracted, reflected and focused by the microsphere to find out plausible mechanisms for the resolution enhancement. Since we use microspheres of at least 7-μm diameter (r ¼ 3.5 μm), the Mie-parameter 2πk 0 r∕λ is at least 50 and a physical description based on geometrical optics is a satisfying approximation to study basic effects. 43 Figure 5 shows some results of ray tracing computations assuming a microsphere of silica with a refractive index of 1.4655. In Fig. 5(a), a bundle of parallel light rays propagates along the vertical axis and hits the microsphere. The refracted light propagates through the microsphere, is refracted again and shows nearly grazing incidence with respect to a horizontal line located directly under the microsphere, where the object is typically placed. Due to the large angle of nearly 90 deg (defined with respect to the z-axis) of these rays refracted twice, the effective NA of the microsphere as an optical imaging element is close to unity. There are additional situations as shown in Figs. 5(b)-5(d), where the light never reaches a measuring object located underneath the microsphere. These ray paths lead to additional interference components although they are not affected by the surface of the measuring object. In Fig. 5(b), the incident light rays show a relatively large angle of incidence with respect to the z-axis. However, all angles are below the maximum angle θ max defined by the NA of the system. The rays travel horizontally through the microsphere. Due to the symmetry of the arrangement the scattering angle equals the angle of incidence, i.e. θ s ¼ θ in . Hence, for these rays q x ¼ 0 and the corresponding q z -values are relatively small. In Fig. 5(c), parallel rays are incident on the left hand side. These rays are refracted at the boundary of the microsphere and are then internally reflected at the bottom of the sphere. After an additional refraction, they propagate in air and include relatively small angles with the z-axis, thus the corresponding q x -values are low and q z -values high. This is the arrangement belonging to the rainbow and, indeed, the rainbow ray 43 can be identified as the ray on the right hand side, which corresponds to the maximum scattering angle. The rays close to the rainbow ray form a caustic, and thus, in these regions high intensity values occur. A similar situation is shown in Fig. 5(d), which shows symmetry with respect to the z-axis, since all rays are internally reflected at the coordinate x ¼ 0, z ¼ −r inside the microsphere and, therefore, q x ¼ 0 for these rays. Figure 5(d) demonstrates that the incident and scattered rays again include rather small angles with the z-axes, such that no total internal reflection inside the microsphere occurs. Light incident at higher angles is no longer reflected at x ¼ 0, z ¼ −r. Therefore q z is quite high for the rays plotted in Fig. 5(d).
Experimental Results
The interferometric measurement data is acquired through a stepwise depth scan with a step height of 20 nm performed by a piezo scanner. This results in a measuring process known from CSI, however, additionally utilizing microsphere assistance. The measurement object is the RS-N resolution standard by SiMETRICS. Mainly the 300-and 600-nm grating structures with nominal depths of 140 and 160 nm, respectively, are examined. While performing CSI measurements, the height information of the specimen's surface is encoded in the phase of the interference signals. To gain a better understanding of the phase modulation occurring in interferometric measurement, xz-cross sections of (offset-free) interference images acquired through a depth scan are shown in Fig. 6. The results shown in Figs. 6(a) and 6(b) are obtained from the 300-and 600-nm grating structure of the RS-N standard using microsphere assistance, i.e., placing a microsphere directly on the grating structure. For comparison, Fig. 6(c) shows how the image stacks recorded for a mirror instead of a grating leads to highest contrast of the modulation in z-direction of the interference signals centered around the virtual image plane. It is worthy to note that the position on the z-axis is related to the range of the depth scan, which is typically 5-to 10-μm long. The virtual image plane is placed approximately in the middle of this range and z ¼ 0 denotes the starting position of the scan range. In x-direction, the phase modulation introduced by the surface of the measured grating structures is visible in Figs. 6(a) and 6(b). The interference signals can be analyzed using envelope-and phase retrieving algorithms to reconstruct the grating structure of the object as previously shown. 42 In addition, the resolution enhancement through the imaging process also an additional magnification is introduced by the microspheres.
For comparability with the results shown in Secs. 3 and 4, the spatial frequency domain representation of the interference data is analyzed as introduced beforehand. First, the data are preprocessed by means of Blackman-windowing as well as zero-padding to exclude contributions occurring besides the microsphere and to improve the spatial frequency resolution, respectively. The data are further offset reduced. Cross sections of the spatial frequency representation in the q x q z -plane are shown in Fig. 7
accordingly.
As it is shown in Eq. (5) and illustrated in Fig. 4, for a one-dimensional grating structure imaged by an interference microscope, the diffraction orders are visible as corresponding sharp lines at q x ¼ const: in the 3D spatial frequency representation. The additional magnification factor introduced by the microsphere was determined to be M ¼ 1.4 for our setup. Thus, for the grating of 300-nm period the q x -value corresponding to the first-order diffraction maximum is shifted by the mircosphere from q x;a ≈ AE21 μm −1 in Fig. 4(c) to q x;a ≈ AE15 μm −1 according to Fig. 7(a), since the period length Λ is multiplied by M. Consequently, the 600-nm grating magnified by the microsphere leads to first-order diffraction maxima at q x;a ≈ AE7.5 μm −1 as it is displayed in Fig. 7(b). Finally, for the mirror no diffraction order except for the zero-order located at q x ¼ 0 appears, as shown in Fig. 7(c).
Further, the transfer behavior of the microsphere-assisted setup differs compared to an interference microscope without a microsphere. The size of the field of view limited by the microsphere significantly influences the intensity distribution in the 3D spatial frequency domain. According to Eq. (2), the field of view corresponds to the area of integration described by Aðx; yÞ. This leads to a convolution in the 3D spatial frequency domain of the diffraction pattern of the grating with an Airy-disk function (implying a circular shaped field of view). The smaller the field of view Aðx; yÞ the broader is this Airy-disk and the corresponding blurring of the diffraction maxima. As stated by Sheppard, 44 broadening of diffraction orders due to a small field of view has a significant influence on the resolution capabilities of a system. Due to optical aberrations introduced by the microsphere and the Blackman window used to extract the relevant lateral and axial range in the spatial domain, the frequency response is additionally affected. As a result, a broadening of the discrete intensity pattern, i.e., the sharp lines at certain q x -values initiated by diffraction at the grating structure, occurs. This becomes apparent by comparison of Figs. 7(a), 7(b) and 4. In addition, the light diffracted by the grating, additional intensity contributions in Fig. 7 can be attributed to the microsphere itself, e.g., the intensity maximum at q x ¼ 0, q z ≈ 15 μm −1 , which corresponds to the situation according to Fig. 5(b) and the higher frequency rippling, which is a consequence of the increased scattered light intensity under the rainbow angle shown in Fig. 5(c).
Rigorous Simulations
To investigate further influences on the transfer behavior of the microspheres as well as the influence of photonic nanojets, rigorous simulations were performed, which are presented in the following.
Simulation of the Complete Imaging Process
The measured data are compared to simulation results of the imaging process, where the lightsurface interaction considering the microsphere is based on rigorous FEM computation of the electric field distribution. The transfer characteristics of the interference microscope as well as the phase shifts introduced by the depth scan are considered by filter operations using Fourier optics modeling. The combined model enables a full 3D simulation of illumination and diffraction at 2D periodic surfaces and provides accurate results compared with CSI measurements. 45,46 In this study, the model is extended considering a microsphere with a radius of r ¼ 2.5 μm and a refractive index of n ¼ 1.5, which is placed directly on the specimen. For computational reasons, microspheres are arranged in a periodic manner with a period length L x ¼ 13.2 μm. Due to computational and time constraints, the simulation is performed on 2D surface structures and the microsphere is approximated by a microcylinder, as cylindrical microelements were shown to enhance the lateral resolution capabilities too. 4,47,48 Since we are interested in general effects of microsphere-induced resolution enhancement, it is reasonable not to use exactly the same configuration for measurement and simulation. With respect to Koehler illumination, it can be assumed that the illumination of the specimen is composed of individual incoherent partial plane waves. These illuminate the specimen from all angles which cover the aperture of the objective. To realize the conical illumination, discretization is performed over these angular values and rigorous simulations of the near field are conducted using plane wave illumination for each discrete incident angle. For each simulated near field, the far-field expansion and Fourier optics modeling of the imaging processes in the microscope are performed. Afterwards, the interference intensity distribution considering the reference field is calculated for each set of incident angles and the final incoherent summation of the results is done. 46 Figure 8 shows extracts of offset reduced interference image stacks in the presence of a microsphere simulated with TM polarized monochromatic light of λ ¼ 440 nm and NA ¼ 0.9. Due to the high NA value, the light source can be assumed as monochromatic, since the influence of temporal coherence is negligible for single-colored LEDs. The underlying structure is a sinusoidal grating with PV height of 25 nm and period lengths corresponding to those of the measured results shown in Fig. 6. To avoid additional effects such as multiple scattering and edge diffraction, the height is chosen to be smaller compared to the measured profile.
Considering Figs. 6 and 8, significant key characteristics can be compared. In both figures, the grating structure appears in the phase modulation of the interference signals. For the experimental results, the phase modulation is more pronounced in amplitude, since a rectangular grating (RS-N standard) was used. Similar to the measurement result, the simulated interference signals obtained from a flat mirror [ Fig. 8(c)] do not show a grating-dependent lateral phase modulation. Detailed comparison of simulated and measured results exhibits deviations. These are mainly due to the fact that results measured using a sphere are compared to simulation results obtained from a cylinder because of the computational effort mentioned above. Further, the sphere is arbitrarily placed on the grating and thus at different positions with respect to the grating structure in simulations. Additionally, the spheres have a different radius compared to the cylinder leading to further deviations. Nevertheless, the qualitative comparison between simulated and experimental results demonstrates that in principle the simulation model reflects the relevant physical mechanisms in microsphere-assisted interferometry.
To analyze the interference signals in more detail, Fig. 9 shows the representation in q-space. Comparing simulation and measurement results, the diffraction orders are blurred in the measurement results. The intensity obtained from a plane mirror [ Fig. 9(c)] shows an additional periodic modulation of the intensity distribution for larger q z and small q x values. This is probably a consequence of the rainbow effect explained in Sec. 4, since further simulation results turn out that the modulation period behaves anti-proportional to the microsphere's diameter. This phenomenon is not as clearly observed in the measurement results obtained from a plane mirror, but the differences can be explained by the use of microcylinders in the simulation instead of spheres. However, a slight modulation can be observed even in the measurement results, especially in the case of Figs. 7(a) and 7(b). It is worthy to note that due to the sinusoidal grating structure with a relatively low PV amplitude of 25 nm, the intensity of the blurred diffraction orders is not as pronounced as in the measurement results. Furthermore, in both, measurement and simulation, an intensity contribution at q x ¼ 0 for small values of q z is visible with and without a grating structure. This can be assigned to the rays traveling horizontally through the microcylinder as it is shown in Fig. 5(b) and confirmed by simulations assuming a microcylinder in free space. The high intensity for large q z values and q x ≈ 0 follow from the rays depicted in Fig. 5(d), where rays of oblique incidence and scattering angles inside the sphere are refracted such that above the sphere smaller angles and hence larger values of q z result.
In sum, the rigorous simulation model reproduces the major effects occurring in measurement results. Artefacts observed in the 3D spatial frequency domain can be assigned to general cases of ray tracing. Thus, the q-space representation is shown to be quite useful to analyze the physical mechanism introduced by microspheres. Furthermore, diffraction orders belonging to the measured grating structure under the sphere lead to smaller q x and larger q z values due to the magnification of the sphere as supposed by Hüser and Lehmann. 28,42 This effect is now confirmed by simulations of grating structures of different period.
Studies on Photonic Nanojets
Rigorous simulations were performed to study the influence of photonic nanojets. In this case, the results are obtained with 2D FEM simulations of the electromagnetic field distribution. Only perpendicular incidence of a monochromatic plane wave under an angle θ in ¼ 0 is considered (λ ¼ 440 nm). The phenomenon of photonic nanojets occurring on the back surface of microspheres or microcylinders was widely studied. 19,49 However, these investigations do not include the configuration of an interferometer in reflection mode as it is studied here. Therefore, an additional simulation with a microcylinder on a grating (period length 300 nm) is carried out. The intensity distributions obtained with and without the grating are shown in Fig. 10. (a) (b) Fig. 10 Simulation results for a microcylinder illuminated by a monochromatic plane wave for the cases (a) with a diffraction grating (300-nm period length) directly below the cylinder and (b) in air surrounding.
When comparing the results from Figs. 10(a) and 10(b), the photonic nanojet obviously disappears for the case of the grating located directly below the microcylinder. The specimen is placed at the location, where the nanojet in air occurs. Hence, the generation of a nanojet will be disturbed, and instead, an interaction of the corresponding electromagnetic waves with the specimen's surface takes place. Furthermore, the case of near grazing incidence below the microcylinder shown in Fig. 5(a) arises. The light diffracted from the grating can contribute to the imaging process and thus high aperture imaging occurs even if MO lenses of low NA are used.
Discussion
To analyze the resolution enhancement, we compare both, experimental and simulated interference image stacks in the spatial and the three-dimensional spatial frequency domain. This methodology provides advantages as it explains the occurring phenomena based on light rays traveling under different angles with respect to the optical axis and thus enables comparisons with ray-tracing computations and rigorous simulations as well. In 3D Fourier space effects occurring through the microsphere itself independently of the specimen can be separated from effects introduced by the interaction of light and the specimen via the microsphere. Microspheres shift the intensity diffraction maxima of a grating to lower spatial frequencies. As a consequence, the central wavelength of the resulting interference signals is significantly reduced if microspheres are used. In combination with ray-tracing and rigorous simulation results, we conclude that the most dominant effect, which arises from the microspheres, can be viewed as an effective enlargement of the NA of the optical system.
Conclusion
Although microsphere-assisted resolution enhancement is frequently used in optical imaging and 3D microscopy, the physical mechanism behind this phenomenon is still not completely understood. The most frequently mentioned explanation attempts refer to the enhancement of the NA, the collection of evanescent waves, the photonic nanojet effect, and the excitation of whispering gallery modes.
The methodologies presented in this article can be applied to several configurations of microsphere-assisted interferometry and microscopy. Analysis in the 3D spatial frequency domain gives valuable insight into the transfer behavior of optical systems with different NA and other optical properties. The rigorous simulation model represents a complete computational approach of the imaging process through the microsphere. This method enables a more specific look on relevant mechanisms responsible for the transfer behavior of microsphere-assisted interferometry. With this model, e.g., the influences of whispering gallery modes and evanescent waves on the imaging process can be elaborated further.
Since we use 3D microscopy in reflection mode, we suppose direct reflection and diffraction of propagating waves to be dominant and thus implications of evanescent waves and whispering gallery modes to be negligible. Therefore, our investigation suggests that the most likely mechanism is the enhancement of the NA, which is close to 1 in case of microsphere assistance, in combination with the rather limited field of view under the microsphere. The fact that higher magnifications than M ¼ 1.4 could not be obtained with our experimental configuration is one of the facts that led us to conclude that the numerical aperture must be increased effectively by the microsphere and finally limits the resolution of microsphere-based interferometry. For our microscope with an NA of 0.9 a resolution enhancement of 11% is achieved using microspheres. Since the occurrence of photonic nanojets also relies on the coherent superposition of waves propagating under higher angles with respect to optical axis the physical origins of nanojets and NA enhancement are closely related to each other. | 7,791 | 2022-10-01T00:00:00.000 | [
"Physics"
] |
EAS longitudinal development distribution parameters for different extrapolations of the nuclei intaraction cross section to the very high energy domain
Determination of the primary particle mass using air fluorescence or a Cherenkov detector array is one of the most difficult task of experimental cosmic ray studies. The information about the primary particle mass is a compound of the produced particle multiplicity, inelasticity, interaction cross-section and many other parameters, thus it is necessary to compare registered showers with sophisticated Monte-Carlo simulation results. In this work we present results of the studies of at least three possible ways of extrapolating proton- Nucleus and Nucleus-Nucleus cross sections to cosmic ray energies based on the Glauber theory. They are compared with experimental accelerator and cosmic ray data for the proton-air cross section. We also present results of the EAS development with the most popular high-energy interaction models adopted in the CORSIKA program with our cross section extrapolations. The average position of the shower maximum and the width of its distribution are compared with experimental data and some discussion is given.
Introduction
The hadronic interaction cross section is one of the parameters playing a major role during an Extensive Air Shower development. Calculations of hadronic cross sections at ultra high energies require their extrapolation from lower energies where accelerator data are available. Such extrapolations forces the building of a phenomenological models based e.g. on Glauber diffraction theory [1,2]. Glauber approximation consists of introducing an eikonal function, χ, representing all phase-shifts related to all possible scattering acts. Eikonal χ(b) also represents the opacity of two colliding objects. The scattering amplitude in impact parameters space is given by: Finally, a knowledge on the form of hadron matter distribution allows for calculations of elastic, inelastic and total cross sections.
Proton -Nucleus scattering
The essence of the Glauber approximation is the natural assumption that the resulting amplitude phase shift of the collision is the sum of all possible A individual nucleonnucleon phase shifts. The scattering amplitude is where the function ϕ is the wave-function of the nucleus with the nucleons distribution given by {d}. This general formula is a subject of subsequent approximations leading to the set of consecutively simpler equations for the collision cross sections. We assume that there is no space correlation between nucleons. Using a universal nucleon distribution within the nucleus ρ we have with the following normalization: ρ j (r j )d 3 r j = 1 . The next quite obvious approximation is that individual sub-collisions are the same, having universal nucleonnucleon phase-shifts dependences χ. With this we have On the other hand, the scattering process can be treated as a single collision process with its own nuclear phase shift χ opt (b) The comparison of Eqs. (7) and (8) leads to the relation between the opacity for the nucleus and the single nucleon: To calculate the integral in Eq. 9 we used the fluctuation of the nucleus shape ρ A (d) adopted from Lund model [3] in the form of Woods-Saxon (2-parameter Fermi) distribution leading us to following form of eikonal function:
Big nucleus, point nucleon approximation
We can use the approximation that the number of nucleons in nuclei is relatively big. The number of nucleons (A) can go to infinity keeping the nucleus opacity constant (normalisation). Opacity of the nucleon is the sum of many (small, point-like) scattering centers. Using the optical theorem we obtain: This can be substituted in Eq. 4 and the inelastic cross section can be described by:
Nucleus -Nucleus scattering
The treatment of nucleon-nucleus presented above can be extended to the case of nuclei collisions with the amplitude defined as Using Eq.(8) we can define the overall nucleus-nucleus opacity Thus the nucleus-nucleus scattering amplitude is, with the analogy to Eq.(8) The "big nucleus" and "point nucleon" approximation can also be used in this case leading to
Probabilistic framework
One of the existing ways in the literature to describe nucleus-nucleus interactions is the probabilistic formalism (see, e.g., [4]). It assumes that the collision between individual nucleons of colliding nuclei are not correlated and do not interfere with each other. If there are AB pairs of nucleons which could take part in the interaction the probability of having n inelastic interactions is The summation over n can be performed and the result with the integration over b gives the value of the so-called "production cross section": which is quite similar to the result in Eq.(16), but with σ inel in place of σ tot .
Calculated σ inel Cross-Sections
We present inelastic cross sections calculated using three previously discussed ways. Cross sections for p-Air and Fe-Air collisions are presented separately in Figs 1 and 2 respectively. In the case of p-Air cross-section, results are presented with available experimental data. Results in both cases are compared with cross-section values currently existing in three high-energy interaction models CORISKA -EPOS-LHC [6], Sibyll 2.3c [7,8] and QGSJETII-04 [9].
The basis of the approach to the calculation of inelastic Proton-Nucleus and Nucleus-Nucleus cross sections is more precisely described in section 2 [10].
Simulations of X max parameter
We used the calculated σ inel cross section in the COR-SIKA program for simulations of the longitudinal development of an EAS. Simulations were performed using three quite popular high-energy intraction models -EPOS-LHC, QGSJETII-04 and Sibyll 2.3c. As a result we present plots of X max position versus primary energy for each case compared with results for cross sections currently existing in CORSIKA and experimental data (Fig. 3). Additionally we present RMS parameters of calculated values. The following plots concerns EAS simulations with modified σ inel pN and σ inel NN cross sections, not modifications of high energy models. All simulations have been performed using the NKG lateral disribution function. Simulations in 4 points on energy scale consist of the following statistics: 10 11 GeV -100 showers, 10 10 GeV -250 showers, 10 9 GeV -500 showers, 10 8 GeV -500 showers. Results are compared with experimental data [11][12][13][14] (Fig. 3).
The presented results of X max parameters concerns simulations with new inelastic cross sections instead of those originally existing in the EPOS-LHC, QGSJETII-04 and Sibyll 2.3c models. Implementation of our cross sections was only one modification made in CORSIKA for these simulations.
Conclusions
We present revised results of inelastic cross-sections (σ inel ) calclations based on the Glauber diffraction theory and point-nucleon and probabilistic approximations. Differences between results of Glauber theory and approximations are increasing with energy. For all discussed models our σ inel in the Glauber case are a little bit higher than commonly used in CORSIKA (Fig. 1). It forces faster EAS development than we can observe in results of X max simulations (Figs. 3 -5). Simulations with new Glauber σ inel provides a better agreement with measured X max for protons as the primary particle. Simulated X max values have been obtained using CORSIKA with NKG option only. In future the presented σ inel cross sections will be implemented in CONEX [15] code and simulations will be repeated with EGS and higher statistics. Exact determination of nuclear cross-section is very difficult because of a lack of knowledge about nucleus hadron matter distribution especially at the highest energies. This kind of consideration is very important for correct interpretation of cosmic ray data. | 1,688.4 | 2019-01-01T00:00:00.000 | [
"Physics"
] |
The role of aerodynamic forces in a mathematical model for suspension bridges
In a fish-bone model for suspension bridges studied by us in a previous paper we introduce linear aerodynamic forces. We numerically analyze the role of these forces and we theoretically show that they do not influence the onset of torsional oscillations. This suggests a new explanation for the origin of instability in suspension bridges: it is a combined interaction between structural nonlinearity and aerodynamics and it follows a precise pattern. This gives an answer to a long-standing question about the origin of torsional instability in suspension bridges.
Introduction
Since the Federal Report [1], it is known that the crucial event causing the collapse of the Tacoma Narrows Bridge was a sudden change from a vertical to a torsional mode of oscillation. Several studies were done on this topic, see [8,11,14] but a full explanation of the origin of torsional oscillations is nowadays still missing; see also the updated monograph [15] and references therein. In two recent papers the onset of torsional oscillations was attributed to a structural instability. In [2] a model of suspension bridge composed by several coupled (second order) nonlinear oscillators has been proposed. By using suitable Poincaré maps, it has been proved that when enough energy is present within the structure a resonance may occur, leading to an energy transfer between oscillators, from vertical to torsional. The results in [2] are purely numerical. We found a similar answer in [3] by analyzing a different mathematical model, named fish-bone. In this model, the main span of the bridge, which has a rectangular shape with two long edges and two shorter edges, is seen as a degenerate plate fixed and hinged between the towers. The midline of the roadway is seen as a beam with cross sections that are seen as rods free to rotate around their barycenters located on the beam. The degrees of freedom are the vertical displacement of the beam y, which is positive in the downwards direction, and the angle θ of rotation of the cross sections with respect to the horizontal position. The roadway is assumed to have length L and width 2 with 2 L. By considering the kinetic energy of a rotating object and the bending energy of a beam, the following system is obtained in [3]: M y tt +EIy xxxx +f (y+ sin θ)+f (y− sin θ) = 0 0 < x < L, t > 0, where M is the mass of the rod, µ > 0 is a constant depending on the shear modulus and the moment of inertia of the pure torsion, EI > 0 is the flexural rigidity of the beam, f includes the restoring action of the prestressed hangers and the action of gravity. To (1) we associate the boundary-initial conditions For a linear force f the two equations in (1) decouple: this case was studied in [10]. In the nonlinear case, well-posedness of the problem was shown in [6]. For a suitable nonlinear f , in [3] we gave a detailed explanation of how internal resonances occur in (1), yielding instability. The aim of this analysis was purely qualitative and the bridge was seen as an isolated system with no dissipation and no interactions with the surrounding air. In particular, both theoretical and numerical results were given proving that the onset of large torsional oscillations is due to a resonance which generates an energy transfer between different oscillation modes. More precisely, when the bridge is oscillating vertically with sufficiently large amplitude, part of the energy is suddenly transferred to a torsional mode giving rise to wide torsional oscillations. Estimates of the energy threshold for stability were obtained both theoretically and numerically, see Section 2.
Our purpose in [3] was to emphasize the structural behavior of the bridge without inserting any interaction with the surrounding air. This procedure was also followed by Irvine [7, p.176] who comments his own approach by writing: In this formulation any damping of structural and aerodynamic origin has been ignored... We could include aerodynamic damping which is perhaps the most important of the omitted terms. However, this refinement, although frequently of significance, yields a messy flutter determinant that requires a numerical solution.
This comment says two things. First, that it was a good starting point to study (1) as an isolated system. Second, that, in order to have more accurate responses, the subsequent step should be to insert aerodynamic forces in the model. This refinement of the model (1) was also suggested to us by Paolo Mantegazza, a distinguished aerospace engineer at the Politecnico of Milan, and motivates the present paper. In order to better highlight the role of the aerodynamic forces, we do not insert in the model any other external action. We will show, both numerically and theoretically, that the threshold of instability of the system is independent of aerodynamic forces. This suggests a new pattern for the aerodynamic and structural mechanisms which give rise to oscillations in suspension bridges, see Section 6.
One mode approximation of the fish-bone model
We first introduce some simplifications of the model which, however, maintain its original essence and its main structural features. First, up to scaling we may assume that L = π and M = 1. Then, since we are willing to describe how small torsional oscillations may suddenly become larger ones, we use the following approximations: cos θ ∼ = 1 and sin θ ∼ = θ; see [3] for a rigorous justification of this choice. Since our purpose is merely to describe the qualitative phenomenon, we may take EI π L 4 = 3µ π L 2 = 1 although these parameters may be fairly different in actual bridges. For the same reason, the choice of the nonlinearity is not of fundamental importance; it is shown in [2] that several different nonlinearities yield the same qualitative behavior for the solutions. Whence, as suggested by Plaut-Davis [12, Section 3.5], we take f (s) = s + s 3 . Finally, we set z := θ and the system (1) becomes y tt + y xxxx + 2y(1 + y 2 + 3z 2 ) = 0 0 < x < π, t > 0, To (4) we associate some initial conditions which determine the conserved energy E of the system, that is, Existence and uniqueness of solutions were proved in [6] by performing a suitable Galerkin procedure, see also [3] where more regularity was obtained. The proof is constructive: to (4) we associate the functions and the approximated m-mode system where j = 1, ..., m. Then, suitable a priori estimates allow to prove that ) as m → +∞ for all T > 0. Hence, the functions in (5) approximate the solutions of (4). The error committed when replacing y with y m and z with z m can be rigorously estimated, see [3,Theorem 2].
In what follows we focus our attention to the simplest case m = 1. Then, system (6) reads with some initial conditions If we take ζ 0 = ζ 1 = 0, then the unique solution of (7)-(8) is (y 1 , z 1 ) = (ȳ, 0) withȳ =ȳ(η 0 , η 1 ) being the unique (periodic) solution of We callȳ the first vertical mode with associated energy Since we are interested in the stability of the solution z 1 ≡ 0 corresponding to ζ 0 = ζ 1 = 0, we linearize system (7) around (ȳ, 0). The torsional component of the linearized system is the following Hill equation [5]:ξ We say that the first vertical modeȳ at energy E(η 0 , η 1 ) is torsionally stable if the trivial solution of (11) is stable. By exploiting a stability criterion by Zhukovskii [16], in [3] we obtained the following theoretical estimates. Proposition 1. The first vertical mode y at energy E(η 0 , η 1 ) (that is, the solution of (9)) is torsionally stable provided that Proposition 1 gives a sufficient condition for the torsional stability. The numerical results obtained in [3] show that the threshold of instability could be larger. We quote a couple of them in Figure 1. We plot the solution of (7) with initial conditions for different values of y 1 ∞ . The green plot is y 1 and the black plot is z 1 . For y 1 ∞ = 1.45 no 3 How to introduce the aerodynamic forces into the model?
Even in absence of wind, an aerodynamic force is exerted on the bridge by the surrounding air in which the structure is immersed, and is due to the relative motion between the bridge and the air.
Pugsley [13, § 12.7] assumes that the aerodynamic forces depend linearly on the "cross" derivatives and functions. Similarly, Scanlan-Tomko [14] obtain the following equations satisfied by the torsional angle θ: where I, ζ θ , ω θ are, respectively, associated inertia, damping ratio, and natural frequency. The r.h.s. of (13) represent the aerodynamic force which is postulated to depend linearly on bothθ and θ with A, B > 0 depending on the structural parameters of the bridge. Let us mention that the arguments used in [14] to reach the l.h.s. of (13) have been the object of severe criticisms (see [9]), due to some rough approximations and questionable arguments. Nevertheless, the r.h.s. of (13) is nowadays recognized as a satisfactory description of aerodynamic forces. Following these suggestions, we insert the aerodynamic forces in the 1-mode system (7). We first consider the case where only the cross-derivatives are involved. This leads to the following modified system: ÿ 1 + 3y 1 + 3 2 y 3 1 + 9 2 y 1 z 2 1 + δż 1 = 0 z 1 + 7z 1 + 9 2 z 3 1 + 27 2 z 1 y 2 1 + δẏ 1 = 0 with δ > 0. As in (12), we take the initial conditions for different values of σ and we wish to highlight the differences, if any, between (7) and (14). For (14) we have no energy conservation; however, let us consider the (variable) energy function Let us now consider the case where also the cross-terms of order 0 are involved. Then, instead of (14) we obtain the system ÿ 1 + 3y 1 + 3 2 y 3 1 + 9 2 y 1 z 2 1 + δ(ż 1 + z 1 ) = 0 z 1 + 7z 1 + 9 2 z 3 1 + 27 2 z 1 y 2 1 + 3δ(ẏ 1 + y 1 ) = 0 where the coefficient 3 in the second equation comes from the variation of the energy Also for (17) we do not have energy conservation but the function E in (18) better approximates the internal energy. It may be questionable whether to include the last term δy 1 z 1 into E since this term depends on the aerodynamic forces. However, the behavior of E, which we analyze in the next section, does not depend on the presence of this term.
Numerical results
For (14) we first take σ = 1.47 and we modify the aerodynamic parameter δ. To motivate this choice we note that no energy transfer seems to occur for σ below this threshold, furthermore σ = 1.47 is also the numerical threshold found when no aereodynamic force is inserted in the model, see Section 2. In Figures 2 and 3 we plot both the behavior of the solutions (first line) and the behavior of the energy E(t) (second line), for increasing values of δ. The first lines in Figures 2 and 3 should be compared with the second picture in Figure 1 (case δ = 0). We note that, as the aerodynamic parameter increases, the transfer of energy is anticipated but it is not amplified. Quite surprisingly, on the second line we see that the energy E(t) remains almost constant except in the interval of time where the transfer of energy occurs: for increasing aerodynamic parameters δ we observe increasing variations in the energy behavior.
Then we maintain fixed δ = 0.01 and we increase the initial energy, that is, the initial amplitude of oscillation. In Figures 4 and 5 we plot both the behavior of the solutions (first line) and the behavior of the energy E(t) (second line), for increasing values of σ.
It turns out that all the phenomena are anticipated (in time) and amplified (in width) and reach a quite chaotic behavior for σ = 3 where we had to stop the numerical integration at t = 90. For (17) we take again as initial conditions (15) but with σ ≥ 1.47 so that we are above the energy threshold of instability, see Section 2. In Figure 6 we plot both the behavior of the solution (first line) and the behavior of the energy E(t) (second line) of (17)- (15).
It is quite visible that the instability is further anticipated but now also the amplitude is enlarged. Moreover, the energy increases also in absence of torsional instability: this variation is due to the cross-derivatives since all the other terms appear in the energy (18). The very same behavior is obtained for the internal energy, namely the energy (18) without the last term δy 1 z 1 . We also remark that the energy E fails to follow a regular pattern only in presence of instability. We only quote these numerical results because all the other experiments gave completely similar responses.
Theoretical results
As pointed out by Irvine [7, p.176], the numerical approach is probably the most appropriate to analyze a model which also involves aerodynamic forces. The reason is that a satisfactory stability theory for systems such as (14) and (17) is not available. Nevertheless, some theoretical conclusions can be drawn also for these systems, in the spirit of the results obtained in [3] (see also Section 2), where the main idea is to study the solutions of the systems near the pure vertical modeȳ. Here "pure" means that no interactions with the surrounding are admitted and only the structural behavior of the bridge is considered. Let us explain how some of the results for the isolated system (7) may be extended to (14); one can then proceed similarly for (17).
For system (7), two steps are necessary to define the torsional stability of the unique (periodic) solutionȳ of (9) (the pure mode): (i) we linearize the torsional equation of the system (7) around (ȳ, 0), see (11); (ii) we say that the pure vertical modeȳ at energy E(η 0 , η 1 ) is torsionally stable if the trivial solution of (11) is stable.
We point out that the system (7) is isolated and that (11) is unforced. In this situation, the above steps (i)-(ii) are equivalent to: (I) in the torsional equation of the system (7) we drop all the z 1 -terms of order greater than or equal to one and we replace y 1 withȳ; (II) we say that the pure vertical modeȳ at energy E(η 0 , η 1 ) is torsionally stable if all the solutions of (11) are globally bounded.
If we replace (7) with the system (14), then (i)-(ii) make no sense while (I)-(II) do. A linearization as in (i) would exclude the aerodynamic forces, while acting as in (I) preserves them and gives rise to whereȳ is the unique periodic solution of (9). Needless to say, f and a have the same period. The definition (ii) is inapplicable to (19) since ξ ≡ 0 is not a solution, while (II) is a verifiable property for (19). This is the definition of stability that we adopt for the vertical modeȳ in presence of aerodynamic forces. With this definition we can prove the following statement.
Proposition 2. Let y be the pure vertical mode at energy E(η 0 , η 1 ), that is, the solution of (9). Then y is torsionally stable for (7) if and only if it is torsionally stable for (14).
Assume now thatȳ is torsionally stable for (14). Then ξ is bounded for any choice of ξ h , that is, any choice of the constants A and B in (20). This shows that also ξ h is bounded and proves the stability ofȳ for (7). 2 This statement deserves a couple of straightforward comments.
• In agreement with the numerical results described in Section 4, Proposition 2 shows that the energy threshold for stability does not depend on the strength of the aerodynamic forces; in particular, an isolated system has the same energy threshold.
• By applying Propositions 1 and 2 we infer that the energy threshold for the stability of (14) is at least 235 294 .
Conclusions
It is clear that in absence of wind or external sources a bridge remains still. A vertical load, such as a vehicle, bends the bridge and creates a bending energy. Less obvious is the way the wind inserts energy into the bridge: let us outline how this happens. When a fluid hits a bluff body its flow is modified and goes around the body. Behind the body, or a "hidden part" of the body, the flow creates vortices which are, in general, asymmetric. This asymmetry generates a forcing lift which starts the vertical oscillations of the body. Up to some minor details, this explanation is shared by the whole community and it has been studied with great precision in wind tunnel tests, see e.g. [8,15]. The vortices induced by the wind increase the internal energy of the structure by generating wide vertical oscillations. When the amount of energy reaches a critical threshold our results in [3] show that a structural instability appears: this is the onset of torsional oscillations. The results in the present paper show that, at this stage, the aerodynamic forces excite the internal energy irregularly giving rise to further self-excited oscillations.
The whole energy-oscillation mechanism is here described through a very simplified model which certainly needs to be significantly improved. But, at least qualitatively, we believe that the "true" mechanism in a suspension bridge will follow this pattern: 1) the interaction of the wind with the structure creates vortices; 2) vortices create a lift which starts vertical oscillations of the bridge; 3) when vertical oscillations are sufficiently large, torsional oscillations may appear; 4) the onset of torsional instability is of structural nature; 5) the aerodynamic forces excite the energy only when the structural torsional instability appears; 6) the energy threshold of stability is independent of the strength of aerodynamic forces. | 4,361 | 2014-09-05T00:00:00.000 | [
"Physics",
"Engineering"
] |
Dielectric-breakdown tests of water at 6 MV
EGG published 30 January 2009)We have conducted dielectric-breakdown tests on water subject to a single unipolar pulse. The peakvoltages used for the tests range from 5.8 to 6.8 MV; the effective pulse widths range from 0.60 to 1:1 s;and the effective areas tested range from 1:8 10
Reference [20] proposes that the characteristic time delay delay between the application of a voltage to a water-insulated anode-cathode gap, and the completion of dielectric failure of that gap, can be approximated as follows: In this expression stat is the statistical component of the delay time; i.e., the characteristic time between the application of the voltage and the appearance of free electrons and ions that initiate the formation of streamers in the water.We define form to be the formative component: the time required for the streamers to propagate across the gap and evolve sufficiently to produce complete dielectric failure.
To inhibit electrical breakdown, water-insulated components are usually designed to produce a nominally uniform electric field over most of the component's area.We assume that, when the area of a water-insulated system with a uniform field is sufficiently large, the appearance of free electrons and ions necessary to initiate a breakdown occurs somewhere in the system very early in the voltage pulse [20].Under this condition the statistical time delay stat can be neglected, and the breakdown delay is dominated by its formative component: In principle, dielectric breakdown dominated by the formative component can be studied with an electrode geometry that consists of a point anode and a planar cathode [20][21][22].Although measurements with an infinitely field-enhanced anode point and an infinitely extended flat cathode are not possible, a number of dielectric-breakdown measurements between a significantly field-enhanced anode electrode and a less-enhanced cathode have been described in the literature.
Using these measurements, Ref. [20] finds that complete dielectric failure is likely to occur in water between a fieldenhanced anode and a less-enhanced cathode when E p 0:330AE0:026 eff ¼ 0:135 AE 0:009: In this expression E p V p =d is the peak value in time of the spatially averaged electric field between the anode and cathode (in MV=cm, where V p is the peak voltage difference and d is the minimum distance between the electrodes), and eff is the temporal width (in s) of the voltage pulse at 63% of peak.This relation is based on 25 measurements for which 1 V p 4:10 MV, 1:25 d 22 cm, and 0:011 eff 0:6 s.
To develop a tentative design criterion for a large-area water-insulated system with a nominally uniform electric field, Ref. [20] further applies a safety factor to Eq. ( 3) by reducing the right-hand side by 20%: E p 0:330 eff 0:108 when A * 10 4 cm 2 ; (4 where A is the effective area of the system.Equation ( 4) assumes that the area of the system is sufficiently large to have a negligible statistical time delay, and hence that the breakdown delay is dominated by the formative component.Both Eqs. ( 3) and ( 4) assume that voltage pulses of interest have normalized time histories that are mathematically similar; under this condition, eff / delay $ form .
PHYSICAL REVIEW SPECIAL TOPICS -ACCELERATORS AND BEAMS 12, 010402 (2009) 1098-4402=09=12(1)=010402( 5) 010402-1 Ó 2009 The American Physical Society In this article, we describe three tests of Eq. ( 4), which we believe are the first performed under the following simultaneous conditions: (i) peak voltage !4:10 MV, (ii) AK gap !22 cm, (iii) effective pulse width !0:6 s, and (iv) effective anode area ! 10 4 cm 2 .Two of the tests were conducted on the 36-module ZR accelerator; the third was conducted on the Z-20 machine, which is a single ZR module used for component development.
All the tests were performed on one or more waterinsulated intermediate-storage capacitors.A crosssectional view of a single capacitor is presented by Fig. 1.The capacitor includes two coaxial electrodes.The inner radius of the outer electrode is 99 cm; the outer radius of the inner electrode is 56 cm; and the anodecathode gap is 43 cm.The total effective area of the anode of a single capacitor is 1:8 Â 10 5 cm 2 .The electric field at the anode is nominally uniform.The voltage across each capacitor was measured using the D-dot monitor described in Ref. [23].
The tests were conducted over the course of operating the ZR and Z-20 accelerators for various experiments, and were not performed on accelerator shots dedicated specifically to measuring the dielectric strength of water.Given the high voltages involved, large AK gaps, long pulse widths, and large areas, dedicated shots require a substantial investment of resources and hence are not readily conducted.For this reason, we report in this article results of tests that were performed during normal accelerator operation.
Results of the tests, along with those previously described in Ref. [20], are summarized by Table I.Two capacitors were used for the 5.8-MV test; the corresponding voltage pulses are plotted by Fig. 2. One capacitor was used for the 6.8-MV test; 20 were used for each of the five tests conducted at 6.1 MV.For the 6.8-and 6.1-MV tests, the voltage pulse applied to each capacitor was shortened by closing a switch that was connected to the capacitor's inner conductor.Correcting for the coaxial geometry, we find that the peak anode electric fields were 0.103, 0.121, and 0:108 MV=cm for the 5.8-, 6.8-, and 6.1-MV tests, respectively.
The values of E p 0:330 eff for all the tests are listed in Table I, and suggest that the results are consistent with Eq. ( 4).(The results are also consistent with the predictions of Woodworth and colleagues [26].) In addition to Eq. ( 3), other published water-dielectricbreakdown relations are considered; specifically [21,22,27,28]: I. Conditions under which dielectric breakdown of water is observed not to occur.Each of these five observations was made on a large-area (A ! 10 4 cm 2 ) water-insulated system with a nominally uniform electric field.The quantity V p is the peak voltage difference between the anode and cathode, d is the minimum distance between the electrodes, E p V p =d, and eff is the temporal width of the voltage pulse at 63% of peak.The last column assumes E p is expressed in MV=cm, and eff in s.The Maxwell-Lab measurements were performed on a capacitor with coaxial electrodes that had outer and inner radii of 60 and 48 cm, respectively [22,24].The peak field E p given for the Maxwell measurements is that at the outer conductor (which was the anode), and is corrected for the coaxial geometry.The peak fields of the tests described in the present article are similarly corrected.The observations summarized in the table are consistent with the design criterion given by Eq. ( 4).
FIG. 1 .
FIG. 1. (Color) Cross-sectional view of a ZR-accelerator intermediate-storage capacitor.The two electrodes have outer and inner radii of 99 and 56 cm, respectively.
TABLE II .
Comparison of the results summarized by Table I with the predictions of Eqs. ( ). FIG. 2. (Color) Time histories of the voltage applied to the two ZR intermediate-storage capacitors that were used for the 5.8-MV test.DIELECTRIC-BREAKDOWN TESTS OF . . .Phys.Rev. ST Accel.Beams 12, 010402 (2009) 010402-3 | 1,684.2 | 2009-01-30T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Critical exponents of the order parameter of diffuse ferroelectric phase transitions in the solid solutions based on lead germanate: studies of optical rotation
In this work we show that the critical exponents of the order parameter (CEOPs) of diffuse ferroelectric phase transitions (DFEPTs) occurring in lead germanate-based crystals can be determined using experimental temperature dependences of their optical rotation. We also describe the approach that suggests dividing a crystal sample into many homogeneous unit cells, each of which is characterized by a non-diffuse phase transition with a specific local Curie temperature. Using this approach, the CEOPs have been determined for the pure Pb$_{5}$Ge$_{3}$O$_{11}$ crystals, the solid solutions Pb$_{5}$(Ge$_{1-x}$Si$_{x}$)$_{3}$O$_{11}$ ($x = $0.03, 0.05, 0.10, 0.20, 0.40) and (Pb$_{1-x}$Ba$_{x}$)$_{5}$Ge$_{3}$O$_{11}$ ($x =$ 0.02, 0.05), and the doped crystals Pb$_{5}$Ge$_{3}$O$_{11}$:Li$^{3+}$ (0.005 wt. %), Pb$_{5}$Ge$_{3}$O$_{11}$:La$^{3+}$ (0.02 wt. %), Pb$_{5}$Ge$_{3}$O$_{11}$:Eu$^{3+}$ (0.021 wt. %), Pb$_{5}$Ge$_{3}$O$_{11}$:Li$^{3+}$, Bi$^{3+}$ (0.152 wt.~\%) and Pb$_{5}$Ge$_{3}$O$_{11}$:Cu$^{2+}$ (0.14 wt. %). Comparison of our approach with the other techniques used for determining the Curie temperatures and the CEOPs of DFEPTs testifies to its essential advantages.
Introduction
Lead germanate crystals Pb 5 Ge 3 O 11 (abbreviated hereafter as PGO) exhibit a proper second-order ferroelectric phase transition (PT) at the Curie temperature C ≈ 450 K [1]. At > C , the crystals belong to a hexagonal system (the point symmetry group6). The sixth-fold inversion symmetry axis vanishes at < C and the symmetry becomes trigonal (the point group 3).
Ferroelectric properties of PGO were discovered nearly 50 years ago. In spite of a long history of their studies, the crystals still attract a considerable attention of researchers [2][3][4][5][6]. Probably, this is partially due to the fact that PGO remains a unique example of materials where the PT is accompanied by the symmetry change6 ↔ 3. Moreover, this symmetry change is very convenient while studying the optical rotatory power. Indeed, the optical rotation in PGO can be directly measured for the light with wavelength propagating along the optic axis, with no accompanying linear optical birefringence. In addition, the PGO crystals represent a basis for a large family of solid solutions and doped crystals [7], whereas replacement of chemical elements in PGO or its doping can be achieved using fairly simple technological processes. At the same time, such solid solutions and doped crystals are attractive objects for the study of diffuse ferroelectric PTs (DFEPTs) -namely, ferroelectric PTs that do not have a point character, but occur in certain, more or less expressed, temperature intervals (diffusion regions) [8].
Below the Curie temperature, PGO becomes optically active due to the effect of electrogyration induced by spontaneous electric polarization [7]: where 33 is the gyration-tensor component, 333 is the spontaneous electrogyration coefficient and ( S ) 3 is the spontaneous polarization. The spontaneous polarization ( S ) 3 ≡ S represents the order parameter of the PT. It is linearly related to the optical rotation 3 ≡ , which can be defined as a specific rotation angle of the polarization plane of light [1][2][3][4][5][6][7]9]. Therefore, the studies of temperature dependence of the optical rotation, which can be considered as a spontaneous electrogyration, enable one to derive many characteristics of the PT. Despite diverse knowledge about the physical properties of PGO, some aspects of their critical behavior are still unclear. First of all, this applies to experimental determination of critical exponent of the order parameter (CEOP) . This parameter can be found from the temperature dependence of spontaneous polarization: In the framework of mean-field Landau theory for the proper second-order ferroelectric PTs, should be equal to 0.5. In 1972, Iwasaki et al. [1] analyzed the experimental dependence S = S ( ) for the PGO crystals and found that its behavior corresponds to the classical Landau theory [i.e., S ∝ ( C − ) 0.5 ] only in the region C − < 30 K below the point C = 450 K. Further on, Konak et al. [10] noted that the temperature dependence of the optical rotation in PGO is described by the "empirical" relation ∝ ( C − ) 0.35 in the region C − > 3 K (with C = 450 K). In other words, although represents a secondary order parameter of the PT, its behavior differs significantly from that predicted by the Landau theory. In 1999, Trubitsyn et al. [11] showed that the splitting parameter Δ of one of the spectral EPR lines of probing Gd 3+ ions in PGO behaves as a "local" order parameter of the PT according to the Landau theory [Δ ∝ ( C − ) 0.5 ] in a sufficiently wide temperature region ( C − < 150 K) below the Curie temperature ( C = 451.4 K).
In 2005, Shaldin et al. [12] studied the temperature behavior of spontaneous polarization for PGO in the temperature range from 4.2 to 300 K. Their experimental results and the literature data available at the time allowed the authors of reference [12] to detect the changes in the critical behavior of PGO from a dipole type ( = 0.5) to a pseudo-quadrupole type ( = 0.25) with increasing temperature from 290 to C = 450 K. This manifests itself as a change in the behavior of the parameter 1/ S as a function of ( C − ) at C − = 50 K. In 2006, Miga et al. [13] obtained the CEOP = 0.51 ± 0.03 and the Curie temperature C = 452.58 ± 0.03 K, using the experimental temperature dependence of the residual polarization R and its fitting by the formula R ∝ ( C − ) in the temperature region C − < 12 K. These parameters were obtained after a sample under study was aged in the electric field with the strength 10 6 V/m. Finally, in 2008, Kushnir et al. [14] performed the optical studies of fluctuations of the order parameter for Pb 5 (Ge 1− Si ) 3 which is equal to 0.44 for pure PGO. Moreover, the values for the PGSO and PBaGO solid solutions, which manifest diffuse ferroelectric phase transitions (DFEPT), deviate even more significantly from the classical value = 0.5. Considering the methods available for determining CEOPs of the PTs, we limit ourselves to the methods that analyze the temperature behavior of the optical rotation and use the dependences like equation (1.3) to calculate the CEOP . This is because in reality the other approaches determine only the temperature region where the spontaneous electric polarization or some secondary order parameter of the PT is proportional to ( C − ) 0.5 . As can be seen from equation (1.3), exact setting of the Curie temperature C is essential for such methods.
Within the framework of optical techniques, the Curie temperature is usually chosen as a point where the optical rotation disappears completely in the process of its temperature change. Hereinafter, this approach is referred to as a method I. On the other hand, in reference [14] the C parameter is defined as a point of minimum of the temperature dependence of the derivative d 2 /d . This approach is called a method II.
43703-2
When the ferroelectric PT is diffuse, the Curie temperature cannot be defined unambiguously. This can lead to significant errors in determining the CEOP even though one excludes a temperature region near the PT from the calculations described by equation (1.3).
We illustrate the ambiguity in the choice of the Curie temperature based on the example of DFEPT in PGO doped with 0.140 wt. % of Cu 2+ ions (PGO:Cu_140), which was studied in our recent work [15]. As seen from figure 1, the temperature dependences of the optical rotation in the PGO:Cu_140 crystals have a sufficiently long "tail", which indicates to the presence of DFEPT. Using the method I, C (I) = 460.8 K can be found (figure 2) and the CEOP (I) determined with equation (
Phenomenological approach
The technique used in this work is based on the following: (i) a generalized model of diffuse PTs, according to which a crystal under study can be divided into infinitely large number of homogeneous unit 43703-3 cells so that the PT in each of these cells is not diffuse and manifests a specific local Curie temperature [8]; (ii) Gaussian distribution of the local Curie temperatures in the homogeneous unit cells due to the central limit theorem, where the role of mathematical expectation is played by the so-called average Curie temperature Θ, which is taken as the PT point and characterizes the state when half of the sample undergoes the PT [16]; (iii) a general relation (1.2) for the order parameter of a proper second-order ferroelectric PT and its CEOP. Assume that diffusion of the PT is caused by some scalar inhomogeneity, e.g., scalar defects that do not change the symmetry of crystal matrix and affect only the Curie temperature distribution in the sample.
Although there are a number of works considering the local properties that affect the critical behaviour (see e.g., [17]) the approach consisting in dividing the crystals into non-interacting homogeneous cells is proper for the works mentioned above [8,16]. Of course, even Ising's model is a particular case of work [17], which can be considered as the dividing of the system into the subsystems, while these subsystems interact between each other.
Let the diffusion region of the phase transition Δ contain ∈ {2, 3, 4, . . . } local Curie temperatures C ( = 1, . . . , ). Then, we have It is obvious that the accuracy of this mathematical model increases with increasing parameter . The Gaussian distribution of the local Curie temperatures ( C ) in the homogeneous unit cells within the diffusion region Δ must satisfy the boundary and normalization conditions: Given formula (2.1) and conditions (2.2), one can rewrite the relation for ( C ) as
43703-4
Critical exponents of the order parameter of diffuse ferroelectric phase transitions are constants which are fixed for a given value. Taking eguations (1.2) and (2.1) into account, one can find the temperature dependence of optical rotation for a given homogeneous cell with the local Curie temperature C : where the coefficient of proportionality is assumed to be the same for all homogeneous cells of the sample under study. Considering equations (2.3) and (2.6), we arrive at the final temperature dependence of the optical rotation , which is valid for the whole sample: It follows from equation (2.7) that the equality = 0 holds true at C . In other words, the parameter C is the temperature at which the optical rotation induced by spontaneous polarization vanishes completely in the process of heating of the sample. The parameter C coincides with the Curie temperature found by the method I [i.e., C ≡ C (I)]. Therefore, C can be found directly from the experimental dependence = ( ).
Summing up, our technique for determining the CEOP at the diffuse PT implies fitting the temperature dependence of the optical rotation to relation (2.7) and finding the constants C , , and . This enables one to determine the parameters , Δ and , which provide the best agreement of the fitting curve with the experimental dependence. The appropriate goodness of fit is characterized by the determination coefficient 2 .
Results and discussion
Let us consider temperature dependences of the optical rotation obtained in works [18,19] [18,19]. Open points refer to experimental results and dashed curves refer to fitting by equation (2.7).
As seen from figure 4 and figure 5, the fitting curves agree well with the temperature dependences of optical rotation for the both PBaGO and PGSO solid solutions. The corresponding fitting parameters 43703-5 Note that the fitting results obtained for pure PGO indicate that the PT in this crystal is also diffuse (see figure 4 and table 1). This may be the main reason why the C and values obtained in different works for the pure PGO crystals are different. Now, let us analyze the temperature dependences of the optical rotation obtained in our previous works [15,20] for the PGO crystals doped with 0.005 wt. % of Li 3+ (PGO:Li_005), 0.020 wt. % of La 3+ (PGO:La_020), 0.021 wt. % of Eu 3+ (PGO:Eu_021), 0.152 wt. % of Li 3+ and Bi 3+ (PGO:LiBi_152) and 0.14 wt. % of Cu 2+ (PGO:Cu_140) (see figure 6). Using our technique for determining the CEOPs at the diffuse PTs, one can find the fitting parameters listed in table 3.
It is worth noting that the fitting parameters presented in tables 1, 2 and 3 are obtained for the case = 10 3 . Our studies have also testified that further increasing value does not lead to a significant increase in the accuracy of the model and to a better correspondence of experiment and theory. As seen from tables 1, 2 and 3, the inequality C (II) Θ C (I) is commonly valid for the Θ and C parameters. In its turn, the difference Δ C is comparable with the fitting parameter Δ . Therefore, the methods I and II combined together can be considered as a rapid test for estimating a degree of diffusion of the PT.
Given the above results, one can conclude that the method for determining the CEOPs at diffuse PTs, which is used in the present work, has notable advantages over the other methods employed for finding 43703-6 [15,20]. Open points refer to experimental results and dashed curves refer to fitting by equation (2.7). correct, provided that a "tail" of optical rotation is observed at the DFEPT. The definition of C as a point of minimum of the temperature dependence of the derivative d 2 /d (the method II) [14] is also not indisputable. Indeed, in case of < 0.5, given equation (1.2), the parameter d 2 /d ∝ −2 /( C − ) 1−2 at = C should tend to −∞ rather than to a finite value (see figure 2). The other point is that determination of the CEOP requires analyzing an additional dependence, log vs. log( C − ). Moreover, another temperature dependence, that of the derivative d 2 /d , must be plotted in order to determine C in case of the method II. More important, selection of that part of logarithmic dependence which should be fitted by a linear function is rather subjective. On the contrary, the approach to the calculation of CEOP presented in this work relies only upon the fitting procedure for the dependence = ( ) and the C data which can be found using simple and standard objective techniques for interpolation of the experimental temperature dependence of optical activity. Finally, our approach enables one to determine the diffusion region Δ for the PT, in contrast to the other methods.
Conclusions
In the present work, we have described the method suggested for accurate determination of the CEOPs in the PGO-based solid solutions that manifest the DFEPTs. This method consists in dividing a crystal sample under study into a large number of homogeneous unit cells, each of which has a non-diffuse PT with exactly defined local Curie temperature. Then, we fit the temperature dependences of the optical rotation, which are proportional to the spontaneous polarization (i.e., the order parameter), using a straightforward phenomenological relation. As a result, we are able to find the CEOP itself, the region where the PT is diffuse, and the average Curie temperature.
Using this method, we have determined the CEOPs for the pure Pb 5 Ge 3 O 11 crystals, the solid solutions Pb 5 (Ge 1− Si ) 3 | 3,616.2 | 2023-01-03T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Small-angle scattering and quasiclassical approximation beyond leading order
In the present paper we examine the accuracy of the quasiclassical approach on the example of small-angle electron elastic scattering. Using the quasiclassical approach, we derive the differential cross section and the Sherman function for arbitrary localized potential at high energy. These results are exact in the atomic charge number and correspond to the leading and the next-to-leading high-energy small-angle asymptotics for the scattering amplitude. Using the small-angle expansion of the exact amplitude of electron elastic scattering in the Coulomb field, we derive the cross section and the Sherman function with a relative accuracy $\theta^2$ and $\theta^1$, respectively ($\theta$ is the scattering angle). We show that the correction of relative order $\theta^2$ to the cross section, as well as that of relative order $\theta^1$ to the Sherman function, originates not only from the contribution of large angular momenta $l\gg 1$, but also from that of $l\sim 1$. This means that, in general, it is not possible to go beyond the accuracy of the next-to-leading quasiclassical approximation without taking into account the non-quasiclassical terms.
I. INTRODUCTION
In the high-energy QED processes in the atomic field, the characteristic angles θ between the momenta of final and initial particles are small. Therefore, the main contribution to the amplitudes of the processes is given by the large angular momenta l ∼ ερ ∼ ε/∆ ∼ 1/θ, where ε, ρ, and ∆ are the characteristic energy, impact parameter, and momentum transfer, respectively ( = c = 1). The quasiclassical approach provides a systematic method to account for the contribution of large angular momenta. It was successfully used for the description of numerous processes such as charged particle bremsstrahlung, pair photoproduction, Delbrück scattering, photon splitting, and others [1][2][3][4][5][6][7][8]. The accurate description of such QED processes is important for the data analysis in modern detectors of elementary particles. The quasiclassical approach allows one to obtain the results for the amplitudes not only in the leading quasiclassical approximation but also with the first quasiclassical correction taken into account [9][10][11][12][13][14].
A natural question arises: how far can we advance in increasing accuracy within the quasiclassical framework? In this paper we examine this question by considering the process of high-energy small-angle scattering of polarized electrons in the atomic field. The general form of this cross section reads (see, e.g., Ref. [15]) where dσ 0 /dΩ is the differential cross section of unpolarized scattering, p and q are the initial and final electron momenta, respectively, ζ 1 is the polarization vector of the initial electron, ζ 2 is the detected polarization vector of the final electron, S is the so-called Sherman function, and T ij is some tensor. In Section II we use the quasiclassical approach to derive the small-angle expansion of the cross section of electron elastic scattering in arbitrary localized potential. As for the unpolarized cross section dσ 0 /dΩ, its leading and subleading terms with respect to the scattering angle θ are known for a long time [16]. They can both be calculated within the quasiclassical framework. We show that the Sherman function S in the leading quasiclassical approximation is proportional to θ 2 . We compare this result with that obtained by means of the expansion with respect to the parameter Zα, [17][18][19][20][21] (Z is the nuclear charge number, α ≈ 1/137 is the fine structure constant). The leading in Zα contribution to the Sherman function is due to the interference between the first and second Born terms in the scattering amplitude. In contrast to the quasiclassical result (proportional to θ 2 ), it scales as θ 3 at small θ. There is no contradiction between these two results because the expansion of our quasiclassical result with respect to Zα starts with (Zα) 2 . Therefore, depending on the ratio Zα/θ, the dominant contribution to the Sherman function is given either by the leading quasiclassical approximation or by the interference of the first two terms of the Born expansion. One could imagine that the terms O(θ 3 ) in the function S can be ascribed to the next-to-leading quasiclassical correction and, therefore, they come from the contribution of large angular momenta. However, by considering the case of a pure Coulomb field, we show in Section III that the account for the angular momenta l ∼ 1 is indispensable for these terms. Thus, we are driven to the conclusion that, in general, it is not possible to go beyond the accuracy of the next-to-leading quasiclassical approximation without taking into account the non-quasiclassical terms.
II. SCATTERING OF POLARIZED ELECTRONS IN THE QUASICLASSICAL APPROXIMATION
It is shown in Ref. [22] that the wave function ψ p (r) in the arbitrary localized potential V (r) can be written as ψ p (r) = [g 0 (r, p) − α · g 1 (r, p) − Σ · g 2 (r, p)]u p , where φ is a spinor, α = γ 0 γ, Σ = γ 0 γ 5 γ, m is the electron mass, and σ are the Pauli matrices. In this section we assume that m/ε ≪ 1. In the leading quasiclassical approximation, the explicit forms of the functions g 0 and g 1 , as well as the first quasiclassical correction to g 0 , are obtained in Ref. [9]. The first quasiclassical correction to g 1 and the leading contribution to g 2 are derived in Ref. [14]. The asymptotic form of the function ψ p (r) at large distances r reads The functions G 0 , G 1 , and G 2 can be easily obtained from the expressions for g 0 , g 1 , and g 2 in Ref. [14]: where Here ∆ = q − p, q = pr/r, ρ is a two-dimensional vector perpendicular to the initial momentum p, and the notation X ⊥ = X − (X · n p )n p is used for any vector X, n p = p/p.
For small scattering angle θ ≪ 1, we have δf 0 ∼ δf 1 ∼ θf 0 . Taking this relation into account, we obtain the following expressions for dσ 0 dΩ , T ij , and S in Eq In Eqs. (6) and (7) we keep only the leading and the next-to-leading terms with respect to θ in dσ 0 /dΩ and T ij , and the leading term in the function S. The form of T ij is a simple consequence of helicity conservation in ultrarelativistic scattering. The expression for dσ 0 /dΩ coincides with that obtained in the eikonal approximation [16]. Note that f 0 → −f * 0 , δf 0 → δf * 0 , and δf 1 → δf * 1 at the replacement V → −V as it simply follows from Eq. (5). Therefore, the quasiclassical result for the Sherman function S, Eq. (7), is invariant with respect to the replacement V → −V . In contrast, the term 2 Re(δf 0 /f 0 ) in dσ 0 /dΩ in Eq. (6) results in the charge asymmetry in scattering, i.e., in the difference between the scattering cross sections of electron and positron, see, e.g., Ref. [15]. Similarly, the account for the first quasiclassical correction leads to the charge asymmetry in lepton pair photoroduction and bremsstrahlung in an atomic field [13,14,22].
Let us specialize Eqs. (6) and (7) to the case of a Coulomb field. Substituting V (r) = −Zα/r in Eq. (5), we have where η = Zα and Γ(x) is the Euler Γ function. Then, from Eqs. (6) and (7) we obtain The remarkable observation concerning the obtained Sherman function (10) is that it scales as θ 2 while the celebrated Mott result [17] for the leading in η contribution to S scales as There is no contradiction because the expansion of (10) in η starts with η 2 , while the Mott result is proportional to η. Thus, the Mott result is not applicable if θ η. In the next section we obtain the result (10), along with smaller corrections with respect to θ, by expanding the exact Coulomb scattering amplitude represented as a sum of partial waves.
We show that the Mott result is recovered in the order θ 3 , as it should be.
Let us now qualitatively discuss the influence of the finite nuclear size on the cross section dσ 0 /dΩ and the Sherman function S. We use the model potential where R is the characteristic nuclear size. For this potential we take all integrals in Eq. (4) and obtain where K ν (x) is the modified Bessel function of the second kind. The quantity A in Eq. (13) is nothing but the charge asymmetry, As it should be, in the limit b → 0 the results (12) and (14) coincide with Eqs. (9) and (10), respectively. In Fig. 1 and Fig. 2 we plot the asymmetry A and the Sherman function S as the functions of b for a few values of η. It is seen that both functions strongly depend on b and η. It is interesting that they both change their signs at b ∼ 1. Presumably, the latter feature takes place also for the commonly used parametrizations of the nuclear potential.
III. SMALL-ANGLE EXPANSION OF THE COULOMB SCATTERING AMPLI-TUDE
In this section we investigate the nontrivial interplay between the contributions of large angular momenta l (quasiclassical contribution) and l ∼ 1 to the cross section and Sherman function for electron elastic scattering in the Coulomb field. Note that, for small angle θ, the main contribution to the scattering amplitude is given by l ≫ 1 not only in the ultrarelativistic limit, but for arbitrary β = p/ε as well. Therefore, we treat the parameters η = Zα and ν = Zα/β as independent ones. We perform small-angle expansion of the amplitude, but do not assume that η ≪ 1, in contrast to the consideration in Ref. [21].
The elastic scattering amplitude reads (see, e.g., Refs. [15,23]): where φ i and φ f are the spinors of the initial and final electron, respectively. The functions F (θ) and G (θ) have the form Here P l = P l (cos θ) is the Legendre polynomial, γ l = l 2 − η 2 .
The unpolarized cross section dσ/dΩ and Sherman function S (θ) are readily expressed in terms of F (θ) and G (θ): We want to find the expansion of dσ 0 /dΩ and S with respect to θ. The main contribution to the sum in Eq. (16) comes from the region of large l. Let us write the function F as The quantity T l is the expansion of Γ(γ l −iν)/Γ(l−iν) Γ(γ l +iν+1)/Γ(l+iν+1) e iπ(l−γ l ) over 1/l up to O (1/l 2 ). The sum in the definition of F a can be taken analytically at θ ≪ 1. In order to do this we use the integral representation and take the sum over l using the generating function for the Legendre polynomials. We where s = sin θ 2 and ̺ = y 2 + 4s 2 (1 + y). As it follows from Eq. (19), the convenient variable for the small-angle expansion is s ≪ 1. There are two regions, which contribute to the integral over y: The first region provides contributions ∝ s n+2iν (n = 0, 1, . . .), while the second region provides contributions ∝ s n (n = 2, 3 . . .). Calculating the integral with the method of expansion by regions, see, e.g., Ref. [24], we arrive at Here t 0 and t 1 correspond, respectively, to the leading quasiclassical approximation and first quasiclassical correction (|t 0 | = 1, |t 1 | ∼ θ 1 ). The relative magnitude of t 2 is θ 2 and it is tempting to interpret t 2 as a second quasiclassical correction. However, this is not true because the magnitude of t 2 is the same as that of the individual terms at l ∼ 1 in the sum in Eq. (16). It is easy to check that the contribution to t 2 proportional to s 2+2iν remains intact even if the sum over l starts from some l 0 ≫ 1, provided that l 0 ≪ 1/s. Therefore, this contribution is natural to identify with the second quasiclassical correction.
Let us now consider the function F b in Eq. (18). The sum over l converges at l ∼ 1, and we can approximate P l (θ) − P l−1 (θ) by −2ls 2 . Since F b in the leading order is proportional to s 2 , it is natural to sum up F b and the term in F a (θ), Eq. (20), proportional to s 2 . Finally we have T l is defined in Eq. (18), and h(ν) is given in Eq. (8). The small-angle expansion of the function F was investigated in Ref. [21] at small η and arbitrary ν. Expanding in η up to η 4 under the sum sign in Eq. (22) and taking the sum over l, we find the agreement with Ref. [21] up to a misprint in Eq. (3.27) of that paper (in the right-hand side of Eq. (3.27) one should make the replacement j → j + 1). The function C(η, ν) strongly depends on the parameters η and ν. This statement is illustrated by Fig. 3 where the real and imaginary parts of C(η, ν) at ν = η (β = 1) are shown as functions of η. Substituting Eq. (21) in Eq. (17), we find It is quite remarkable that the second correction to the cross section entirely comes from interference between the quasiclassical and nonquasiclassical terms. Therefore, this correction can not be calculated within the quasiclassical approach.
We are now in position to discuss the nontrivial interplay between the small-angle approximation and the small-ν approximation. Keeping only the leading in ν terms in the coefficients of the expansion in s, we have S (θ) = 2ηms 2 ε [πη(2 ln 2 − 1) + βs ln s] .
The cross section (25) agrees with the small-angle expansion of the corresponding result in Refs. [18,19]. The function S, Eq. (26), agrees with the small-angle expansion of the Sherman function in Ref. [20]. The term proportional to s ln s in (26) corresponds to the celebrated Mott result [17].
We see that the relative magnitude of the first and the second corrections with respect to s to the differential cross section is proportional to the ratio ν/θ of two small parameters, and this ratio can be smaller or larger than unity. The same phenomenon takes place also in the Sherman function: the ratio of the leading quasiclassical term and the correction is proportional to ν/(θ ln θ).
IV. CONCLUSION
In the present paper we have examined the accuracy of the quasiclassical approach when applied to the calculation of the small-angle electron elastic scattering cross section, including the polarization effects. Using the quasiclassical wave function, we have derived the differential cross section with the account of the first correction in θ, Eq. (6), and the Sherman function in the leading order in θ, Eq. (7). The results (6) and (7) 23) and (24) are valid even for β ≪ 1. We have shown that the correction of relative order θ 2 to the cross section, as well as that of relative order θ to the Sherman function, originate not only from the contribution of large angular momenta l ≫ 1, but also from that of l ∼ 1. Thus, we are driven to the conclusion that, in general, it is not possible to go beyond the accuracy of the next-to-leading quasiclassical approximation without taking into account the non-quasiclassical terms. | 3,679.2 | 2015-07-15T00:00:00.000 | [
"Physics"
] |
Hyper-radiosensitivity affects low-dose acute myeloid leukemia incidence in a mathematical model
In vitro experiments show that the cells possibly responsible for radiation-induced acute myeloid leukemia (rAML) exhibit low-dose hyper-radiosensitivity (HRS). In these cells, HRS is responsible for excess cell killing at low doses. Besides the endpoint of cell killing, HRS has also been shown to stimulate the low-dose formation of chromosomal aberrations such as deletions. Although HRS has been investigated extensively, little is known about the possible effect of HRS on low-dose cancer risk. In CBA mice, rAML can largely be explained in terms of a radiation-induced Sfpi1 deletion and a point mutation in the remaining Sfpi1 gene copy. The aim of this paper is to present and quantify possible mechanisms through which HRS may influence low-dose rAML incidence in CBA mice. To accomplish this, a mechanistic rAML CBA mouse model was developed to study HRS-dependent AML onset after low-dose photon irradiation. The rAML incidence was computed under the assumptions that target cells: (1) do not exhibit HRS; (2) HRS only stimulates cell killing; or (3) HRS stimulates cell killing and the formation of the Sfpi1 deletion. In absence of HRS (control), the rAML dose-response curve can be approximated with a linear-quadratic function of the absorbed dose. Compared to the control, the assumption that HRS stimulates cell killing lowered the rAML incidence, whereas increased incidence was observed at low doses if HRS additionally stimulates the induction of the Sfpi1 deletion. In conclusion, cellular HRS affects the number of surviving pre-leukemic cells with an Sfpi1 deletion which, depending on the HRS assumption, directly translates to a lower/higher probability of developing rAML. Low-dose HRS may affect cancer risk in general by altering the probability that certain mutations occur/persist.
Introduction
One of the early observations among atomic bomb survivors in Hiroshima and Nagasaki was an increased risk of developing leukemia (Folley et al. 1952). Since then, many epidemiological analyses have been presented on the incidence of various forms of leukemia in the life span study cohort of Japanese atomic bomb survivors to investigate, among others, the shape of the dose-response curve (Preston et al. 1994;Richardson et al. 2009;Hsu et al. 2013). In these analyses, excess risk models with a linear, linear-quadratic or a purely quadratic dependency in radiation dose are typically fitted to cohort data to examine the possible form of the dose-response curve that best describes the data. Another approach is to translate the (limited) radiobiological understanding of a disease 1 3 into a mechanistic mathematical model to study the doseresponse curve (Preston 2017;Shuryak 2019;Kaiser et al. 2021). Stouten et al. (2021) presented a mathematical model to quantify the dose-response curve of the major radiation-induced acute myeloid leukemia (rAML) pathway in photon-irradiated male CBA/H mice. These mice have been used extensively to study rAML due to very low background incidence, reproducible maximum rAML induction of about 20% following 3 Gy of whole-body exposure, and histopathological features similar to human AML (Major and Mole 1978;Verbiest et al. 2015). The major murine rAML disease pathway can be explained in terms of two mutations affecting gene Sfpi1 coding for hematopoietic transcription factor PU.1 (Finnon et al. 2012;Verbiest et al. 2015;O'Brien et al. 2020). A radiation-induced deletion with Sfpi1 copy loss is the first hit responsible for the formation of pre-leukemic cells (Bouffler et al. 1997;Silver et al. 1999), and is identified in about 82% of the rAML cases. In approximately 78% of these rAML cases, the cells with an Sfpi1 deletion additionally acquired a specific point mutation in the remaining Sfpi1 allele (O'Brien et al. 2020). These two mutations are considered to be responsible for the formation of leukemic cells and the resulting rAML onset (Cook et al. 2004;Suraweera et al. 2005;Verbiest et al. 2018).
Although the target cells responsible for rAML development remain unknown, hematopoietic stem and progenitor cells (HSPCs) are generally thought to be involved in leukemogenesis (Passegué et al. 2003;Hope et al. 2004;Taussig et al. 2005;Hirouchi et al. 2011;Shlush et al. 2014;Gault et al. 2019). Recent in vitro clonogenic survival experiments revealed that murine HSPCs such as long-term hematopoietic stem cells (LT-HSCs) exhibit cellular hyper-radiosensitivity (HRS) (Rodrigues-Moreira et al. 2017). Low-dose HRS is responsible for severely lowering the surviving cell fraction after very low dose exposure compared to what one would expect based on a linear-quadratic cell survival model. Further increasing the dose activates an increased radioresistance mechanism, which causes the surviving cell fraction to increase again. At higher doses, the surviving cell fraction converges back onto the traditional linear-quadratic model (Marples and Collis 2008). In the present paper, the term HRS includes the response in the entire dose region where clonogenic cell survival is lower than expected based on a linear-quadratic model, i.e., it includes the increased radioresistance phenomenon. Rodrigues-Moreira et al. (2017) observed that the potential target cells of rAML display HRS for acute doses below 0.1 Gy, with a maximum effect around 0.06 Gy. A transient low-dose radiation-induced increase of reactive oxygen species has been shown to be responsible for inducing HRS in these hematopoietic cells (Rodrigues-Moreira et al. 2017). Cellular HRS has been extensively investigated for the endpoint of cell survival (Lambin et al. 1993;Short et al. 1999;Joiner et al. 2001;Marples and Collis 2008;Olobatuyi et al. 2018); however, it remains extraordinarily difficult to relate the possible effects of HRS to the endpoint of cancer risk. Besides the endpoint of cell survival, HRS has also been shown to stimulate the induction of radiation-induced chromosomal aberrations and deletions at very low doses (Seth et al. 2014;Troshina et al. 2020). The rAML incidence may be affected if HRS stimulates the formation of the Sfpi1 deletion at very low doses.
Exposure to doses typically absorbed during diagnostic procedures such as PET/CT scans may affect risk estimates if target cells exhibit HRS. Because of a small effect size at lower doses, it is not realistic/practical to conduct mouse experiments to infer a dose-response curve dependent on the possible HRS status of rAML target cells. In the present study, a contribution is made to expand the (scarcely) available literature about the possible effects of HRS on the low-dose rAML incidence. The induced-repair model introduced by Marples and Joiner (1993) was used to investigate the effect of HRS target cell status on the low-dose rAML incidence. In the present paper, three scenarios are considered to study how cellular HRS may influence the incidence of low-dose rAML. The assumptions were made that HRS does not affect cell survival (HRS − ), HRS only influences cell survival (HRS + 1 ), or HRS stimulates cell killing (i.e., it influences cell survival) as well as the induction of the Sfpi1 deletion (HRS + 2 ). Based on the presented model, experiments are proposed to possibly detect whether HRS affects low-dose rAML incidence. Furthermore, the computationally intensive stochastic rAML model from Stouten et al. (2021) was redesigned such that the dose-response curve can be calculated almost instantaneously.
Background of the model
The redeveloped rAML model presented here including an HRS extension is based on previous modeling work (Dekkers et al. 2011;Stouten et al. 2021) in which, similar to the two-stage models for cancer risk assessment, malignant cells are formed due to the occurrence of two mutations (Moolgavkar et al. 1988;Dewanji et al. 1989;Leenhouts and Chadwick 1994). Figure 1 shows an overview of the model utilized to quantify the rAML incidence. Briefly, the mathematical CBA/H mouse rAML model assumes that normal HSCs (N) are transformed into pre-leukemic intermediate cells (I) due to a radiation-induced interstitial deletion on chromosome 2 with Sfpi1 copy loss. Cells N and I can both die due to radiation exposure. Cells I proliferate and they transform into malignant cells when the codon R235 point mutation occurs in the remaining Sfpi1 allele. The formation of the first malignant cell leads to rAML onset over the course of t lag months, provided that the mouse does not die during that time (Dekkers et al. 2011;Stouten et al. 2021). The expressions along the arrows correspond with the rates (except for latency t lag ) used in the differential equation model to describe the response of bone marrow cells (N, I and M) to ionizing radiation exposure.
The mathematical male CBA/H mouse model developed by Stouten et al. (2021) enables one to determine the distribution of potential rAML diagnosis times ( f A (t) ). Here, an essential observation is that, if mice did not die from other causes, every mouse would eventually develop rAML. This allows one to define, for each mouse, two independent time points: t A , the potential time at which rAML occurs in the absence of other causes of death, and t A , the potential time at which a mouse dies in the absence of rAML. The potential rAML diagnosis time ( t A ) is obtained by adding the diagnosis time latency ( t lag ) to the time at which the first malignant cell is formed ( t M=1 ). Thus, the rAML diagnosis can only take place if a mouse survives sufficiently long to develop rAML, i.e., t M=1 + t lag = t A ≤ t A . Similar to Stouten et al. (2021), the addition of a diagnosis latency of t lag = 5.06 months was based on the observation that mice, which were deleted of exon 5 of the Sfpi1 gene (PU.1 −/− ), developed AML with a median latency of 22 weeks (Metcalf et al. 2006). This latency estimate is almost identical to a model-based estimation made by Dekkers et al. (2011).
In the present paper, the computationally intensive stochastic rAML model developed by Stouten et al. (2021) is replaced by a more efficient model. Instead of running timeconsuming simulations to check whether rAML could have been diagnosed per mouse ( t A ≤ t A ), a differential equation model is used to directly determine the probability of rAML development. The probability distribution of the potential time at which the first malignant cell is formed, f M=1 (t) , can be derived from the differential equation model. The potential rAML diagnosis distribution time, f A (t) , in absence of any other causes of death can be found from f M=1 (t) with the aforementioned time lag t lag . Furthermore, the dose-dependent probability distribution of the potential time to non-rAML causes of death, f A (t) , is known (Stouten et al. 2021). By assuming that t A and t A are independent, one can utilize the distributions f A (t) and f A (t) to find the distribution of the actual rAML diagnosis time: where F A (t) is the (corrected) cumulative distribution function of f A (t) which will be defined later. At time t, 1 −F A (t) represents the probability that a mouse has not yet died from non-rAML causes. The probability of developing rAML is found by calculating the area under the curve of f d (t) , i.e.,
Differential equation model of bone marrow leukemogenesis
By translating the two-mutation model of rAML (Dekkers et al. 2011;Verbiest et al. 2015;Stouten et al. 2021) into Fig. 1 Overview of the two-mutation rAML model. Normal murine bone marrow cells (N) are assumed to transform into pre-leukemic cells I due to a radiation-induced deletion with Sfpi1 copy loss. Intermediate cells I proliferate and can transform into malignant cells M due to the occurrence of a point mutation in the remaining Sfpi1 allele. Both cells N and I can additionally undergo radiation-induced cell death (N, I → ∅ ). Once a mouse acquires a single malignant cell, the time required for rAML onset and diagnosis is t lag months, and only occurs if the mouse survives sufficiently long. With the excep-tion of latency t lag , the rates along the arrows correspond to the transition rates included in the differential equations for N, . The effect of hyper-radiosensitivity (HRS) on leukemogenesis was studied with the assumptions that HRS only affects the per cell death rate ( L ) of cells N and I or that HRS stimulates cell killing as well as the formation of the Sfpi1 deletion (N → I). The HRS assumptions were incorporated into the model by replacing the rate L with the HRS-dependent rate L HRS 1 3 differential equations, the potential rAML diagnosis time distribution ( f A (t) ) can be obtained and used to find the actual rAML diagnosis time distribution ( f d (t)).
The lethal event/lesion formation rate L (t) was derived from a dose-dependent linear-quadratic model ( L(D) = D + D 2 ), and can be used to model the clonogenic cell survival fraction S following exposure to a dose D (Gy) through: S(D) = exp(−L(D)) (Chadwick and Leenhouts 1973;Kellerer and Rossi 1974). The rate L (t) can be utilized to describe the radiation-induced loss of clonogenic potential in differential equations and is obtained from L(D) after substitution of the dose absorption function D(t) =Ḋt : L(t) =Ḋt +Ḋ 2 t 2 , where Ḋ is the constant dose rate (Gy/ month) with which mice are irradiated. Taking the time derivative of L(t) yields the rate L (t) (Zaider and Minerbo 2000;Gong et al. 2013;Olobatuyi et al. 2018): Note that, irradiation starts at time t = 0 , the dose of interest D accumulates at exposure time T = D∕Ḋ and L(t = T) = L(D).
The following ordinary differential equations describe the dynamics of the number of normal (N), intermediate (I) and malignant (M) bone marrow cells in absence of HRS (note: Fig. 1 can be used as a reference for how each model rate corresponds to a certain process): where the parameters del (dimensionless), b (month −1 ) and p (month −1 ) correspond to the formation of the Sfpi1 deletion, the proliferation rate and the Sfpi1 point mutation rate respectively. The assumption was made that the number of bone marrow cells with a deleted Sfpi1 copy can be described through a linear-quadratic model L(D) (Stouten et al. 2021). This assumption is based on the observation that the number of lethal events and chromosome aberrations are linearly correlated (McMahon 2018), and the increase in the number of interstitial deletions is a linear-quadratic function of the radiation dose (Cornforth et al. 2002). Note that no distinction was made between the radiation-induced cell killing rate ( L ) of cells N and I, because cells I only differ from cells N in the occurrence of the radiation-induced Sfpi1 deletion. Hence, the assumption was made that, during the very brief exposure time, the occurrence of the Sfpi1 deletion does not alter the radiosensitivity of cells I. It was additionally assumed that normal cells N could not transition into I due to a naturally occurring interstitial Sfpi1 deletion, and proliferation of N was excluded because acute exposure was considered here ( T ≈ 0 ). It should further be noted that the model developed by Stouten et al. (2021) contains an additional intermediate cell compartment in which Sfpi1deleted cells do not have a growth advantage in the early stages after irradiation (Olme et al. 2013a). This compartment is ignored in the presented model because it requires an extra parameter and it makes the model solution more complex, while yielding similar results.
Assuming that malignant cell formation/arrival from intermediate cells follows a Poisson process with rate function Ṁ (t) , then the time required to produce the first malignant cell after irradiation has the following probability distribution (Hurtado and Kirosingh 2019): The potential rAML diagnosis time distribution in the absence of other causes of death is obtained by shifting the curve f M=1 (t) with t lag months: The equations for Ṅ , ̇I and Ṁ (Eqs. 3-5) need to be solved to use the above density function to quantify the rAML incidence. Stouten et al. (2021) derived a dose-dependent expression for the number of intermediate cells present at time T ≈ 0 following brief high-dose-rate exposure ( I 0 (D) ), by assuming that no cells I proliferate or transform into M during exposure. By reproducing this approach with initial conditions N(0) = N 0 ≈15,670 (Staber et al. 2013;Stouten et al. 2021) and I(0) = 0 , the following initial condition can be found: The equations ̇I (t) = (b − p )I(t) and Ṁ (t) = p I(t) after radiation exposure can now easily be solved with the initial conditions I(0) = I 0 (D) and M(0) = 0: The above expressions are required to use f A (t) (Eq. 7).
Incorporation of hyper-radiosensitivity
Hyper-radiosensitivity was included in the model due to the observation that the possible target cells of rAML exhibit HRS (Rodrigues-Moreira et al. 2017). Besides the cell survival endpoint, HRS has also been observed for the endpoint of chromosomal aberrations in gamma-irradiated human peripheral G 2 blood lymphocytes (Seth et al. 2014). Furthermore, the induced-repair model from Marples and Joiner (1993) can be utilized to describe HRS for the endpoints of cell survival, chromosomal aberrations and deletions in B14-150 Chinese hamster cells irradiated with carbon ions (Troshina et al. 2020). Based on these experimental findings, the following three scenarios were considered to study the possible effect of HRS on the rAML incidence. First, the HRS − assumption presumes that bone marrow cells do not exhibit HRS (control scenario). The second scenario assumes that bone marrow cells exhibit HRS and this only affects cell survival (HRS + 1 ). The third scenario assumes that HRS affects cell survival and stimulates the formation of the Sfpi1 deletion (HRS + 2 ). The lethal event rate was modified in accordance with the induced-repair model from Marples and Joiner (1993) to describe low-dose HRS through the parameter of the linear-quadratic model: where r represents the traditional linear-quadratic parameter applied to the conventional high-dose response, s is the slope at a very low radiation dose, and D c reflects the dose at which the induction of increased radioresistance is 63% complete. The induced-repair cell survival model is given by S HRS (D) = e − (D)D− D 2 .
The rate function, L HRS (t) = (D)Ḋ + 2Ḋ 2 t , was used to describe clonogenic cell death for the HRS + assumptions during exposure. Rate L HRS (t) was substituted for L (t) in the differential equations for N (Eq. 3) and I (Eq. 4) to model the possible effect of HRS on the development of rAML. For the HRS + 1 scenario, only the cell death rate during irradiation was modified (i.e., Ṅ = −L HRS N − delL N;̇I = −L HRS I + delL N ), whereas the death rate and the Sfpi1-induction rate were both changed for the HRS + 2 scenario (i.e., Again, by assuming that no cells I proliferate or transform into M during the brief exposure time, one can easily solve the differential equations for the HRS + assumptions. This yields the initial condition for the number of cells with an Sfpi1 deletion present after irradiation for the HRS + assumptions: Substitution of the above initial conditions for I 0 (D) in Eqs. (9) and (10) allows one to quantify the HRS-dependent rAML incidence. The s ∕ r -ratio is unknown for the possible HRS-mediated low-dose induction of Sfpi1 loss. The dose-response curve of different types of chromosomal aberrations observed in gamma-irradiated human G 2 blood lymphocytes display s ∕ r ratios of 2.5 and 3.5 (Seth et al. 2014). For simplicity, the relationship s = 3 r was assumed. This ratio is utilized in the L HRS (D) function shown in Eq. (13) and only affects the induction of the Sfpi1 deletion.
Deaths from non-rAML causes
The dose-dependent survival time distribution ( f A (t) ) of male CBA/H mice (Major and Mole 1978) in the absence of rAML was approximated with a skew normal distribution with location, scale and shape parameters of = 25.86−0.57D months, = 5.87 months and = − 1.01 respectively (Stouten et al. 2021). The distribution parameters were fixed in accordance with the observation that the mean male CBA/H mouse survival time decreases from 22.5 to 19.1 months, when the dose is increased from 0 to 6 Gy, with a survival time standard deviation of 4.83 months and a skewness of approximately − 0.141 (Major 1979). To exclude negative distribution times from the skew normal distribution, the cumulative distribution function F A (t) corresponding to the density f A (t) was corrected: Thus, the corrected cumulative distribution function, F A (t) , was defined such that it has the properties F (1) to find the actual rAML diagnosis time distribution.
Model implementation, data and fitting procedure
The model was implemented in R version 4.0.3 (R Core Team 2018) and a nonlinear least-squares fitting procedure (R package minpack.lm) was performed to minimize the differences between the model and the data. Values for the parameters b and p were determined by fitting the model to dose-dependent (0.75, 1.5, 3.0, 4.5 and 6.0 Gy) rAML incidence percentages (Major 1979;, and time-dependent cumulative rAML incidence percentages observed after 4.5 Gy of exposure ). These data were obtained from experiments in which male CBA/H mice were irradiated with high-dose-rate X-rays.
Specifically, the following cost function that is dependent on parameter vector = (b, p ) was minimized: The first term of the cost function takes 20 observed (y(i)) and modeled ( ŷ(i, ) ) dose-dependent incidence percentages into account ( i ∈ {1, … , 20} ). Each rAML incidence modeldata residual for point i was weighted by the fraction of mice used to acquire data point i: w 1 (i) = n mice,i ∕ ∑ j n mice,j . The second term of the cost function describes the differences between observed ( z(t i ) ) and modeled ( ẑ(t i , ) ) rAML incidence after acute 4.5 Gy of exposure as a function of time at time points t i ( i ∈ {1, … , 20} ). The time-dependent cumulative rAML incidence values were normalized relative to the final time point t 20 , z(t i )∕z(t 20 ) and ẑ(t i )∕ẑ(t 20 ) , and weighted by the mean of the rAML incidence data observed after acute 4.5 Gy of exposure ( w 2 ) (Major 1979;). This correction was applied such that both the model and the experimental data reached an identical maximum value at time t 20 . Stouten et al. (2021) showed that = r = 0.0402 Gy −1 , = 0.122 Gy −2 and del = 0.0499 (dimensionless) can be used to describe cell survival curves of murine HSCs and HSPCs (Mohrin et al. 2010) and approximate the relative in vitro/vivo formation of the Sfpi1 deletion in CBA/H mice following 3 Gy of X-ray exposure (Olme et al. 2013a). For the current paper, it was not possible to obtain significant parameter values when fitting all of the model parameters to the rAML incidence data at once. Lack of inclusion of cell survival data and Sfpi1 deletion data in the fitting procedure made it impossible to identify unique optimal parameter values. Hence the parameter values for , and del were taken from Stouten et al. (2021) because those values can be related to experimental data. The (fitted) model parameters to run the simulations are reported in Table 1
Low-dose HRS affects pre-leukemic cell formation
The presented mathematical model can be used to study the possible effects of HRS on the incidence of rAML in male CBA/H mice after acute high-dose-rate X-ray exposure. Simulations were carried out with the assumptions that (1) bone marrow cells do not exhibit HRS (HRS − ), (2) cells display HRS for the endpoint of cell survival only (HRS + 1 ), (3) or HRS affects cell survival and stimulates the formation of the Sfpi1 deletion (HRS + 2 ). Figure 2a shows the surviving fraction for the HRS − (solid) and HRS + (dashed) target cell assumptions. The surviving fraction curves were obtained with a linearquadratic model (HRS − assumption) and an induced-repair model (HRS + assumption, Eq. (11)). The model parameters (Table 1) Although the utilized induced-repair model parameters are unable to describe the available HRS data exactly, this does not have a large effect on the qualitative impact of HRS on the rAML incidence, which is the main focus of this study.
The radiation-induced deletion with Sfpi1 copy loss is responsible for the formation of pre-leukemic cells (Verbiest et al. 2015). Figure 2b illustrates the effect of the HRS − (solid curve), HRS + 1 (dashed curve) and HRS + 2 (dotted curve) Stouten et al. (2021), the maximum number of cells with an Sfpi1 deletion is formed after about 2.7 Gy of exposure. Further increasing the dose induces frequent cell death, hence explaining the observation that the number of cells with an Sfpi1 deletion approaches zero following exposure to higher doses.
The rAML incidence is calculated with the diagnosis time distribution
The model presented in this paper was used to find the rAML diagnosis time distribution f d (t) (dotted curve, Fig. 3a). The distribution f d (t) (Eq. 1) was calculated after 4.5 Gy of exposure by multiplying the potential rAML diagnosis time distribution f A (t) (dashed curve), with one minus the cumulative distribution function for non-rAML death times F A (t) (solid curve). The probability of developing rAML was calculated by integrating the rAML diagnosis time curve f d (t). Figure 3b shows the cumulative rAML incidence in time following exposure to doses of 0.75, 1.5, 3.0, 4.5 or 6.0 Gy (light gray to black). These curves were obtained by integrating the rAML diagnosis time distribution f d (t) as a function of time. The model presented here (solid curves) yields results similar to the previously published rAML model that is more complex and computationally intensive (dashed curves, Stouten et al. 2021). The initial rise in the cumulative rAML incidence proceeds faster with the new model compared to the previous version. The previous model contains an additional intermediate cell compartment such that cells with an Sfpi1 deletion do not have an initial growth advantage (Stouten et al. 2021), hence explaining the initial delay in rAML diagnoses observed with the previous model. Furthermore, as the previous model, the new model is also able to describe the total rAML incidence among male CBA/H mice (filled circles) and the time-dependent cumulative incidence data (stairs) (Major 1979;). Figure 4 shows the modeled effect of low-dose HRS on the rAML incidence. The incidence curves were calculated by running the model for the previously discussed three HRS assumptions (HRS − , HRS + 1 , HRS + 2 ). First, similar to Stouten 2021), the rAML incidence curve corresponding to the HRSassumption increases in a linear-quadratic manner with the absorbed radiation dose (solid black curve). Second, a comparison of the dose-response curves for the HRS − and HRS + 1 (dashed black curve) scenarios indicates that HRS may reduce the rAML incidence with a maximum effect around 0.06 Gy. Third, the incidence curve obtained with the HRS + 2 assumption (dotted black curve) has a relatively high slope at very low doses compared to the other HRS assumptions. The three modeled rAML incidence curves are identical at higher doses, regardless of the low-dose HRS assumption.
Low-dose HRS modifies the rAML dose-response curve
The high-dose model predictions are in line with available rAML incidence data of 0.75, 1.5, 3.0, 4.5 and 6.0 Gy X-ray irradiated male CBA/H mice (Major 1979;. Maximum rAML induction is observed with the model after about 2.5 Gy of exposure, and the rAML incidence decreases with higher doses due to the depletion of pre-leukemic cells with an Sfpi1 deletion and increased mouse deaths from non-rAML causes. Figure 4 additionally shows that the modeled HRS − rAML incidence percentages up to 0.2 Gy (solid black curve) can be accurately approximated with the linear-quadratic doseresponse curve, y(D) = c 1 D + c 2 D 2 (solid gray curve), with coefficients c 1 = 3.63 Gy −1 and c 2 = 10.1 Gy −2 , obtained by Stouten et al. (2021) from modeled rAML incidence percentages. Coefficient c 1 of the linear-quadratic doseresponse curve approximation can be modified to describe the HRS + 1,2 scenarios: where the constant z = 1 Gy −1 was introduced to make the product zD dimensionless. Note that the term for the HRS + 1 assumption differs from the induced-repair model due to the addition of a dose-dependency before Euler's number; furthermore, the first plus sign was changed into a minus sign because this assumption is responsible for decreasing the rAML incidence. The term for the HRS + 2 assumption is identical to the induced-repair model . Figure 4 illustrates that c 1,HRS + 1 (D) (dashed gray curve, c 1,r = c 1 Gy −1 , c 1,s = 71.9 Gy −1 , D c = 0.06 Gy −1 ) and c 1,HRS + 2 (D) (dotted gray curve, c 1,r = 3 Gy −1 , c 1,s = 10.8 Gy −1 , D c = 0.026 Gy −1 ) can be utilized to accurately approximate the modeled rAML incidence estimates corresponding to the two HRS assumptions (dashed and dotted black curves respectively). The time-dependent cumulative rAML incidence curves are shown following exposure to 0.75, 1.5, 3.0, 4.5 or 6.0 Gy (light gray to black) for the recently published model (dashed curves, Stouten et al. 2021) and the simplified model presented in this paper (solid curves). The cumulative incidence was determined by calculating the area under the diagnosis time curve f d as a function of time. The model was fitted to time-dependent cumulative incidence in CBA/H mice following 4.5 Gy of X-ray exposure (stairs) and the incidence data (mean ± standard error, n = 4,) shown at the end of the cumulative incidence curves (Major 1979; Multiple observations can be made from c 1,HRS + 1 (D) and c 1,HRS + 2 (D) . First, at very low doses, the slope parameter for the HRS − and HRS + 1 assumptions are identical ( c 1 = c 1,r = 3.63 Gy −1 ) and about three times smaller compared to the HRS + 2 assumption ( c 1,s = 10.8 Gy −1 ). A relatively large slope parameter was expected for the HRS + 2 scenario due to the assumption that low-dose HRS stimulates the formation of the Sfpi1 deletion. Second, as the dose increases, c 1,HRS + 1 (D) becomes smaller compared to the HRS − scenario ( c 1 ) due to the assumption that HRS only affects cell survival, which results in fewer Sfpi1 deletions and therefore lower rAML incidence. Third, if the dose becomes sufficiently large such that D∕D c ≈ 1 , terms c 1,HRS + 1 (D) and c 1,HRS + 2 (D) both approach the slope parameter c 1,r .
Discussion
The surviving fraction curve of the cells possibly responsible for rAML development displays excess cell killing (HRS) at lower doses (Rodrigues-Moreira et al. 2017). The aim of the present study was to explore the possible effect of HRS on the rAML incidence in male CBA/H mice. Lower rAML incidence occurred in in-silico mice carrying HRS + 1 target cells compared to HRS − cells over the same dose interval for which hyper-radiosensitive surviving fractions were modeled. This incidence reduction arises because the probability of acquiring the Sfpi1 deletion decreases due to a sharp HRS-associated increase in cell killing. A lower number of viable pre-leukemic cells carrying the Sfpi1 deletion translates to a lower probability of malignant cell formation and rAML onset during a mouse's lifespan, hence explaining the difference in rAML incidence depending on the HRS status.
Low-dose ionizing radiation exposure has been shown to increase the number of chromosomal aberrations and deletions (Seth et al. 2014;Troshina et al. 2020); therefore, a scenario was considered in which HRS stimulates cell killing and the formation of the Sfpi1 deletion. The rAML incidence at very low doses was increased for the HRS + 2 target cell assumption compared to the other assumptions, since a higher number of pre-leukemic cells directly leads to increased rAML incidence. A scenario in which HRS only affects the Sfpi1 deletion was not considered because there are experimental indications that HRS affects cell killing (Rodrigues-Moreira et al. 2017). If HRS would only stimulate the Sfpi1 deletion, then the dose-response curve would still be similar to the proposed expression ( c 1,HRS + 2 (D) , Eq. 17) at very low doses due to the stimulation of pre-leukemic cell formation. At higher doses, the effect of HRS disappears which causes the HRS + rAML dose-response curves to converge onto the HRS − curve.
Although low-dose HRS has been observed in vitro, 0.02 Gy irradiated C57BL/6-CD45.2 mice did not have significantly decreased LT-HSC counts compared to controls 4 months post-irradiation (Rodrigues-Moreira et al. 2017). This implies that low-dose HRS might not occur in vivo. However, a lack of significantly decreased cell counts may be attributed to: the presence of radiation-induced inactivated cells (i.e., cells that did not clonogenically survive irradiation, but are still present in the bone marrow), repopulation and/or a small low-dose-associated (0.02 Gy) effect size.
The induced-repair model from Marples and Joiner (1993) Hyper-radiosensitivity (HRS) modifies the rAML doseresponse curve. The rAML dose-response curve obtained with the HRSassumption is linear-quadratic at lower doses (solid black curve). In the presence of HRS + 1 target cells (HRS only affects cell survival), the low-dose incidence is reduced (dashed black curve) compared to the HRSassumption. The rAML incidence at very low doses is higher with the HRS + 2 target cell assumption (HRS stimulates cell killing and the formation of the Sfpi1 deletion, dotted black curve). The modeled high-dose incidence estimates are in accordance with the available male CBA/H mouse data (Major 1979;, standard errors are shown for n = 4). The modeled HRS − rAML dose-response curve (solid black curve) is almost identical to the linear-quadratic dose-response curve approximation y(D) = 3.63D + 10.1D 2 made by Stouten et al. (2021) (HRS − approx , solid gray curve). The linear coefficient of the linear-quadratic response curve y(D) could be modified with simple dose-dependent expressions to approximate the dose-response curves obtained with the HRS + 1 (dashed gray curve, Eq. 16) and HRS + 2 (dotted gray curve, Eq. 17) assumptions 1 3 model. For example, with the reported parameters of r = 0.63 Gy −1 and s = 9.84 Gy −1 (Rodrigues-Moreira et al. 2017), the surviving fraction was reduced to 0.79 instead of the observed mean of about 0.65 following 0.06 Gy of exposure. Furthermore, the smaller radiosensitivity parameter r = 0.04 Gy −1 estimated by Stouten et al. (2021) was used here to describe cell survival and rAML, instead of the relatively large value found by Rodrigues-Moreira et al. (2017). A global optimization technique (simulated annealing) was employed in the present study to assess whether a good rAML model fit could have been obtained with r = 0.63 Gy −1 . But this method failed to identify a realistic optimal parameter set capable of describing the rAML data. The presented rAML model requires a relatively small value of r to properly describe the upward curvature of the rAML incidence data up to about 2.5 Gy.
Although the values of r and estimated by Stouten et al. (2021) can also be used to describe clonogenic survival data of HSCs and HSPCs (Mohrin et al. 2010), this finding might be a coincidence.
For the HRS + 1 assumption, the quotient of the surviving fraction for the HRS − assumption divided by that of the HRS + assumption ( S∕S HRS ) was found to be identical to the quotient obtained for the number of radiation-induced pre-leukemic cells ( I 0 ∕I 0,HRS + 1 , results not shown). This was expected because the cells were assumed to die in accordance with the induced-repair model (Eq. 11), whereas the induction of the Sfpi1 deletion was described with the conventional linear-quadratic model. At lower doses, the rAML dose-response curve can be accurately described in terms of the number of radiation-induced pre-leukemic cells (Stouten et al. 2021), which implies that the Sfpi1 deletion is an important mutation that largely determines the shape of the dose-response curve (i.e., Fig. 2b explains Fig. 4). Therefore, the effect of the HRS + 1 assumption on the rAML incidence was found to be approximately identical to the effect of HRS on cell survival and the induction of pre-leukemic cells. Although this observation follows from a model assumption, this could occur in vivo if the presented model is representative of the actual two-mutation major rAML disease pathway. The mathematical mouse model was additionally redesigned such that the computation time is negligible, while yielding a similar linear-quadratic dose-response curve and time-dependent cumulative incidence curves compared to the more complicated and time-consuming rAML model developed by Stouten et al. (2021). Although most of the model assumptions and the linear-quadratic dose-response curve obtained with the rAML model were discussed in detail by Stouten et al. (2021), it is important to note that a different set of assumptions could yield a distinct doseresponse curve. For example, certain dose-response curves reflecting hormesis or a threshold (Brenner et al. 2003) were excluded beforehand due to a lack of data on processes that might affect the response curve for the major rAML disease pathway.
Epidemiological studies have found that the doseresponse curve for human AML risk can be described with a linear-quadratic model (Preston et al. 1994) and a preferred quadratic model (Richardson et al. 2009;Hsu et al. 2013). Similar to Stouten et al. (2021), the linear-quadratic doseresponse curve obtained here was found through a bottom-up approach and should therefore not be extrapolated to humans due to differences in the underlying AML disease pathway (Verbiest et al. 2015). Most rAML cases in male CBA/H mice can be explained through the major rAML pathway involving the interstitial deletion with Sfpi1 copy loss and the Sfpi1 point mutation. The remaining cases occur through minor pathways that may be independent of the Sfpi1 deletion and/or the Sfpi1 point mutation (O'Brien et al. 2020). The overall rAML dose-response curve is the sum of the (different) dose-response curves corresponding to the major and minor rAML disease pathways (Stouten et al. 2021). It is possible that HRS can influence the expression of disease pathways in different ways since HRS has been observed for distinct endpoints, e.g., cell survival and mutations Seth et al. 2014;Troshina et al. 2020).
It should be noted that HRS may affect cancer incidence in other ways. For example, Jacob et al. (2008) showed that, compared to the conventional linear-quadratic cell survival model, the incorporation of HRS in the two-stage clonal expansion model may increase the low-dose risk of mortality from all solid cancer types among male Japanese atomic bomb survivors. The obtained dose-response curve was similar to the HRS + 1 rAML dose-response curve presented here, but instead of decreasing the incidence, HRS was found to increase the incidence. Higher cancer mortality risk was found by Jacob et al. (2008) due to the assumption that increased cell killing can temporarily increase the proliferation rate of intermediate cells to overcompensate radiation-induced cell inactivation. Ban and Kai (2009) made a similar observation regarding the effect of ionizing radiation on the proliferation rate. Based on the available data, Jacob et al. (2008) found that both the linear-quadratic and the induced-repair cell survival models could describe the available cancer risk data equally well. In the present paper, the assumption was made that ionizing radiation exposure does not influence the proliferation rate of pre-leukemic cells, hence the model-based observation that the rAML incidence is lowered if HRS only affects surviving cell fractions. Findings similar to Jacob et al. (2008) can be obtained with the presented model if a cell killing-dependent proliferation rate is assumed (results not shown).
Although cellular HRS has been thoroughly investigated (Lambin et al. 1993;Short et al. 1999;Joiner et al. 2001; 1 3 Marples and Collis 2008;Olobatuyi et al. 2018), the available literature related to how HRS possibly affects low-dose cancer risk after acute exposure is scarce, which might be due to a lack of reliable biomarkers (Martin et al. 2013). The finding that low-dose HRS modifies the probability of rAML onset may not be limited to this form of cancer. In general, a consequence of HRS may be that this process changes the probability that certain radiation-induced (driver) mutations propagate/occur and contribute to long-term carcinogenesis. Then, acute doses absorbed during e.g. whole-body PET/ CT scans may be sufficiently large to cause a small HRSmediated increase/decrease in cancer risk compared to what is expected based on the linear no-threshold assumption. Two simple HRS terms were introduced here such that the linear coefficient of a risk model can be modified to include HRS. However, the application of these HRS terms should be limited to illustrative purposes because it requires parameters that cannot be identified from epidemiological data.
It may be possible to experimentally examine the presented hypotheses about the influence of HRS on the rAML incidence in male CBA/H mice. First, one should determine the dose-response curve for the number of cells with a deleted Sfpi1 copy. Data from Peng et al. (2009) indicate that bone marrow cells of mice irradiated with iron ions or X-rays may result in an HRS-dependent increase in Sfpi1 loss one day as well as 1 year after iron ion (CBA/ Ca mice) and one month after X-ray (CBA/H mice) exposure. Unfortunately, insufficient data points are available to definitively confirm/reject the presented hypothesis about HRS with respect to the Sfpi1 deletion. Therefore, the experiment from Peng et al. (2009) should be repeated with more doses. Based on such an experiment, it is possible to examine the HRS + assumptions regarding the induction of the Sfpi1 deletion (Fig. 2b). Second, if HRS truly occurs in vivo during each irradiation event in male CBA/H mice, one could conduct a dose fractionation experiment to test whether HRS affects the rAML incidence. Consider total absorbed doses of 0.4, 0.8, 1.2, 1.6 and 2.0 Gy, delivered in 20 fractions over four weeks. A dose fraction may then fall within the HRS region e.g. 1.2/20= 0.06 Gy. For the HRS + 1 assumption, one would expect to find fewer rAML cases for a dose fraction size that maximizes the HRS effect, whereas increased incidence could be detected if the HRS-mediated increased cell proliferation assumption from Jacob et al. (2008) is true. It should be noted that fractionated irradiation has been observed to induce repeatable HRS-mediated cell killing Turesson et al. 2010); however, whether repeatable HRS is induced depends on the cell line and interfraction time . Therefore, it is vital to first test if HRS can be induced repeatably before conducting animal experiments.
Although a dose fractionation experiment was conducted by Mole and Major (1983), only four cases were observed following a total dose of 1.5 Gy (72 mice, 5.6%) and 3.0 Gy (65 mice, 6.2%) delivered in 20 fractions. No conclusions can be made about HRS based on these results, because they may have been obtained due to chance since the sample/effect size is too small given the large variation in rAML incidence that is usually observed within/between investigations with CBA/H mice (Major and Mole 1978;Mole and Major 1983;Olme et al. 2013b;Verbiest et al. 2018). The variation in incidence between earlier (Major and Mole 1978;Mole and Major 1983) and recent experiments (Olme et al. 2013b;Verbiest et al. 2018) may be attributed to a difference in housing conditions. Within an experiment, it may be easier to detect the possible effect of HRS on the rAML incidence by classifying each rAML case to the major or minor pathway based on the presence/absence of the Sfpi1 deletion.
Conclusions
In conclusion, through a mathematical modeling approach it was shown how low-dose rAML incidence in male CBA/H mice may be influenced if HRS affects endpoints such as cell survival and the Sfpi1 deletion. For radiation protection, at the present state of knowledge, it is difficult to predict the relevance of HRS on cancer/leukemia incidence/mortality among humans. As discussed in the paper, HRS could either increase or decrease radiation-induced risk compared to what would be predicted by a linear or a linear-quadratic model. Through this work, a step has been set in the direction of expanding the limited available literature on the relationship between HRS and carcinogenesis. Furthermore, experiments have been proposed to identify the possible effect of HRS on rAML incidence, and investigate how the overall rAML dose-response curve can be described in terms of the minor/major rAML pathways.
Author contributions SS developed the model, wrote the code, performed numerical analyses, made all of the figures and co-wrote the paper. BB assisted in model development, co-wrote the paper and validated the code. LR co-wrote the paper. SVL supervised the research and co-wrote the paper. CB co-wrote the paper. FD supervised the research and co-wrote the paper.
Funding The authors received no specific funding for this work Availability of data and material Data are available from the corresponding author upon request.
Code availability
The R code required to reproduce all the results is available from the corresponding author upon request. | 10,554.6 | 2022-07-21T00:00:00.000 | [
"Medicine",
"Mathematics"
] |
Identification of an age-dependent biomarker signature in children and adolescents with autism spectrum disorders
Background Autism spectrum disorders (ASDs) are neurodevelopmental conditions with symptoms manifesting before the age of 3, generally persisting throughout life and affecting social development and communication. Here, we have investigated changes in protein biomarkers in blood during childhood and adolescent development. Methods We carried out a multiplex immunoassay profiling analysis of serum samples from 37 individuals with a diagnosis of ASD and their matched, non-affected siblings, aged between 4 and 18 years, to identify molecular pathways affected over the course of ASDs. Results This analysis revealed age-dependent differences in the levels of 12 proteins involved in inflammation, growth and hormonal signaling. Conclusions These deviations in age-related molecular trajectories provide further insight into the progression and pathophysiology of the disorder and, if replicated, may contribute to better classification of ASD individuals, as well as to improved treatment and prognosis. The results also underline the importance of stratifying and analyzing samples by age, especially in ASD and potentially other developmental disorders.
Background
Autism spectrum disorders (ASDs) are a clinically and biologically heterogeneous group of neurodevelopmental conditions characterized by a triad of core features: social and communication impairments and restricted repetitive behavior. The clinical manifestations of ASD have been shown to change over development. Crosssectional and longitudinal research indicates that the severity of the core features and maladaptive behaviors of ASD among adolescents and adults tend to abate with age [1][2][3][4]. A cross-sectional study showed improved gaze behavior and social functioning of ASD subjects between adolescence and adulthood, with the suggestion that increased mirror neuron system activity may contribute to these effects [5].
In addition to the clinical manifestations, there is accumulating evidence that individuals with ASD have significant differences in brain development compared to controls. The results of several studies that were reviewed in [6] have shown there is reduced functional activation in multiple brain areas of 2-to 4-year-old children during socio-emotional, cognitive and attention tasks. Also, studies have shown age-dependent changes in cortical development [7] in brain regions involved in social-cognitive and motor function [8], language [9], and symptom severity [10]. Taken together, the findings indicate that neurobiological alterations that occur during the first years of life may underlie the neuroanatomical, functional and behavioral aspects of ASD. Therefore, identification of biomarkers associated with these alterations may provide further insights into the disease etiology.
Thus far, there have been only a small number of studies that have attempted to identify molecular changes in ASD that occur at different ages. One study found agedependent gene expression changes in prefrontal cortex using whole-genome analysis of mRNA levels in postmortem brains of ASD subjects [11]. Most of the molecular profiling studies have investigated age-related changes in ASD subjects in the levels of growth factors such as brain-derived neurotrophic factor (BDNF). In ASD cases, the levels of BDNF were found to be significantly lower in 0-to 9-year-old children compared to those aged greater than 10 years, while no age-related differences in BDNF levels were found for non-ASD controls [12]. This suggested that there may be a delayed increase of BDNF with development. The 1 H nuclear magnetic resonance (NMR) analyses found lower frontal lobe ratios of N-acetylaspartate/creatine, which was correlated with age in ASD children [13]. This could reflect increased mitochondrial metabolism and may be related to symptoms of obsessional behavior and decreased social function of the patients.
Most previous molecular profiling studies of ASD have been performed using specific age groups, which precludes identification of changes that occur at different stages of development. Here we have attempted to gain further insight into age-related molecular trajectories in ASD by multiplex immunoassay profiling of 208 analytes in serum from patients and sibling controls, following partitioning into three age groups (4 to 9, 9 to 13 and 13 to 18 years). This platform has the advantage of being capable of screening multiple molecules simultaneously in biological samples and has been used previously to identify serum or plasma biomarkers in several areas of medicine, including neuropsychiatric conditions such as schizophrenia, bipolar disorder, major depressive disorder and Asperger syndrome [14][15][16].
Subjects
Subjects were recruited from Karakter Child and Adolescent Psychiatry and the Radboud University Nijmegen Medical Center in Nijmegen, The Netherlands. The subjects included 37 ASD subjects (age = 10.8 ± 3.5 years; body mass index (BMI) = 18.0 ± 3.7 kg/m 2 ) and 37 controls (age = 10.5 ± 3.2 years; BMI = 17.6 ± 3.0 kg/m 2 ). The Commissie Mensgebonden Onderzoek (CMO) regio Arnhem Nijmegen ethical committee approved the study protocols, informed written consent was given by the parents of all participants, and studies were conducted according to the Declaration of Helsinki. Clinical diagnosis of ASD was conferred by board certified child psychiatrists based on developmental history and psychiatric interview and observation and according to accepted international criteria (APA, DSM-IV-TR).
Diagnosis of ASD was confirmed by a structured developmental interview with the parents (ADI-R) [17]. Subjects with a diagnosis of autistic disorder (AD) or pervasive developmental disorder-not otherwise specified (PDD-NOS) were included in the study. The Wechsler Abbreviated Scale of Intelligence was administered to all participants to measure intelligence quotient, and age-appropriate Autism Spectrum Quotient (AQ) questionnaire scores were recorded for all ASD and control individuals. All diagnoses and clinical tests were performed by psychiatrists under Good Clinical Practice compliance to minimize variability. Unaffected control subjects were siblings recruited from the same families and had comparable age, gender and body mass index (BMI) to the respective patient populations.
Samples
Blood samples were collected from all ASD individuals and controls into S-Monovette 7.5 mL serum tubes (Sarstedt, Numbrecht, Germany). Serum was prepared using standard protocols by leaving samples at room temperature for 2 hours to allow clotting, followed by centrifugation at 4,000 × g for 5 minutes to remove clotted cells and other particulate material. The resulting supernatants were stored at −80°C in LoBind Eppendorf tubes (Hamburg, Germany). The study protocols, processing of clinical samples and execution of test methods were carried out in compliance with the Standards for Reporting of Diagnostic Accuracy (STARD) initiative [18].
Multiplex immunoassay analysis
The levels of 256 initial analytes were measured in 250 μL serum using multiplexed immunoassays (Discovery MAP™ platform) in a Clinical Laboratory Improvement Amendments (CLIA)-certified laboratory (Myriad-RBM; Austin, TX, USA) as described previously [14]. Briefly, samples were analyzed at optimized dilutions and raw intensity measurements were converted into absolute protein concentrations using duplicate 8-point standard curves. Sample analysis was randomized to minimize bias due to measurement-related effects.
Statistical analysis
The statistical programming software R (http://www.rproject.org/) was used to pre-process, analyze and plot the multiplex immunoassay data. First, the data were filtered to remove those assays with more than 30% of values lying outside the limits of quantitation. This resulted in exclusion of 48 assays. For the remaining 208 analytes, low values were replaced by 0.5 X the corresponding minimum values for that assay and high readings were replaced by 2.0 X the maximum levels. For each assay, values were log e transformed for analysis, and outlying values were removed if these exceeded more than 3 standard deviations from the means. Deviations from typical molecular developmental patterns in ASD siblings were assessed by calculating age-diagnosis interactions. The interaction was assessed using a linear model, adjusting for additional covariates of family membership, plate, BMI, and sex. A similar procedure was used to identify molecules changed in ASD, adjusting for these same additional covariates in a linear model. Next, relationships between molecules with significant age-diagnosis interactions were tested by computing Spearman rank correlation coefficient between each pair of molecules for control siblings using untransformed data. Statistical tests were deemed significant at P <0.05.
In-silico pathway analysis
The UniProt accession codes of proteins which showed diagnosis-age interactions were uploaded into the Ingenuity Pathways Knowledge Database (IPKB; Ingenu-ity™ Systems; Mountain View, CA, USA). Pathways most significant to the data set were determined by overlaying the identified proteins onto predefined pathway maps in the IPKB. A right-tailed Fisher's exact test was used to calculate P values associated with the identified pathways. The significance of the association between the dataset and canonical pathways was measured by the ratio of the number of significant molecules divided by the total number of molecules in the canonical pathway and by the Fisher's exact test P value.
Identification of altered molecules in autism spectrum disorder individuals compared to sibling controls
Multiplex immunoassay analysis of all ASD individuals (n = 37) and controls (n = 37) resulted in identification of nine proteins which were present at significantly different levels (interleukin-3, interleukin 12 subunit p40, interleukin-13, macrophage derived chemokine, stem cell factor, Tamm-Horsfall urinary glycoprotein, tumor necrosis factor beta, tyrosine kinase with Ig and EGF homology domains 2 and von Willebrand factor) ( Table 1). None showed a difference higher than 1.2-fold or less than 0.8-fold. We next determined whether molecular differences between ASD and control individuals were potentially obscured by the age range investigated.
Identification of molecules which showed diagnosis-age interactions
The investigated individuals were separated into age groups approximating time periods before (<9 years), during (9 to 13 years) and after (>13 years) puberty ( Table 2). ASD subjects and their unaffected control siblings did not differ significantly in mean age, body mass index (BMI), height or weight values. AQ scores were significantly different (P <0.05) between ASD and unaffected individuals. AQ scores did not change significantly with age for ASD individuals or for controls. Deviations from typical molecular developmental patterns in ASD subjects were assessed by calculating an age-diagnosis interaction using a linear model, as described in the Materials and Methods section. After adjusting for additional covariates of family membership, assay plate, BMI, and sex, 12 proteins showed significant diagnosis-age interactions (Table 3; Figure 1). None of these proteins overlapped with molecules found to be significantly different in the comparison of all ASD and control subjects ( Table 2). The most significant divergences in trajectories were observed for matrix metalloproteinase 7 (MMP-7) (P = 0.005; increasing slope), adiponectin (P = 0.007; increasing slope) and transferrin (P = 0.012; decreasing slope). The most profound ratiometric differences across age groups were seen for haptoglobin, cancer antigen 19-9 (CA-19-9), thyroglobulin (TG) and C-reactive protein (CRP), which were present at approximately 50% of control levels in the youngest age group (<9 years) and were increased by more than 200% compared to controls in the highest age group (>13 years). Four molecules (insulin-like growth factor binding protein 5 (IGFBP5), transferrin, neuropilin-1, creatine kinase-MB (CK-MB)) showed the opposite trajectory with respect to typical molecular levels, with higher levels seen in the youngest group and lower levels in the oldest group.
Correlations of molecules with significant diagnosis-age interactions
Spearman rank correlation testing showed that the levels of 11 out of the 12 molecules with significant agediagnosis interactions were also significantly correlated with at least one other molecule ( Figure 2). TRAIL-R3 was the only protein that was not correlated with at least one other. Neuropilin 1 had the highest Spearman correlation coefficient and most significant correlations with the proteins transferrin (R = 0.779, P = 1.36E-08) and thyroglobulin (R = −0.618, P = 4.62E-05). Also, adiponectin, transferrin and neuropilin 1 showed the greatest number of connections by having significant correlations with four other proteins.
Pathway analysis
The UniProt accession codes of all 12 proteins were uploaded into the Ingenuity Pathways Knowledge Base (IPKB; www.ingenuity.com) to identify the most over-represented pathways associated with the dataset (Table 4). This showed that the diseases most significantly associated with these proteins were hematological diseases (P <0.001) and endocrine system disorders (P <0.001). Both of these categories were linked to changes in adiponectin, creatine kinase-MB, C-reactive protein, haptoglobin, matrix metalloproteinase 7 and transferrin, although interferon inducible T cell αchemoattractant (ITAC) was associated specifically with hematological disease and thyroglobulin was specifically related to endocrine system disorders. The most significant canonical pathway associated with the proteins was acute phase response signaling (P <0.001), based on changes in C-reactive protein, haptoglobin and transferrin.
Discussion
This is the first proteomic profiling study aimed at identifying age-related serum biomarker changes in young The data obtained for all subjects were partitioned into the indicated age bins and geometric means were used to calculate fold changes (autism spectrum disorder/control). Figure 1 Age-dependent changes in expression of serum proteins in 4-to 18-year-old autism spectrum disorder (ASD) subjects compared to matched sibling controls. Protein concentrations were plotted against age after log e transformation, and a linear regression was fit in ASD subjects (orange) and sibling controls (blue). The abbreviations are as described in Table 3. Table 3.
ASD subjects. In addition, we used well-matched nonaffected siblings, allowing us to detect changes related specifically to the manifestation of ASD as a clinical state. Using multiplex immunoassay analysis of 208 molecules we identified significantly different age-dependent trajectories in the levels of 12 proteins in ASD individuals compared to unaffected sibling controls. The most significant canonical pathway associated with the agedependent changing proteins was acute phase response, consistent with known alterations in immunological and inflammatory functions in ASD individuals [19,20]. A literature review by Rossignol and Frye highlighted 10 studies that reported an increase in prevalence of autoimmune disorders in family members of children with ASD [21], and another study has linked perturbed immune function in young autism children to gastrointestinal disturbances [22]. In addition, changes in other proteins were consistent with previous reports related to alterations in metabolism [23] and mitochondrial function [24]. Furthermore, Adams and coworkers have comprehensively reviewed the link between autism and metabolic disturbances in young and adult autistic patients [25]. Interestingly, another study showed that treatment of autism patients with pioglitazone resulted in improvement of some symptoms, with a stronger effect in younger patients [26]. This is the first report showing that changes in these molecules occur in an age-dependent manner in ASD individuals. In addition, our findings suggest that pubertal status may be an important factor to take into consideration after identifying opposing directional changes in the oldest and youngest age groups in ASD compared to unaffected individuals. It is likely that the significantly different trajectories in the inflammation-and metabolism-related molecules with age in ASD are linked at a fundamental level [27]. For example, C-reactive protein and haptoglobin, which both increased with age in the ASD subjects, are components of the acute phase response, although these same proteins have also been used as biomarkers for immune disorders and metabolic syndrome [28,29]. We also found increased levels of TRAIL-R3, which has been linked to inflammation by regulation of apoptotic processes in immune cells [30] and also to the loss of insulin-producing pancreatic beta cells in type 1 diabetes mellitus [31]. Likewise, we found increased levels of matrix metalloproteinase (MMP) 7 in the higher age group of ASD individuals, suggestive of an inflammatory phenotype. MMPs play a pivotal role in the pathogenesis of autoimmune and inflammatory conditions such as arthritis, atherosclerosis, pulmonary emphysema and endometriosis [32]. In addition, changes in the MMPs have been linked to metabolic diseases including type 2 diabetes mellitus [33].
We also found higher levels of adiponectin with increasing age in ASD individuals compared to a decrease with age seen in the control subjects. The finding of lower levels of adiponectin in the younger age groups of ASD patients is consistent with the findings of Shimuzu et al., which showed decreased levels of this protein in ASD subjects compared to controls at an average age of 12 years old [34]. Adiponectin is involved in the control of fat metabolism and insulin sensitivity. Normally, low levels of this protein have been used as a biomarker for oxidative stress, diabetes and a risk factor for metabolic syndrome [35,36]. Therefore, this finding may be in contrast with the reported higher incidence of these conditions in ASD individuals [37,38]. However, this could also be due to the fact that most previous studies have not accounted for any differences in age-related trajectories. In line with this, we also found decreased levels of insulin-like growth factor binding protein 5, which is known to be involved in cell proliferation, differentiation and apoptosis [39], in diabetes and other metabolic conditions [40]. The finding that thyroglobulin levels were increased with age in ASD individuals may have metabolic links as this protein is an essential autocrine regulator of physiological thyroid follicular function that counteracts the effects of thyroid stimulating hormone [41]. Variations in thyroglobulin are associated with susceptibility to autoimmune thyroid disease type 3, which include Graves' disease and Hashimoto thyroiditis [42].
Other potential markers of inflammation or immune function that were increased with age included cancer antigen . Although CA-19-9 has been mainly associated with pancreatic cancer [43], it has also been used a biomarker of pancreatic tissue damage as seen in type 2 diabetes and other metabolic disorders [44]. Likewise this marker is elevated in ASD individuals who have insulin resistance [45], suggesting that the ASD individuals in this study may become more susceptible to such disorders after puberty. This is consistent with the increased prevalence of metabolic conditions in Table 4 In-silico pathway analysis of proteins with significant diagnosis-age interactions Table 3. The ratio for canonical pathways represents the number of molecules from the data set divided by the total number of molecules in that pathway.
young ASD individuals compared to the general public [46]. We also found high levels of creatine kinase-MB at younger ages, consistent with the findings of a previous study in children with ASD [47]. However, we found that the levels of creatine kinase decreased with age, which suggests that progressive effects may occur in energy metabolism or related pathways in ASD. This could be linked to mitochondrial dysfunction and oxidative stress that has been associated with the etiology of autism [21]. The multiplex immunoassay profiling analysis also led to identification of decreased levels of neuropilin 1 in young ASD individuals compared to controls. The neuropilin protein family has been implicated in the embryonic development of neural and vascular systems, and regulation of many processes in adults, such as angiogenesis, the vascular system and the immune response [48]. This is in line with previous reports showing effects on both of these pathways in ASD subjects [7][8][9][10]49]. Effects on the vascular system can be reflected clinically by an abnormal blood flow. Therefore, it is interesting that neuroimaging studies have identified changes in blood flow in and between certain brain regions of individuals with ASD when tested under resting and active conditions [50,51]. It should be noted that we did not find any age-related changes in the levels of BDNF as described in previous studies [12]. However, this could be due to the fact that such changes have only been described for individuals with ASD in the 0 to 9 years age range and the present study only considered participants older than 4 years of age.
Conclusions
One limitation of this study was the potential bias in the molecular class of the investigated molecules. This procedure was based on the commercial availability of a multiplexed immunoassay platform and did not specifically target proteins of other functional classes. Therefore, it is possible that a different selection of molecules would lead to different conclusions from those drawn in this study. Another limiting factor was the small number of clinical serum samples tested using the multiplex analysis. This was due to the rarity of such samples that could be obtained using strict standard operating procedures from both ASD individuals and matched sibling controls. In addition, the samples used in this study were obtained using matched ASD individuals and controls sampled at a single time point. It would be more accurate to repeat the study under prospective conditions in which multiple samples are taken from the same subjects over time, although this is most likely impractical and will result in a high drop-out rate. Finally, the current findings should be considered as preliminary as we did not correct P values from the molecular analysis studies for multiple hypothesis testing. However, there have been no previous proteomic profiling studies carried out in young autism patients that have led to identification of large effects because well-controlled studies using such well-characterized patients are rare. In conclusion, we have identified 12 serum proteins involved in inflammation and metabolic dysfunction that appear to show different trajectories in ASD individuals compared to controls. The predominant effect appeared to be an age-related increase in inflammation and metabolic dysfunction. Future research in this area should incorporate the use of follow-up data from analysis of separate cohorts to confirm these findings. The study of younger subjects in prospective studies would provide further insight into the role of these proteins in ASD and enable development of more accurate, early diagnostic tests. Also, sampling from the same individuals over time will help to determine the true age-dependency of these serum protein expression changes. Furthermore, association studies that compare the protein readings with the time course of symptoms and other read-outs, such as those from functional imaging analyses [52], will be helpful in increasing our understanding of the changes which occur in ASD at different developmental stages. We anticipate that the development and application of biomarker test panels based on the current findings will lead to earlier and more accurate diagnosis and could also lead to the development of much-needed novel therapies for individuals with these conditions. Competing interests PCG, HR and SB are consultants for Myriad-RBM. However, this does not interfere with policies regarding sharing of data and materials as specified by the journal.
Authors' contributions JMR and PCG carried out the molecular profiling data analyses, interpreted the results, prepared the figures and tables, and wrote the manuscript. JACB and HR wrote the manuscript and carried out editing. JG, NR and BF designed the clinical studies and edited the manuscript. JKB and SB conceived the study, interpreted the results and edited the manuscript. All authors read and approved the final manuscript. | 5,097.4 | 2013-08-06T00:00:00.000 | [
"Psychology",
"Medicine",
"Biology"
] |
Measurement of high energy dark matter from the Sun at IceCube
It is assumed that heavy dark matter particles (HDMs) with a mass of O(TeV) are captured by the Sun. HDMs can decay to relativistic light dark matter particles (LDMs), which could be measured by km$^3$ neutrino telescopes (like the IceCube detector). The numbers and fluxes of expected LDMs and neutrinos were evaluated at IceCube with the $Z^{\prime}$ portal dark matter model. Based on the assumption that no events are observed at IceCube in 6 years, the corresponding upper limits on LDM fluxes were calculated at 90\% C. L.. These results indicated that LDMs could be directly detected in the O(1TeV)-O(10TeV) energy range at IceCube with 100 GeV $\lesssim m_{Z^{\prime}} \lesssim$ 350 GeV and $\tau_{\phi} \lesssim 5\times10^{22}$ s.
Introduction
It was found in cosmological and astrophysical observations that the bulk of matter in the Universe consists of dark matter (DM). 84% of the matter content is thermal DM in the Universe, which were created thermally in Early Universe [1][2][3]. The searches for high energy (from O(GeV) to O(TeV)) neutrinos which are produced by the DM annihilation in the Sun's core have been performed using the data recorded by the IceCube and ANTARES neutrino telescopes [4,5]. But no one has found thermal DM particles yet [4][5][6][7][8][9].
The heavy dark sector with a mass of O(TeV) is an alternative DM scenario [10][11][12][13][14][15][16][17][18][19][20]. In this model, there exist at least two DM species in the Universe (for example, heavy and light DM particles). Heavy dark matter (HDM), φ, is a thermal particle which is generated by the early universe. The bulk of present-day DM consists of them. The other is a stable light dark matter particle (LDM), χ which is the product of the decay of HDM (φ → χχ). Due to the decay of long-living HDMs (τ φ ≫ t 0 [21,22], t 0 is the age of the Universe), the present-day DM may also contain a small component which is high energy LDMs. Besides direct measurements of HDMs, one can detect the standard model (SM) products of decay of HDMs. The search for these high energy SM particles from the Sun (they should be neutrinos in this measurement) has been performed using the data recorded by the IceCube neutrino observatory [23]. In this work, however, the products of the decay of HDMs are a class of LDMs [24,25], not SM particles.
The LDMs from the Sun's core could be more easily detected with IceCube, compared to those from the Earth's core, since the HDM accumulation in the Sun is much greater than that in the Earth [26,27]. LDMs would interact with nuclei when they pass through the Sun, the Earth and ice. Those LDMs can be directly measured with the IceCube neutrino telescope. The capability of the measurement of those LDMs will be discussed here. In this measurement, the background consists of muons and neutrinos generated in cosmic ray interactions in the Earth's atmosphere and astrophysical neutrinos.
In what follows, the distributions and numbers of expected LDMs and neutrinos will be evaluated in the energy range 1-100 TeV assuming 6 years of IceCube data. Then the upper limits on LDM fluxes were also calculated at 90% C.L.. Finally, the capability of the measurement of TeV LDMs will be evaluated at IceCube.
HDM accumulation in the Sun
HDMs of the Galactic halo would collide with atomic nuclei in the Sun when their wind sweeps through the Sun. A fraction of those HDMs would lose enough kinetic energy to be trapped in orbit. With further collisions with atomic nuclei in the Sun's interior, they would eventually thermalize and settle in the Sun's core under the influence of gravitation of the Sun's interior. Those HDMs inside the Sun can decay into LDMs at an appreciable rate. Then the number N of HDMs, captured by the Sun, is obtained by the following equation [26] dN dt where C cap , Γ ann and C evp are the capture rate, the annihilation rate and the evaporation rate, respectively. The evaporation rate is only relevant when the DM mass < 5 GeV [26], which are much lower than my interested mass scale (the mass of HDM, m φ ≥ 1 TeV). Thus their evaporation contributes to the accumulation in the Sun at a negligible level in the present work. C dec is the decay rate for HDMs. Since the fraction of HDM decay ≤ 3.2×10 −14 per year (τ φ ≥ 10 21 s), its contribution to the HDM accumulation in the Sun can be ignored in the evaluation of HDM accumulation. Γ ann is obtained by the following equation [26] Γ ann = C cap 2 where τ = (C cap C ann ) − 1 2 is a time-scale set by the competing processes of capture and annihilation. At late times t ≫ τ one can approximate tanh 2 t τ =1 in the case of the Sun [26]. C cap is proportional to σ φN m φ [26,30], where m φ is the mass of HDM and σ φN is the scattering cross section between the nucleons and HDMs. The spin-independent cross section is only considered in the capture rate calculation. Then σ φN is taken to be 10 −44 cm 2 for m φ ∼ O(TeV) [6,7]. The HDM distribution in the Sun is obtained by [26], where G N is the Newtonian gravitational constant. ρ s ≈ 151 g/cm 3 and T s ≈ 15.5×10 6 K are the matter density and temperature at the sun center, respectively. R sun is the radius of the Sun. One finds that HDMs are concentrated around the center of the Sun.
LDM and neutrino interactions with nuclei
In this work, a Z ′ portal dark matter model [28,29] is taken for LDMs to interact with nuclei via a neutral current interaction mediated by a gauge boson Z ′ which couples to both the LDMs and quarks (see Fig. 1a in Ref. [24]). Here a LDM is assumed to be a Dirac fermion. As assumed in Ref. [24], the interaction vertexes (χχZ ′ and qqZ ′ ) are vector-like in this model, since Z ′ vector boson typically acquires mass through the breaking of an additional U(1) gauge group at high energies. This deep inelastic scattering (DIS) cross-section for χ + N → χ + anything (N is a nucleus) is computed in the same way as the Ref. [24] in this work. The effective interaction Lagrangian can be written as follows: where q i 's are the SM quarks, and g χχZ ′ and g qqZ ′ are the Z ′ -χ and Z ′ -q i couplings, respectively. This Deep inelastic scattering (DIS) cross-section is computed in the lab-frame using tree-level CT10 parton distribution functions [31]. The coupling constant G (G = g χχZ ′ g qqZ ′ ) is chosen to be 0.05. The masses of Z ′ are taken to be 100 GeV, and 250 GeV and 350 GeV, respectively. Here the mass of LDM m χ is assumed to be 8 GeV, then the outgoing energy of LDM caused by the decay of The computed DIS cross section obeys a simple power-law form for the energies between 1 TeV and 1PeV. With m Z ′ = 250 GeV, for example, its cross section is obtained by the following function: where E χ is the LDM energy.
The DIS cross-section for neutrino interaction with nuclei is computed in the lab-frame and given by simple power-law forms [32] for neutrino energies above 1 TeV: where σ νN (CC) and σ νN (NC) are the DIS cross-sections for neutrino interaction with nuclei via a charge current (CC) and neutral current (NC), respectively. E ν is the neutrino energy.
or neutrino energy and E χ ′ ,lepton is the outgoing DM particles or lepton energy). E sec = yE in , where E sec is the secondaries' energy after a LDM or neutrino interaction with nuclei. The mean values of y for LDMs have been computed: The LDM and neutrino interaction lengths can be obtained by where N A is the Avogadro constant, and ρ is the density of matter, which LDMs and neutrinos interact with.
Flux of LDMs which reach the Earth
The LDMs which reach the Earth are produced by the decay of HDMs in the Sun's core. These LDMs have to pass through the Sun and interact with nuclei inside the Sun. Then the number N s of LDMs which reach the Sun's surface is obtained by the following equation: where N 0 = ts 0 dN dt dt is the number of HDMs captured in the Sun. t s and t 0 are the ages of the Sun and the Universe, respectively. T is the lifetime of taking data for IceCube and taken to be 6 years. If the distance from the Sun's center to the the LDM interaction length at i×δL away from the Sun's center. ρ i is the density at i×δL away from the Sun's center [33]. N s is computed in column density in the present work. The first exponential term in the Eqn. 10 is the fraction of decay of HDMs in the Sun's core. The term of continued product in the Eqn. 10 is the faction of LDMs which reach the Sun's surface. Here N is taken to be 10 4 . The results with N =10 4 is sufficiently accurate, whose uncertainty is about 0.05%. Then the flux Φ LDM of LDMs, which reach the Earth, from the Sun's core is described by where D se is the distance between the Sun and Earth.
Evaluation of the numbers of expected LDMs and neutrinos at IceCube
The lifetime for HDMs decaying into SM particles is strongly constrained (τ ≥ O(10 26 − 10 29 )s) by diffuse gamma and neutrino observations [22,[34][35][36]. Since the present work considers an assumption that HDMs are unable to decay to SM particles, the constraints on the lifetime for HDM are only those based on cosmology (the age of the Universe is about 10 17 s). Since τ φ ≫ 10 17 s in the Z ′ portal dark matter model [21,22], τ φ ≥ 10 21 s in this work. IceCube is a km 3 neutrino telescope and deployed in the deep ice below the geographic South Pole [37]. It can detect neutrino interactions with nuclei via the measurement of the cascades caused by their secondary particles above the energy threshold of 1 TeV [38]. The LDMs from the Sun, which pass through the IceCube detector, would interact with the nuclei inside IceCube. This is very similar to the DIS of neutrino interaction with nuclei via a neutral current, whose secondary particles would develop into a cascade at IceCube.
In this analysis, LDM events were selected with the following event selection criteria. First, only cascade events were kept. The track-like events are a class of background sources. The track-like events initiated by muons due to atmospheric muons and muon neutrinos would be rejected after that event selection. To reduce more background events initiated by atmospheric muon, Second, only up-going events occurring during a period in which the Sun was below the horizon were kept. Besides, only those up-going events from the Sun's direction were kept. Due to the sizable energy and angular uncertainties caused by the event reconstruction with IceCube, the cut windows for energy and angular separation between cascades and the Sun's direction would be used to extract signal candidate events from the up-going cascades events. These windows were taken to be one standard uncertainty and one median uncertainty, respectively. Certainly, the residual signals still contain a small neutrino component after all those event selections. Since the LDM and neutrino cascades are hard to distinguish at IceCube, one could only evaluate the number of expected neutrinos fallen into those windows.
The factors (C 1 and C 2 ) should be considered in the evaluation of the numbers of expected LDMs. C 1 is equal to 68.3% (that is 68.3% of the LDM events reconstructed with IceCube fall into a window caused by one standard energy uncertainty). C 2 is equal to 50% (that is 50% of the LDM events reconstructed with IceCube fall into a window caused by one median angular uncertainty). Then the number N det of expected LDMs obeys the following equation: where A ef f (E) obtained from the figure 2 in Ref. [39] is denoting the effective observational area for IceCube. E is denoting the energy of an incoming particle. P (E, ǫ(t)) can be given by the following equation: where L earth,ice is denoting the LDM interaction lengths with the Earth and ice, respectively. D is denoting the effective length in the IceCube detector and taken to be 1 km in this work. D e (ǫ(t)) = 2R e sin(ǫ(t)) is denoting the distance through the Earth. R e is denoting the radius of the Earth. ǫ(t) is denoting the obliquity of the ecliptic changing with time. The maximum value of ǫ is 23.44 • . After rejecting track-like events, the background remains two sources: astrophysical and atmospheric neutrinos which pass through the detector of IceCube. Only a neural current interaction with nuclei is relevant to muon neutrinos considered here. The astrophysical neutrinos flux can be described by [40] Φ astro where Φ astro ν is denoting the total astrophysical neutrino flux. The coefficients, Φ astro , α and β are given in Fig. VI.10 in Ref. [40]. The atmospheric neutrinos flux can be described by [41] Φ atm where x = log 10 (E ν /1GeV ). Φ atm ν is denoting the atmospheric neutrino flux. The coefficients, C ν (γ 0 , γ 1 and γ 2 ) are given in Table III in Ref. [41].
The neutrinos fallen into the energy and angular windows mentioned above would also be regarded as signal candidate events, so the evaluation of the number of expected neutrinos has to be performed by integrating over the region caused by these windows. Then the number of expected neutrinos N ν obeys the following equation: (16) where r e (ǫ(t)) = D e (ǫ(t)) 2 . θ is denoting the angular separation between the neutrinos and the Sun's diretion. θ min = 0 and θ max = σ θ . σ θ is denoting the median angular uncertainty for cascades at IceCube. The standard energy and median angular uncertainties can be obtained from Ref. [42] and Ref. [43], respectively. P (E, ǫ(t), θ) can be given by where D ′ e (ǫ(t), θ) = D e (ǫ(t))cos(θ) is denoting the distance through the Earth.
Results
The distributions and numbers of expected LDMs and neutrinos were evaluated in the energy range 1-100 TeV assuming 6 years of IceCube data. Fig. 1 shows the distributions (with an energy bin of 100 GeV) of expected LDMs and neutrinos. Compared to LDMs with m Z ′ =100 GeV and τ φ = 5 × 10 22 s, the numbers of neutrino events per energy bin are at least smaller by 4 orders of magnitude in the energy range 1-10 TeV. As shown in Fig. 1, the dominant background is caused by atmospheric neutrinos in the energy range 1-5 TeV but astrophysical neutrinos at energies above about 10 TeV in this measurement. The numbers of expected neutrinos (see black solid line) are shown in Fig. 2 and 3. The evaluation of the numbers of expected neutrinos was performed by integrating over the region caused by the energy and angular windows described above. The black dash line denotes the number of expected atmospheric neutrinos. These two figures both indicate the neutrino background can be ignored in this measurement. The numbers of expected LDMs with m Z ′ =100 GeV and τ φ = 10 21 s can reach about 70 and 1 at 1 TeV and 5.3 TeV at IceCube, respectively, as shown in Fig. 2 (see the red dash line).
Discussion and Conclusion
The Ref. [44] presents an analysis of neutrino signals due to the DM annihilation in the Sun with 6 years of IceCube data. This analysis has not found any significant indication of neutrinos due to the DM annihilation in the Sun. Since the LDM and neutrino signals are hard to distinguish at IceCube, it is a reasonable assumption that no events are observed in the measurement of LDMs due to the decay of HDM in the Sun at IceCube in 6 years. The corresponding upper limit on LDM flux at 90% C.L. was calculated with the Feldman-Cousins approach [45] (see the black solid line in Fig. 4 and 5). Based on the results described above, it is a reasonable conclusion that those LDMs could be directly detected in the O(1TeV)-O(10TeV) energy range at Ice-Cube with 100 GeV m Z ′ 350 GeV and τ φ 5 × 10 22 s. Since these constraints are only given by the assumptions mentioned above, certainly, the experimental collaborations, like the IceCube collaboration, should be encouraged to conduct an unbiased analysis with the data of IceCube.
Since Φ LDM is proportional to 1 τ φ (see Eqn. 10), the above results actually depends on the lifetime of heavy DM, τ φ . If τ φ varies from 10 18 s to 10 20 s, the numbers of expected LDMs with IceCube are larger by from 3 to 1 orders of magnitude than that with τ φ = 10 21 s, respectively. Besides, the capability of the measurement of those LDMs was roughly evaluated with the ANTARES telescope. Those LDMs could be directly detected at energies with O(1TeV) at ANTARES with m Z ′ < 200 GeV and τ φ < 10 20 s. Compared to IceCube, the expected signal to background rate is larger by about one order of magnitude with ANTARES. It is more difficult for ANTARES to detect those LDMs, however, since the effective area for ANTARES is smaller by about 2 orders of magnitude than that for IceCube at energies above 1 TeV [39,46]. As we all know, the capabilities of the measurement of those LDMs should be substantially improved with IceCube and ANTARES if their upgrading projects will be completed in the future.
Ref. [28] presents an analysis of the constraints on the mass of Z ′ and g χχZ ′ using the observations of direct, indirect measurement, collider and cosmology. Considered the assumption that the bulk of present-day DM consists of HDMs in this work, observations of cosmology and direct, indirect measurements of DM are inappropriate to analysis the results in this work. Fig. 6 in Ref. [28] shows the result for DM with a mass of 8 GeV. The whole light Z ′ window (m Z ′ < 1 TeV) is ruled out by the observations of LHC and Tevatron using the dijet data. To measure DM generated by colliders, the dijet+E miss T analysis is more reasonable. Fig. 6 in Ref. [28] presents the light Z ′ window is not ruled out by LHC at 8 TeV using the dijet+E miss T data with g χχZ ′ < 0.25. | 4,470.2 | 2021-06-04T00:00:00.000 | [
"Physics"
] |
Long-lived quantum speedup based on plasmonic hot spot systems
Long-lived quantum speedup serves as a fundamental component for quantum algorithms. The quantum walk is identified as an ideal scheme to realize the long-lived quantum speedup. However, one finds that the duration of quantum speedup is very short in real systems implementing quantum walk. The speedup can last only dozens of femtoseconds in the photosynthetic light-harvesting system, which was regarded as the best candidate for quantum information processing. Here, we construct one plasmonic system with two-level molecules embodied in the hot spots of one-dimensional nanoparticle chains to realize the long-lived quantum speedup. The coherent and incoherent coupling parameters in the system are obtained by means of Green's tensor technique. Our results reveal that the duration of quantum speedup in our scheme can exceed 500 fs under strong coherent coupling conditions, which is several times larger than that in the photosynthetic light-harvesting system. Our proposal presents a competitive scheme to realize the long-lived quantum speedup, which is very beneficial for quantum algorithms.
I. INTRODUCTION
Quantum information exhibits the advantage over its classical counterpart due to the appearance of the quantum speedup [1][2][3][4][5][6][7][8][9][10][11][12]. Long-lived quantum speedup has been widely applied in the quantum information processing, e.g., quantum algorithms. One ideal theoretic scheme involving the long-lived quantum speedup is the quantum walk [13,14]. The mean squared displacements of excitation in the ideal one-dimensional (1D) quantum walks display the ballistic spreading (∆x) 2 ∝ t 2 , Such rate of spreading in the quantum walk is indicated as the ideal quantum speedup, which was commonly attributed to the quantum coherence in the systems [13][14][15]. In comparison, the corresponding classical random walk characterizes the diffusive spreading (∆x) 2 ∝ t and does not possess the quantum speedup.
In recent years, many experiments have demonstrated the existence of quantum coherence in natural systems. For example, the ultrahigh efficient transport due to the quantum coherence has been observed in photosynthetic light-harvesting systems [16][17][18][19][20][21][22][23]. These systems are treated as potential platforms to realize the continuoustime quantum walk and quantum speedup algorithms. Nevertheless, the long-lived quantum coherence in photosynthetic light-harvesting systems cannot ensure the long-lived quantum speedup. Hoyer and coauthors [24] pointed out that, due to the disorder and dephasing, the transition from ballistic to diffusive spreading occurs at about 70 fs in photosynthetic light-harvesting complexes even though the quantum coherence lasts much longer. The short duration of quantum speedup hinders photosynthetic light-harvesting systems to be employed as a platform in the realization of quantum speedup algorithms.
Recently, the quantum coherence has also been found in nanophotonic systems such as nanocavities [25,26], photonic crystals [27] and plasmonic systems [28]. As addressed in Refs. [28][29][30][31], the coupling resonances among nanoparticle trimer could induce the strong couplings between the molecules, and the strong quantum coherence has been revealed between two molecules in a symmetrical nanoparticle trimer system [28]. Given that the quantum coherence is the key factor for constructing the quantum walks, these works motive us to study how to realize the ideal quantum walks based on strong coupling plasmonic nanostructures and explore the quantum speedup in these nanostructures.
In this work, we demonstrate that 1D continuous-time quantum walk can be constructed within a scheme of the nanoparticle chain involving plasmonic hot spots. The decoherence from the ambient environment has been taken into account. The dynamics for continuous-time quantum walk in such a system is obtained by means of the Lindblad master equation approach and the electromagnetic Green's tensor technique. Our results reveal that, due to the strong nearest-neighbor coupling between the molecules, the duration of quantum speedup in our proposed system can reach 500 fs, which is several times larger than the duration of quantum speedup in the photosynthetic light-harvesting system. Our implementation of continuous-time quantum walk based on such plasmonic nanostructures presents a new platform to realize the quantum speedup algorithms.
The rest of this paper is arranged as follows: in Sec. II, we present the general description for the plasmonic hot spot system. The correspondence between the plasmonic hot spot system and the continuous-time quantum walk has been provided. Then in Sec. III, we provide the dynamics of the plasmonic hot spot system with different frequencies. The study of quantum speedup in our plasmonic hot spot system is addressed in Sec. IV. Further discussions with different parameters of the chain, the number of molecules and the nonlocal effect are presented in Sec. V. Finally, we make a summary in Sec. VI.
II. GENERAL DESCRIPTIONS IN 1D PLASMONIC NANOPARTICLE CHAIN
The 1D nanoparticle chain is depicted in Fig. 1, in which the separation distances between the particles are assumed as the same (denoted by d). The six two-level molecules (marked by 1 ∼ 6) are inserted into the gaps of the chain, and the orientations of the electric dipole moments of these molecules are assumed along the axis of the chain. Such configuration is also indicated as the plasmonic hot spot system [28]. Here, we consider one excitation in these two-level molecules. The six molecules can be mapped to the positions in the 1D quantum walk (see Fig. 1). Though we focus on the dynamics of the molecular system which is composed by six molecules in the main text, we also study the system with other numbers of molecules and provide the discussion in Supplemental Material. Here, the position of the ith molecule is expressed by i, and the displacement x i of the ith molecule is given by its position in the 1D chain which is indicated in Fig. 1. The mean-squared displacement of the excitation in these molecules can be obtained as where the ∆x denotes the displacement between the current and initial position of the molecules. The density matrix ρ describes the density matrix of all molecules. In the following, we will study the evolution of these molecules and provide the correspondence between the dynamics of the excitation in the molecules and 1D quantum walk.
The inevitable dissipation from the metal nanoparticles and radiation into the free space should be considered for studying the dynamics of the molecules. The spreading speed of excitation and distance relate closely to the dynamics of the molecules. The dynamics of the six molecules can be described in the form of the Lindblad master equation [20,32] where σ † i and σ i are the creation and annihilation operators of the ith molecule, γ ij represents the interference term (incoherent coupling) between the ith and the jth molecules, which could lead to the decay of the offdiagonal elements of density matrix ρ. The γ ii represents the dissipation of the ith molecule, which includes the dissipation in the environment and non-radiation transition. In the metal nanoparticle cluster environment, the dissipation arises mainly from the loss in metal. The Hamiltonian of these molecules is described as Here, N is the number of molecules, and N = 6 is taken in this work. The parameter ω 0 is the transition frequency of molecule, and here for simplicity, we assume they have the same transition frequency. The coefficient g ij in Hamiltonian is the coherent coupling strength between the ith and the jth molecules, where P stands for the principle integral, and ε 0 is the relative dielectric constant of the background medium, in this work the background is taken as water (ε 0 = 1.77).
Here G ↔ is the total Green's tensor. To avoid the sophisticated principle integral, the g ij can be simplified as [28,33] where G ↔ s represents the scattered Green's tensor of the nanoparticle chain, and the detailed calculations can be found in Appendix. With the electromagnetic Green's tensor technique, the incoherent coupling strength γ ij appearing Eq. (2) can be obtained as in which the calculation of the total Green's tensor G ↔ is addressed in Appendix. According to Eq. (5), the dissipation in the ith molecule can be expressed as The calculation procedure for γ ii can be found in Appendix.
In this work, all the interactions among the six molecules have been included, from the nearest neighbor interactions between the 1st and 2nd molecules, to the interaction between the 1st and 6th molecules. According to the symmetry of the system, the interactions (coherent and incoherent) between the 1st and 2nd molecules are the same to the interactions between the 5th and 6th molecules, namely g 12 = g 56 and γ 12 = γ 56 , and for the other nearest neighbor interactions, there is g 23 = g 45 and γ 23 = γ 45 . Similarly, for the non-nearest-neighbor interactions we have g 13 = g 46 and γ 13 = γ 46 , g 24 = g 35 and γ 24 = γ 35 , g 14 = g 36 and γ 14 = γ 36 , g 15 = g 26 and γ 15 = γ 26 . The detailed calculation method for g ij and γ ij can be found in Appendix.
In the following, we will investigate the correspondence between the dynamics of molecules in hot spot system and the standard quantum walk. By employing the quantum trajectory method introduced in Refs. [20,21], we can re-express the dynamics of the excitation in molecules as where the effective Hamiltonian can be written as Notice that H eff is a non-Hermitian Hamiltonian, and it commutes with excitation number operator N = N i=1 σ † i σ i , thus it preserves the number of excitation, and can only give rise to the jump of excitation between different molecules. The second term in the right of Eq. (7) originates from the incoherent interaction between the two-level molecules and plasmons, it can induce dephasing in these molecules without changing the number of excitations. These two terms mentioned above induce the jumps in the single-excitation manifold and no jump to the other excitation manifolds. The last term in the right of Eq. (7) originates from the dissipation in the plasmonic environment and radiation to the free space, it can change the number of excitations and generate jumps between excitation manifolds. When neglecting the jumps to other excitation manifolds, we have the master equation in the case of a no jump trajectory as where ρ eff is the effective density operator describing the density matrix of all molecules. This equation is often considered as a directed quantum walk on the singleexcitation manifold described by the density operator ρ eff [21,34]. In our discussion below, we will focus on the dynamics of density operator ρ including the jumps between excitation manifolds [i.e., Eq. (7)]. We will show that even when influenced by the ambient environment, our system can still possess long-lived quantum speedup.
III. THE QUANTUM WALK DYNAMICS IN 1D PLASMONIC HOT SPOT SYSTEMS
According to the theory in Sec. II, the dynamics of our system involving one excitation depends closely on the coherent and incoherent coupling strengths among the molecules. In our study, the nanoparticles are taken as the Ag spheres. For the dielectric functions of Ag, the Drude model is adopted (ω p = 9.01 eV and γ = 0.05 eV). In Fig. 2, when the radii of the Ag spheres are taken as R = 12 nm, and the separation distances between two nearest-neighbor spheres are fixed at d = 2 nm , we present the coherent and incoherent coupling strengths and molecule decay rates as a function of the transition frequency of molecule. Such configuration could be realized by using small molecules that link the particles in a line, like the experimental research shown in Ref. [30], in which the distance can be as small as about 1 nm. In Fig. 2(a), the nearest-neighbor coherent couplings g 12 /γ 11 , g 23 /γ 22 and g 34 /γ 33 are showed by black, red and blue lines, respectively. The three lines in Fig. 2(b) correspond to the decay rates of the first, second and third molecules (γ 11 , γ 22 and γ 33 ). As for Fig. 2(c), the three lines correspond to the nearest-neighbor incoherent couplings γ 12 /γ 11 , γ 23 /γ 22 and γ 34 /γ 33 , respectively.
It is seen clearly that, there exist multiple resonances in three panels of Fig. 2. These resonances can affect the dynanics of the excitation of molecules. To clarify the properties of the resonances, we focus on some special cases that correspond to different coupling conditions. These special frequencies are marked by ω 1 ∼ ω 6 in Fig. 2, in which ω 1 ∼ ω 4 are the cases that the system possesses the large coherent couplings and small incoherent couplings. For example, when ω = 3.25 eV (marked by ω 1 ), one of the nearest-neighbor coherent couplings (g 12 /γ 11 ) can reach to −3, however the other two are small. And when ω = 3.72 eV and 3.78 eV (marked by ω 2 and ω 3 ), the nearest-neighbor coherent couplings (g 12 /γ 11 , g 23 /γ 22 and g 34 /γ 33 ) can reach to 2 simultaneously. When ω = 4.64 eV (marked by ω 4 ), the three coherent couplings are 0.8 simultaneously. In contrast, ω 5 and ω 6 are the cases that coherent couplings are very small and incoherent couplingss are at resonance peaks.
To further clarify the properties of the resonances marked in Fig. 2, in Fig. 3 we plot the corresponding electric field patterns of the nanoparticle system for the normal incidence, in which ω 1 ∼ ω 6 correspond to the six frequencies marked in Fig. 2. It is shown that when the couplings (coherent or incoherent) are in maximums (ω = ω 1 ∼ ω 3 and ω 5 ), the electric field patterns are similar, and the fields in hot spots are much larger than other region. This stems from the coupling resonances among the nanoparticle chain as has been discussed in Refs. [28,29]. In comparison, we also plot the field pattern under the single scattering frequency (ω = 5.25 eV as marked in Fig. 2(b)), which is showed in the last panel of Fig. 3. In this panel, the intensities of the fields in the gaps are about the same with that around the particles. That means the fields in the molecules cannot be confined. When ω = ω 4 and ω 6 , due to the weak coupling resonances in the system, the fields in hot spots are weaker compared to the cases of ω 1 ∼ ω 3 and ω 5 due to the weak coupling resonances in the system, which are similar to the case of single scattering resonance. The electric field patterns in Fig. 3 show that strong resonance couplings only exist in the coupling resonance region between the nanoparticles, which confines the large electric field within the positions of hot spots. Such strong confinement of fields results in the strong couplings between the molecules that located at the hot spots. In addition, comparing Fig. 2(c) with Fig. 2(b) we find that, incoherent couplings between the molecules correspond very well to the resonance peaks, which are resonant couplings; while coherent couplings have a shift compared to resonance peaks, which are off-resonant couplings. [28] Various resonance couplings can lead to different populations of the excitation of molecules. The time evolution of the populations of six molecules (denoted by P 1 ∼ P 6 ) are shown in Fig. 4 with six different frequen- cies (marked by ω 1 ∼ ω 6 as shown in Figs. 2 and 3). It has to be pointed out that, in the calculations of populations (Fig. 4), all the non-nearest-neighbor couplings among molecules have also been included. The results of the non-nearest-neighbor coherent and incoherent coupling strengths between the molecules have been shown in Supplementary Material. In Fig. 4, the dashed black line denoted by P0 is the time evolution of population in all molecules. As addressed in Fig. 4, when ω = 3.25 eV (ω 1 ), the excitation can reach every molecule even though P 4 and P 5 are very small. At this frequency, only one nearest-neighbor coupling g 34 /γ 33 can reach to −3, the other nearest-neighbor couplings g 12 /γ 11 and g 23 /γ 22 are very small. The imbalance of the coherent couplings leads to small populations at the 4th and 5th molecules. Even though, the time of the energy residing in molecules is larger than 1000 fs in this case (dashed black line in the first panel). In comparison, when the transition frequency is taken as ω = 3.72 eV (ω 2 ) or 3.78 eV (ω 3 ), all the nearest-neighbor coherent couplings showed in Fig. 2 can reach 2 simultaneously, and the nearest-neighbor incoherent couplings and decay rates are small enough. In these cases, the molecules inserted in the gap of the nanoparticle cluster can be treated as an ideal 1D chain with same nearest-neighbor couplings. In Fig. 4, the populations on all molecules at the transition frequency ω = 3.72 eV (ω 2 ) or 3.78 eV (ω 3 ). Thus under these frequencies, all molecules in the 1D chain can be excited, and P 0 is similar to the first panel.
When the transition frequency is taken as ω = 4.64 eV (ω 4 ), the coherent and incoherent couplings are all small (see Fig. 2). In this case the excitation cannot transport (see the fourth panel in Fig. 4) any more, and the percentage of the energy residing in molecules decreases to zero quickly, thus the quantum walk cannot be constructed. When ω = 3.6 eV (ω 5 ), according to the Figs. 2, 4 and 5, we find that the nearest-neighbor and non-nearestneighbor incoherent couplings are in their peak values, and nearest-neighbor coherent couplings are almost zero. In this case, although the population in all molecules lasts a long time similar to the first three cases, the excitation can only reach to the third molecule and cannot transport to the other three molecules. When ω = 4.48 eV (ω 6 ), the nearest-neighbor incoherent couplings are also in their peak values but much smaller than the case of ω 5 , and the nearest-neighbor coherent couplings are also very small. In this case, the excitation almost cannot transport any more, and the energy residing in the molecules decreases to zero quickly. Thus, when the transition frequency is taken as ω = ω 4 , ω 5 and ω 6 , due to very small nearest-neighbor coherent couplings, not all molecules inserted in the gap of nanoparticles can be excited. In these three cases, an ideal 1D chain with same nearest-neighbor couplings cannot be constructed, and the dynamics of the excitations are far different from the dynamics of quantum walk.
IV. THE QUANTUM SPEEDUP IN PLASMONIC HOT SPOT SYSTEMS
In this section, we explore under what conditions our hot spot system holds the quantum speedup as in the standard quantum walk. A natural way to evaluate the quantum speedup is the exponent b, which relates to the spreading of the excitation in the 1D chain by the power law (∆x) 2 ∝ t b . Here, b = 2 corresponds to the ideal quantum speedup in the quantum walk, b = 1 corresponds to the diffusive transport of classical random walks and b < 1 corresponds to the sub-diffusive transport [24]. The long-lived quantum speedup is one of the most important property in the ideal quantum walk, which makes it exhibit advantages over the classical walk in information processing. In one realistic system, the duration of b > 1 determines the quality of the quantum speedup in quantum walks. In Fig. 5(a), we present the mean squared displacement of excitation (∆x) 2 as a function of time, in which the parameters of the system are the same to those in Fig. 2. In Fig. 5(b), we plot the the fitted power b as a function of time at four frequencies. In Fig. 5(a) and (b), the frequencies ω 1 ∼ ω 4 correspond to the cases that nearest-neighbor coherent couplings are in their peak values. According to Figs. 2 and 4, when the frequency is taken as ω = ω 2 or ω 3 , the nearest-neighbor coherent couplings between any two adjacent molecules in 1D chain are strong, and the other couplings can be neglected. In these two cases, these molecules along 1D chain have the same structure as that in the continuous-time quantum walk, and populations at all molecules can reach the nearly same value in sequence. Our calculations in Fig. 6(b) shows that the duration of quantum speedup under ω 2 and ω 3 can reach to about 400 ∼ 500 fs, which is several times larger than those in photosynthetic lightharvesting systems (about 70 fs) [24]. At these frequencies, the molecules inserted in the gap of 1D nanoparticles form a realistic 1D quantum walk system with long-lived quantum speedup.
In the following, we explore how the separation distance between adjacent nanoparticles affects the excitation in the systems. When the separation distance changes, the competition between couplings and dissipations in the system could largely affect the duration of quantum speedup. We investigate the two cases that d = 1 nm and d = 4 nm, respectively. In both cases, the radii of nanoparticles is same, R = 12 nm.
The solid lines in Fig. 6(a) and (b) show that the time evolution of the populations of six molecules in different frequencies for d = 1 nm and 4 nm, respectively. And the dashed black lines in Fig. 6 denote the time evolution of population in all molecules. It is clear that the energy dissipates much more quickly when d=1 nm than the case that d = 4 nm due to the much larger dissipation. The calculations of nearest-neighbor coupling strengths and non-nearest-neighbor coupling strengths are addressed in Supplementary Material. Similar to Fig. 4 (the populations for d = 2 nm), the frequencies ω 1 ∼ ω 4 in Fig. 6 correspond to the cases that nearest-neighbor coherent coupling strengths are in their peak values and incoherent couplings are almost zero. We first focus on the case with d = 1 nm [ Fig. 6(a)]. When the transition frequency is ω 2 or ω 3 , nearest-neighbor coherent coupling strengths can reach 1.8 simultaneously, which are a little smaller than the case with d = 2 nm [there the nearestneighbor coherent coupling strengths reach 2.0 in Fig. 2(a)]. While, due to large dissipations for the case with d = 1 nm, populations on all molecules are dissipated in a very short time. When the transition frequency is taken as ω 1 or ω 4 , coherent coupling strengths between the adjacent molecules are much smaller than the cases of ω 2 and ω 3 . When the frequency is ω 5 or ω 6 , all nearestneighbor incoherent coupling strengths between adjacent molecules are in their peak values and coherent coupling strengths between them are almost zero. Under those transition frequencies (ω 1 , ω 4 , ω 5 or ω 6 ), due to very small nearest-neighbor coherent coupling strengths, the populations on molecules which are far away from the initially excited molecule are almost zero. In those cases, the dynamics of excitation among molecules along 1D chain is far different from that of quantum walk.
When the separation distance is d = 4 nm (the asso-ciated coupling strengths can also be found in Supplementary Material), nearest-neighbor coherent coupling strengths between adjacent molecules only reach about 1.1 simultaneously, that is much smaller than the nearestneighbor coherent coupling strengths with d = 1 nm and d = 2 nm. Notice that in the case with d = 4 nm, the dissipation strengths (indicated in Supplementary Material) are one order of magnitude smaller than those with d = 2 nm. Compared with the case with d = 1 nm or 2 nm, when d = 4 nm, the smaller dissipations in the molecules make populations survive in the system with longer time [as shown in Fig. 6(b)]. However, when d = 4 nm, due to small coupling strengths between adjacent molecules, the excitation is nearly localized at the initial molecule under most frequencies, and the quantum walk dynamics is unable to be implemented. Comparing Fig. 6(a) with Fig. 4 (the case with d = 2 nm), we find that the surviving time of populations is shorten from about 1000 fs to 300 fs when the separation distance is changed from d = 2 nm to 1 nm. Figure 8 shows the mean squared displacement of excitation (∆x) 2 (a) and the fitted power b (b) as a function of time when d = 1 nm (left column) and d = 4 nm (right column). Where the cases are presented with blue (ω = ω 1 ), red (ω = ω 2 ), black (ω = ω 3 ) and green (ω = ω 4 ) lines respectively. When the separation distance d = 1 nm, we find that the duration of quantum speedup is no longer than 150 fs [see the case with the transition frequency chosen as ω 2 or ω 3 in Fig. 7(b)], which is much smaller than the duration of quantum speedup with d = 2 nm [about 500 fs in Fig. 4(b)]. While when d = 4 nm, the duration of quantum speedup can reach about 350 fs under ω = ω 1 . Such duration of quantum speedup is comparable to that with d = 2 nm. However, when the transition frequency satisfies ω = ω 1 with d = 4 nm, we find that all molecules except the initially occupied molecule are nearly not excited [first panel of Fig. 6(b)]. Such small excitation at molecules cannot be seen as one efficient quantum walk.
V. DISCUSSIONS
In our study above, we have realized the long-live quantum speedup within the plasmonic hot spot system. The formation of quantum walk dynamics in this system can last hundreds of femtoseconds, which is one order magnitude larger than in the photosynthetic lightharvesting system. Based on discussions above, we find that among molecules inserted in our 1D nanoparticle chain, the competition between coherent/incoherent couplings and dissipations can largely influence the excitation of molecules. Such influence on the excitation leads to different quantum speedup in our system and affect the formation of the quantum walk within our scheme. From the view of quantum walk, the formation of standard quantum walk requires the same nearest-neighbor coupling strengths between any adjacent two sites. These same coupling strengths induce the walker to jump to the nearest sites with same probabilities and form the strong interference between the wave functions at every site. The probability in the central sites diminishes due to destructive interference and much more probability appears in the remote sites due to constructive interference, which generates the ideal quantum speedup in the standard quantum walk. When studying the quantum speedup in the plasmonic hot spot system, we find that the emergence of long-lived quantum speedup in the system indicates nearly same nearest-neighbor coupling strengths and very small non-nearest-neighbor coupling strengths between molecules [ω = 3.72 eV (ω 2 ) or 3.78 eV (ω 3 ) in Fig. 4].
In the case with the radii of nanoparticle R = 12 nm, the separation distance d = 2 nm is an optimal solution for the ideal quantum walk with long-lived quantum speedup. For these parameters in the plasmonic hot spot system, we can obtain relatively same larger nearest-neighbor coupling strengths between any two adjacent molecules, and smaller non-nearest-neighbor coupling strengths between molecules. Similarly, when the radii of the nanoparticle is changed, the competition between the coherent/incoherent couplings and dissipations still exists in our scheme, and the optimal separation distance to construct quantum walk with long-lived quantum speedup could be changed. In Supplementary Material, we present coherent/incoherent coupling strengths and populations for the case with R = 10 nm and d = 4 nm. Comparing to the case with R = 12 nm and d = 4 nm, we find that the coherent coupling strengths decrease about 10% when the radii of spheres decreases to R = 10 nm. This directly leads to the decrease of populations on molecule. Therefore, when we decrease the radii of the spheres, we need to decrease the separation distance to ensure the large enough nearest-neighbor coherent couplings between molecules. On the contrary, when we increase the radii of the spheres in our scheme, dissipations in these molecules and coherent couplings between adjacent molecules increase simultaneously. In this case, the optimal separation distance also increases to implement a long-lived quantum speedup.
We also investigate the quantum speedup properties with different numbers of molecules. We fix the parameters of the hot spot system with the radii of sphere R = 12 nm, and the distance between two adjacent spheres is d = 2 nm. The numbers of molecules are chosen as N = 5 and N = 7, respectively. The calculated results have been presented in Supplementary Material. We find that the quantum walk properties in these systems are similar to the case with N = 6, and simultaneously large nearestneighbor coherent couplings lead to long-lived quantum speedup. In our discussion, we obtain that the optimal duration of quantum speedup is 420 fs with the number of molecules N = 5, and 580 fs with N = 7. Considering that the duration of quantum speedup is 500 fs in our main text with the number of molecules N = 6, for the plasmonic hot spot system with different numbers of molecules, we can realize the long-live quantum speedup.
In addition, when the radii of spheres decreases to 10 nm and separation distances are smaller than 1 nm, the nonlocal effect cannot be ignored [35][36][37][38]. In these cases, the plasmon-enhanced fluorescence could be decreased by a small value [39], namely dissipation strengths of molecules could reduce a little when the radii of spheres and separation distances between adjacent molecules are not too small. At the same time, previous researches have shown that coherent coupling strengths in a trimer system (R = 10 nm and d = 1 nm) only present small blue shifts due to the nonlocal effect [28], and values of coupling strengths keep the same. In our plasmonic hot spot system, we also investigate the nonlocal effect in Supplementary Material. We find that the nearest-neighbor couplings and dissipations only have blue shifts and have no obvious change in values. We can obtain the similar quantum speedup behavior with change of the molecular frequencies. It means that the nonlocal effect affects dissipations and coherent couplings among molecules little, and will not influence quantum speedup and the formation of quantum walk in our discussion.
VI. SUMMARY
In this work, we investigate long-lived quantum speedup in molecules inserted in the gaps of 1D nanoparticle chain. Both the coherent and incoherent couplings among molecules have been included for the dynamics of the excitation of molecules, and the roles of nearest-neighbor and non-nearest-neighbor couplings on the quantum speedup have been analyzed. We have found that at some special frequencies, due to the large coupling resonance, the large nearest-neighbor coherent couplings between any two adjacent molecules and small dissipation of each molecule are obtained simultaneously. In this case, the dynamics of excitation among molecules in our scheme is similar to that of the continuous-time quantum walk, and the duration of quantum speedup in our scheme is several times larger than the duration in photosynthetic light-harvesting systems. Although in our study, the quantum speedup within 1D nanoparticle chain is addressed as the dynamics in 1D quantum walk on a chain, our scheme can also be extended to the case of quantum walk on a circle. Given that our discussion takes into the real experimental situation account (e.g., dissipation from the nanoparticles, radiation into free space, and so on), our proposal based on the plasmonic hot spot system presents a new scheme to have long-lived quantum speedup and provides a new platform to realize the continuous-time quantum walk under laboratory conditions. Here we present the detailed calculation method of the coherent and incoherent terms (g ij and γ ij ) as shown in Eqs. (4) and (5). In Eqs. (4) and (5), G ↔ s ( r i , r j , ω) and G ↔ ( r i , r j , ω) represent the scattered and total Green's tensors of the nanoparticle (NP) chain, respectively. The vectors r i and r j are the positions of the ith and jth molecules, respectively. The Green's tensor G ↔ ( r i , r j , ω) has the meaning of the total electric field in the position of r i caused by a unit dipole at the location of r j in the presence of NP chain, similar to G ↔ s ( r i , r j , ω). In this work, the Green's tensors of the NP system are all calculated with the multiple scattering T-matrix technique [40][41][42]. The total and scattered Green's tensors have the following relation G ↔ ( r i , r j , ω) = G ↔ s ( r i , r j , ω) + G ↔ vac ( r i , r j , ω), where G ↔ vac ( r i , r j , ω) is the Green's tensor in the vacuum. In the following, we introduce the multiple scattering T-matrix method to calculate the Green's tensors mentioned above.
The incident field E inc and the scattered field of the ith NP E i s can be expanded with the vector spherical functions (VSFs) [40][41][42] where M 1 ν , N 1 ν , M 3 ν and N 3 ν are the well-known VSFs, and r i is the position vector in the coordinate of the ith nanoparticle. R i is the radius of the smallest sphere circumscribing the ith object, in this work R i is equal to the radii of the NPs.
The Green's tensor in the vacuum G ↔ vac ( r i , r j , ω) can be readily known in Ref. [43]. In Eqs. (A2) and (A3), the coefficients a i ν , b i ν , f i ν and g i ν can be easily solved as soon as the form of the incident wave is given. The subscript ν stands for (m, n) which are the indices of spherical where Expanding Eq. (A8) to the same form of Eq. (A2), from Eqs. (A3-A7) we can obtain the external scattered field E κ d = N i=1 E i s caused by the dipole source. Then, the scattered Green's tensor G ↔ s ( r i , r j , ω) can be obtained with where n i and n j are the unit vectors of the field and source dipole moments, respectively. E κ d ( r i )| rj represents the field in the position of r = r i induced by the source dipole that locates at r = r j in the presence of NP chain.
For the decay rate of the ith molecule γ ii , the Green's tensor G ↔ ( r i , r j , ω) represents the field generated by the dipole in its own position in the presence of NP chain, which can be calculated with where E κ d ( r i )| ri represents the field induced by the source dipole that locates at in its own position in the presence of NP chain. | 8,320.2 | 2018-03-09T00:00:00.000 | [
"Physics"
] |
Models for the recent evolution of protocadherin gene clusters
The clustered protocadherins (Pcdhs) are single-pass transmembrane proteins that constitute a subfamily within the cadherin superfamily. In mammals, they are arranged in three consecutive clusters named α, β, and γ. These proteins are expressed in the nervous system and are targeted to mature synapses. Interestingly, different neurons express different subsets of isoforms; however, little is known about the functions and expression of the clustered Pcdhs. Previous phylogenetic analyses that compared rodent and human clusters postulated the recent occurrence of gene duplication events. Using standard phylogenetic methods, I confirmed the prior observations, and I show that duplications are likely to occur through unequal crossing-over events between two, and sometimes three, different Pcdh genes. The results are consistent with the fact that these genes undergo gene conversion. Recombination events between different clustered Pcdh genes appear to underlie concerted evolution through gene conversion and gene duplications through unequal crossing-over. In this work, I provided evidence that the unit of duplication of these genes in both the mouse and the human and within each cluster is the same. The unit of duplication includes the extracellular domain-coding sequence of an isoform and its promoter along with the cytoplasmic domain-coding region of the immediately upstream isoform in the cluster. BIOCELL 2008, 32(1): 9-26 ISSN 0327 9545 PRINTED IN ARGENTINA
Introduction
Cadherins are a superfamily of cell adhesion molecules that can be divided into various families, one of which is the protocadherin (Pcdh) family.Some of the Pcdh genes are grouped in three clusters named α, β, and γ (Wu and Maniatis, 1999).The human (Homo sapiens) and the mouse (Mus musculus) genomes have one of each type (Wu et al., 2001).The α and γ clusters consist of a variable region, where unusually large ex-ons (VRE) with the same orientation are arranged in tandem and followed by a constant region of three additional exons.These last exons encode an invariable Cterminal cytoplasmic region.Each VRE encodes a signal peptide, six cadherin ectodomains (ECs), a transmembrane segment, and the remaining region of the cytoplasmic domain.Every VRE has its own promoter, and the mature mRNA is synthesized via cissplicing (Tasic et al., 2002;Wang et al., 2002;Fig. 1).The β clusters are similarly organized except that they lack the three common exons.Thus, every β VRE is equivalent to a gene.
The gene order of the mouse and the human α and γ clusters is conserved (Wu et al., 2001;Wu, 2005).
Three groups of Pcdhs can be defined within the γ clusters on the basis of sequence similarity: γa, γb, and γc (Wu and Maniatis, 1999).The functional implications of the divisions are not well understood.In the mouse and the human, the γc VREs lie immediately upstream of the three common exons of the cluster and are extremely similar to the two VREs immediately upstream of the α cluster three common exons (Wu et al., 2001).Interestingly, in the γa and γb groups, proximal VREs usually have very similar sequences; however, the evolutionary relationships, which are well supported, do not seem to be the result of recent duplications.
The human β cluster contains 16 genes (PCDHB1 to PCDHB16), whereas the mouse β cluster has 22 (Pcdhb1 to Pcdhb22) (Wu et al., 2001;Vanhalst et al., 2001).The difference in the number of genes is due to numerous duplication events following the co-divergence of the lineages, combined with the loss of some genes in humans (Vanhalst et al., 2001), but the duplication mechanisms are still unknown.Phylogenetic analyses have shown that the N-terminal ectodomain of some β Pcdhs in humans are very similar (Noonan et al., 2004;Miki et al., 2005).For example, PCDHB8 and PCDHB13 have very similar third ECs.Thus, it appears that their genes emerged from a recent duplication event.
Genes in the Pcdh clusters undergo gene conversion (Noonan et al., 2004), but homogenization of the genetic information is restricted only to specific regions.Generally, most of the cytoplasmic domains and the sixth EC (EC6) sequences are very similar among the Pcdhs in each cluster.Consequently, much of the phylogenetic information is found in the N-terminal ECs.This variability in the ECs may provide specificity in cell-cell adhesion and recognition, probably through homophilic interactions.
The clustered Pcdhs are highly expressed in the nervous system and are targeted to mature synapses and intracellular compartments (Phillips et al., 2003).The complete deletion of the γ Pcdh cluster in mice does not impair neurogenesis, but the cluster is required for the survival of some neuronal populations such as spinal interneurons (Wang et al., 2002).Interestingly, the different Pcdh isoforms are expressed in a punctate pattern throughout the brain (Komura et al., 1998;Frank et al., 2005).The γa and γb Pcdhs are expressed in a monoallelic and combinatorial fashion, while the expression of the γc Pcdhs is biallelic (Kaneko et al., 2006).Similarly, the α Pcdhs gene regulation is monoallelic except for the γc-like isoforms (Kaneko et al., 2006;Esumi et al., 2005).
To gain further insight into the regulation of these genes, I analyzed the recent evolution of the mouse and the human clusters.I showed that the clustered Pcdh gene duplications result from unequal crossing-over events between exons.This is consistent with the fact that the clustered genes are subject to gene conversion because both phenomena, gene duplication and gene conversion, seem to be consequences of recombination between different VREs.Moreover, I showed that the unit of duplication of these genes includes the EC-coding sequences of a VRE and its promoter along with a part of the cytoplasmic domain-coding region of the VRE immediately upstream.
Methods
All of the sequences were downloaded from GenBank (http://www.ncbi.nlm.nih.gov/)(AF152501, AF217742, AF217744-AF217749, AF217750, AF217752-AF217757, AF326296, AY013771-AY013784, AY013786-AY013791, NM_031857, NM_018900-MN_018911, AY573971-AY573983, NM_018912-NM_018921, NM_032088, NM_003735, NM_033584-NM_033595, NM_018922-NM_018927, NM_003736, NM_033574-NM_033580, NG_000012, NG_000016, and NG_000017).A pre-alignment of the whole sequences was performed to establish the limits of the regions and was followed by an alignment of the selected sequences by use of ClustalX (Thompson et al., 1997) under default parameters.I considered that each DCR (Divergent Cytoplasmic Region) coding sequence starts with the last three consecutive conserved nucleotides and ends with the VRE.For Pcdhb3 and PCDHB16, the alternative sequences were used (see below).To establish the EC limits, I considered that each domain ends after the VXVXVDXNDNAPXF-conserved motif.The unrooted trees were obtained with ClustalX (Thompson et al., 1997) by use of the neighbor-joining method under default parameters and edited with the TREEVIEW program (Page, 1996).To estimate branch supports, 1000 bootstrap replicates were performed.The nucleotide sequences were translated in silico to obtain the Pcdh amino acid sequences.Aligned sequences were shaded with the BoxShade program (http:/ /www.ch.embnet.org/software/BOX_form.html).
Description of the clustered Pcdhs cytoplasmic domain
Because the clustered Pcdhs genes undergo gene conversion due to putative recombination events between different VREs, I examined whether unequal crossing-over between different genes could explain the emergence of the new Pcdhs in the clusters.In this case, the new Pcdhs should have N-terminal ECs similar to one isoform and a cytoplasmic domain similar to another (Fig. 2).A visual inspection of the aligned human γa Pcdhs, however, shows that although the N-terminal regions are quite variable, the C-terminal regions are extensively conserved (Fig. 3).The same is true for the other clusters (Miki et al., 2005).Thus, I limited the analysis of the C-terminal region to the last approximately 20 amino acids, which do not seem be affected by gene conversion (Fig. 4).I refer to this region as the divergent cytoplasmic region (DCR).
Analysis of the β cluster VREs
Since the divergence of the human and the mouse lineages, the number of β Pcdhs has increased in both species.For example, PCDHB11 and PCDHB12 ECs are very similar and are orthologs of Pcdhb19 ECs (Noonan et al., 2004;Miki et al., 2005; Additional file 1).Thus, it can be assumed that their coding sequences originated from a recent duplication event.Because their genes are contiguous in the chromosome, the duplication mechanism could have been a single unequal crossing-over event between adjacent genes, as shown in Figures 2A and 5A.According to this model, the DCR-coding sequences of PCDHB10 and PCDHB11 emerged from the same duplication event.A visual inspection of the protein sequences confirmed that these Pcdh DCRs are very similar (Fig. 6A).The same mechanism can explain the origin of PCDHB9, because it has ECs similar to those of PCDHB10 (Noonan et al., 2004;Miki et al., 2005; Additional file 1) and a DCR similar to that of the immediately downstream gene (Fig. 6A).A phylogenetic analysis of the DCR-coding regions confirms these observations (Fig. 6B).
As shown in Figure 6A, PCDHB7 and PCDHB12 also have similar DCRs.Because PCDHB8 and PCDHB13 have similar ECs (Noonan et al., 2004;Miki FIGURE 7. Analysis of the human β cluster intergenic regions.Percentage identity plot obtained comparing the genomic sequence between PCDHB6 and PCDHB16 with that between PCDHB11 and PCDHB14.The horizontal axis indicates the nucleotide position of the genomic sequence between PCDHB11 and PCDHB14, and the vertical axis shows the percentage identity between the two sequences.Note that PCDHB16 is downstream of PCDHB8 and not downstream of PCDHB15.CpG/GpC>0.60,CpG island where the observed to expected CpG/GpC ratio lies between 0.6 and 0.75; CpG/GpC>0.75,CpG island where the ratio exceeds 0.75.
ADDITIONAL FILE 2. Additional analysis of the human β cluster intergenic regions.
Percentage identity plots obtained comparing the intergenic regions between (A) PCDHB9 and PCDHB10, (B) PCDHB11 and PCDHB12, and (C) Pcdhb19 and Pcdhb20, with the intergenic regions between the genes indicated on the left.The horizontal axis indicates the nucleotide position of the sequences on the top, and the vertical axis shows the percentage identity between the intergenic regions between the genes indicated on the left with that on the top.CpG/GpC=0.60,CpG island where the observed to the expected CpG/GpC ratio lies between 0.6 and 0.75; CpG/GpC=0.75,CpG island where the ratio exceeds 0.75.and 5A.If the model is correct, the genomic sequence between PCDHB7 and PCDHB8 should be very similar to the sequence between PCDHB12 and PCDHB13.In fact, a comparison of both sequences with the PipMaker program (Schwartz et al., 2000) reveals a striking similarity (Fig. 7).Notably, this does not occur with the adjacent intergenic regions.The similarity between the intergenic regions downstream of PCDHB16 and PCDHB9 (Additional file 2A) and the similarity between the intergenic regions downstream of PCDHB10 and PCDHB11 (Additional file 2B) support the two first steps of the model proposed in Figure 5A.Next, I investigated whether the proposed mechanisms are specific to the human by analyzing the evolution of the mouse β cluster.According to the models described in Figures 2 and 5B, the ECs-coding sequence of a particular VRE is always linked to the DCR-coding sequence of the gene immediately upstream.In the mouse cluster, Pcdhb4, Pcdhb6, Pcdhb8, Pcdhb10, Pcdhb11, and Pcdhb12 have very similar ECs (Additional file 1).Thus, their coding sequences probably originate from the same ancestral gene.When the DCRs of this cluster are compared, Pcdhb3, Pcdhb5, Pcdhb7, Pcdhb9, Pcdhb10, and Pcdhb11 share a very similar footprint (Fig. 8A).This observation is consistent with the proposed models.The finding was further confirmed by a phylogenetic analysis of the DCR-coding region (Fig. 8B).
To gain further insight into the evolution of this cluster, I analyzed the intergenic regions.A comparative phylogenetic analysis of the human and the mouse ectodomains reveals that the PCDHB5 ECs are orthologs of those in Pcdhb4, Pcdhb6, Pcdhb8, Pcdhb10, Pcdhb11, and Pcdhb12 and that PCDHB4 ECs are orthologs to those of Pcdhb5, Pcdhb7, and Pcdhb9 (Vanhalst et al., 2001; Additional file 1).According to the model proposed in Figure 5B, the intergenic region separating PCDHB4 and PCDHB5 is orthologous to the intergenic regions between Pcdhb3 and Pcdhb4, Pcdhb5 and Pcdhb6, Pcdhb7 and Pcdhb8, Pcdhb9 and Pcdhb10, Pcdhb10 and Pcdhb11, and Pcdhb11 and Pcdhb12.Also, the intergenic region separating PCDHB3 and PCDHB4 is orthologous to the intergenic regions be-tween Pcdhb4 and Pcdhb5, Pcdhb6 and Pcdhb7, and Pcdhb8 and Pcdhb9.When the sequences are compared using the PipMaker program (Schwartz et al., 2000), only those that appear to be orthologous show a greater than 50% similarity (Fig. 9).This supports the idea that the unit of duplication for these genes consists of the DCR-coding region of a Pcdh gene along with the promoter and EC-coding region of the gene immediately downstream.The similarity between the intergenic regions downstream of Pcdhb10 and Pcdhb11 (Additional file 3C) supports the last step in the model shown in Figure 5B.The similarity between the intergenic regions downstream of Pcdhb5 and Pcdhb7 (Additional file 3A), together with the similarity between the intergenic regions downstream of Pcdhb6 and Pcdhb8 (Additional file 3B), supports the fourth step of the model.The first three steps of the model could not be validated by ana- lyzing the non-coding sequences due to the lack of phylogenetic signal.Thus, other possible mechanisms cannot be formally excluded.Nevertheless, gene duplication through unequal crossing-over does not seem to be specific to a particular species.
Analysis of the α cluster VREs
Subsequently, I investigated whether these mechanisms are common to the different clusters.Thus, I conducted a similar analysis with the α Pcdhs.Although duplications in these clusters did not occur after the separation of the rodent and the human lineages, it has been suggested that PCDHA6 and PCDHA7, and PCDHA8 and PCDHA9 are duplications of the same gene pair, because PCDHA6 is very similar to PCDHA8, and PCDHA7 to PCDHA9 (Wu et al., 2001).I proposed an alternative model to explain the observations, namely, a recombination event between two VREs, as shown in Figures 2C and 5C.This model is supported by the short DCR shared by PCDHA5 and PCDHA7 (Fig. 10A).The association was validated by a phylogenetic analysis of the DCR-coding sequence (Fig. 10B).Similar results were obtained analyzing the rat α cluster (Additional file 4).Also, the similarity between the introns downstream of PCDHA5 and PCDHA7 (Fig. 11A) and between the introns downstream of PCDHA6 and PCDHA8 (Fig. 11B) support the model.Consequently, the mechanisms do not appear to be species-or cluster-specific.EVOLUTION OF PROTOCADHERIN GENE CLUSTERS
Analysis of the γ cluster VREs
First, I analyzed the γa group VREs.There are 12 γa Pcdhs (Pcdhgas) in humans (PCDHGA1 to PCDHGA12) and in mice (Pcdhga1 to Pcdhga12).PCDHGA2 and PCDHGA3 as well as PCDHGA8 and PCDHGA9 are very similar (Wu and Maniatis, 1999;Wu et al., 2001).In this group, the third and fourth ECs have the highest paralogous diversity at synonymous sites and are the least affected by gene conversion (Noonan et al., 2004).Thus, I tested if the evolutionary relationships between the above-mentioned Pcdhs were retained in these ECs.The results of the neighbor-joining method indicate that PCDHGA2 and PCDHGA3 are grouped in the evolutionary tree, and the bootstrap support of the corresponding branch is strong (approximately 93%).PCDHGA8 and PCDHGA9 are also grouped with the highest bootstrap value for the resulting branch.Consequently, the evolutionary relationships are significant.All other associations are considerably weaker (Additional file 5A).For example, the strongest of them among PCDHGA8, PCDHGA9, and PCDHGA10 has a bootstrap support of approximately 73%.A previous report showed that PCDHGAn and Pcdhgan are very similar (V n ⎣ N; n < 12) (Wu et al., 2001).Thus, essentially the same results were ob-ADDITIONAL FILE 6. Analysis of the mouse γa group DCRs.(A) Alignment of the mouse γa group DCRs.On the left, pairs of proteins with very similar third and fourth ECs are indicated.On the right, pairs of proteins that share genetic footprints are indicated.The sequences follow the cluster order as shown in Figure 6.The first constant amino acids were included.Genetic footprints were determined manually and are boxed.(B) Unrooted evolutionary tree of the genomic sequences that encode the mouse γa group DCRs obtained with the neighbor-joining method.Bootstrap values over 500 are shown.DCR-coding sequences associated by genetic footprints are indicated with parentheses.The scale bar represents a phylogenetic distance of 0.1.
tained when the analysis was repeated in mice (Additional file 5B).
To determine if the models presented in Figure 2 can explain the evolution of this group of Pcdhs, I analyzed the DCR of these genes.Again, the expected results were obtained.PCDHGA1 and PCDHGA2 have similar DCRs, and the same occurs with PCDHGA7 and PCDHGA8 (Fig. 12).Also, similar results were obtained for the mouse cluster (Additional file 6).
An analysis of the γb group ECs showed that PCDHGB4 and PCDHGB5 have very similar ectodomains and that the same is true for the PCDHGB6 and PCDHGB7 ECs (Additional f ile 7A).The PCDHGB6 cytoplasmic domain, however, is more similar to those of PCDHGB4 and PCDHGB5 than to that of PCDHGB7 (Additional files 8A and 9).The same is true for the murine γb group (Additional files 7B and 8B).These observations can be explained by the model shown in Figure 5D.The intronic phylogenetic signal of the cluster, although consistent with the proposed models, is low (Additional file 10).Therefore, the simple model cannot be further validated, and other mechanisms for the evolution of the cluster cannot be formally excluded.
Discussion
The Pcdh clusters, together with other clusters in the mammalian genomes, undergo concerted evolution.Two clear indicators of this process are the reduced synonymous diversity among paralogs and the increase in the CG content of the third codon (Galtier et al., 2001).Usually, almost the entire length of each Pcdh cytoplasmic domain is subject to gene conversion (Noonan et al., 2004) so that divergence among paralogs is reduced to narrow regions (Figs. 3 and 4).Something similar appears to happen in the α cluster first EC (EC1) sequences in humans, where just a few non-conserved amino acids are surrounded by more than 30 conserved amino acids to each side (see Miki et al., 2005).This does not seem to occur by chance, because the homolo-gous amino acids in mice are conserved and overlap an RGD motif located in a loop homologous to the quasiβ helix conformation of N-cadherin (Shapiro et al., 1995;Morishita et al., 2006).Nevertheless, the divergent sequences can still provide important information about the function and evolution of the Pcdhs.
The various clusters are subject to different evolutionary pressures.Two of the clusters, α and β, appear to have undergone duplications quite recently, and it is not possible to clearly resolve their complete phylogeny due to a loss of signal with time.Consequently, I was only able to analyze the genes that emerged recently.In the human β cluster, the DCR coding sequences were clearly duplicated together along with their immediately downstream Pcdh gene.The γ clusters seem to be under the influence of specific evolutionary forces that physi- cally constrained them because there is no evidence of recent duplication events in the analyzed species.This indicates that these clusters operate as supra-organizations above the gene level.
Recombination between Pcdh genes is likely to be a frequent event because these genes are subject to gene conversion (Noonan et al., 2004;Miki et al., 2005).Therefore, recombination between different genes probably underlies both gene duplication and concerted evolution.On the basis of the evidence presented and the mechanisms proposed, I suggest possible scenarios for the evolution of the different clusters in Figure 5.In this study, I provided evidence that shows that the unit of evolution of the Pcdh genes consists of the EC-coding region and the promoter of a certain VRE along with the DCR-coding sequence of the VRE immediately upstream.This fact appears to be the rule and seems to apply even to the particular duplication that generated PCDHB8 and PCDHB13.Considering that PCDHB14 and Pcdhb20 ECs are orthologs (Additional file 1), the fact that PCDHB13 and Pcdhb19 share the PGKEI footprint in their DCRs is consistent with the proposed model (see Figs. 6A, 8A, and Additional file 2C).Note that, because most of the phylogenetic information is in the 5'-region of the VREs, the orthology of the new recombinant isoforms was usually based on just the most informative part of the sequences.
Classic cadherins are cell adhesion molecules that participate in different signaling pathways.For instance, N-cadherin is a single-pass transmembrane protein that is expressed in the nervous system and regulates dendritic arborization in hippocampal neurons through its interaction with β-catenin (Junghans et al., 2005).Moreover N-cadherin can be cleaved by metalloproteinases and γ-secretase, resulting in the liberation of its cytoplasmic domain as a signaling molecule (Junghans et al., 2005).
The γ Pcdhs are targeted to both sides of mature synapses, presumably to modulate the strength of the synaptic union (Phillips et al., 2003).Recently, it was shown that the γ cluster cytoplasmic domains can be released when the Pcdhs are inserted into the plasma membrane via γ-secretase-dependent processing, after which the domains translocate to the nucleus (Hass et al., 2005;Hambsch et al., 2005).Moreover, the cytoplasmic constant region of the γ cluster can activate the transcription of reporter genes downstream of different γ Pcdh promoters (Hambsch et al., 2005); however, all the γ Pcdhs are not expressed at the same time (Kohmura et al., 1998;Frank et al., 2005).In this study, I showed that the different Pcdh cleavage products can have different phylogenetic origins.Notably according to the proposed models, the cytoplasmic domain that enters the nucleus evolved together with the immediately downstream promoter region and not with its own promoter.This leaves some interesting questions and suggests some hypotheses about the role of the DCR.To begin with, does the variability in the cytoplasmic region play some role?Also, does the DCR provide some kind of specificity to the induction of the Pcdh locus?ADDITIONAL FILE 10. Analysis of the human γ cluster introns.Percentage identity plots obtained comparing the introns between (A) PCDHGA8 and PCDHGB5, (B) PCDHGB5 and PCDHGA9, and (C) PCDHGA2 and PCDHGA3, with the introns between the exons indicated on the left.The horizontal axis indicates the nucleotide position of the intron on the top, and the vertical axis shows the percentage identity between the introns indicated on the left with that on the top.CpG/GpC=0.60,CpG island where the observed to the expected CpG/GpC ratio lies between 0.6 and 0.75; CpG/GpC=0.75,CpG island where the ratio exceeds 0.75.
For example, it is possible that the cytoplasmic domain preferentially activates the immediately downstream gene.
Conclusions
Comparative analysis between genomic sequences of different species can provide information about the regulation of a particular gene or cluster.Understanding the recent evolution of a particular genomic region may provide further insight into the mechanisms of regulation.In this study, I have provided evidence that duplication events in the Pcdh clusters occur through unequal crossing-over between two and sometimes three different VREs.Interestingly, the unit of duplication always consisted of the EC-coding sequence and the promoter of a certain isoform along with the DCR-coding region of the isoform immediately upstream in the cluster.The reorganization of the clusters after the duplications events suggests new hypotheses about the regulation of the clusters that must be tested experimentally.
FIGURE 2 .
FIGURE 2. Models that explain the evolution of the Pcdh gene clusters.The different VRE, depicted with boxes, are identified with capital letters and divided into 5´-and 3´-regions for simplicity.The first region encodes the variable ECs and the last one the DCR.Models (A) and (C) require one unequal crossing-over event, whereas model (B) requires two.In each case, the resulting strand with the highest number of VREs is selected.
FIGURE 4 .
FIGURE 4. Alignment of cytoplasmic domains.(A) Alignment of human a cluster cytoplasmic domains.(B) Alignment of human β cluster cytoplasmic domains.PCDHB1 was excluded from the analysis because it has a considerably divergent sequence.DCR, divergent cytoplasmic region.
ADDITIONAL FILE 1 .
Evolutionary tree of the β Pcdh ECs.Unrooted evolutionary tree of the human and mouse β Pcdh ECs 1 to 4 obtained with the neighborjoining method.Bootstrap values over 500 are shown.The scale bar represents a phylogenetic distance of 0.1.
FIGURE 5 .
FIGURE 5. A possible scenario for the evolution of each cluster.The different VRE are represented by boxes as shown in Figure 2. The order of the events may change in some cases.(A) Evolution of part of the human β cluster.(B) Evolution of part of the murine β cluster.(C) Evolution of part of the α cluster.(D) Evolution of part of the γ cluster.
FIGURE 6 .
FIGURE 6. Analysis of the human b cluster DCRs.(A) Alignment of human β cluster DCRs.On the left, the three pairs of proteins with very similar ECs are indicated.On the right, three pairs of proteins that have similar DCRs are indicated.The sequences follow the cluster order.Proteins encoded by upstream VREs are on top, and those encoded by downstream VREs appear at the bottom.Genetic footprints were determined manually and are boxed except for the PGKEI footprint, which is underlined.The two crosses on the PCDHB16 sequence indicate the position of two stop codons that were removed from the DNA sequence when the computational translation was performed.PCDHB1 was excluded from the analysis because it has a considerably divergent DCR.(B) Unrooted evolutionary tree of the genomic sequences encoding the human β cluster DCRs obtained using the neighbor-joining method.The PCDHB1 DCR-coding sequence was excluded from the analysis.Bootstrap values over 500 are shown.DCR-coding sequences associated by genetic footprints are indicated with parentheses.The scale bar represents a phylogenetic distance of 0.1.
FIGURE 8 .
FIGURE 8. Analysis of the mouse β cluster DCRs.(A) Alignment of mouse β cluster DCRs.On the left, proteins whose ECs are orthologous to those of PCDHB5 are indicated.The sequences follow the cluster order as shown in Figure6.The footprint common to all the DCRs whose corresponding genes are immediately upstream of genes that encode ECs orthologs of PCDHB5 ECs is boxed.The PGKEI footprint is underlined.The slash on the Pcdhb3 sequence indicates a thymine deleted from the genomic sequence to introduce a reading frame shift when the computational translation was performed.Pcdhb1 was excluded from the analysis because it has a considerably divergent DCR.(B) Unrooted evolutionary tree of the genomic sequences that encode the mouse β cluster DCRs obtained using the neighbor-joining method.The Pcdhb1 DCR-coding sequence was excluded from the analysis.Bootstrap values over 500 are shown.DCR-coding sequences immediately upstream of genes that encode ECs orthologs of PCDHB4 and PCDHB5 ECs are indicated with parentheses.The scale bar represents a phylogenetic distance of 0.1.
ADDITIONAL FILE 4 .
Analysis of the rat α cluster DCRs.(A) Alignment of rat α cluster DCRs.On the left, the two pairs of proteins whose genes are associated by a putative duplication event are indicated.On the right, pairs of proteins that share genetic footprints are indicated.The sequences follow the cluster order as shown in Figure6.The γc-like Pcdhs were excluded from the analysis, and the first constant amino acids were included.Genetic footprints were determined manually and are boxed.(B) Unrooted evolutionary tree of the genomic sequences that encode the rat α cluster DCRs obtained using the neighbor-joining method.DNA sequences of the γc-like Pcdhs were excluded from the analysis.Bootstrap values over 500 are shown.DCR-coding sequences associated by genetic footprints are indicated with parentheses.The scale bar represents a phylogenetic distance of 0.1.
FIGURE 10 .
FIGURE 10. Analysis of the human α cluster DCRs.(A) Alignment of human α cluster DCRs.On the left, the two pairs of proteins whose genes are associated by a putative duplication event are indicated.On the right, one pair of proteins that share the same genetic footprint is indicated.The sequences follow the cluster order as shown in Figure 6.The γc-like Pcdhs were excluded from the analysis.The genetic footprint was determined manually and is boxed.(B) Unrooted evolutionary tree of the genomic sequences that encode the human α cluster DCRs obtained with the neighbor-joining method.DNA sequences of the γc-like Pcdhs were excluded from the analysis.Bootstrap values over 500 are shown.DCR-coding sequences associated by the same genetic footprint are indicated with a parenthesis.The scale bar represents a phylogenetic distance of 0.1.
FIGURE 11 .
FIGURE 11.Analysis of the human α cluster introns.Percentage identity plots obtained by comparing the introns between (A) PCDHA5 and PCDHA6, and (B) PCDHA6 and PCDHA7 with the introns between the exons indicated on the left.The horizontal axis indicates the nucleotide position of the intron on the top, and the vertical axis shows the percentage identity between the introns indicated on the left with that on the top.CpG/GpC>0.60,CpG island where the observed to the expected CpG/ GpC ratio lies between 0.6 and 0.75; CpG/GpC>0.75,CpG island where the ratio exceeds 0.75.
FIGURE 12 .
FIGURE 12. Analysis of the human γa group DCRs.(A) Alignment of the human γa group DCRs.On the left, pairs of proteins with very similar third and fourth ECs are indicated.On the right, pairs of proteins that share genetic footprints are indicated.The sequences follow the cluster order as shown in Figure 6.Genetic footprints were determined manually and are boxed.(B) Unrooted evolutionary tree of the genomic sequences that encode the human ga group DCRs obtained using the neighbor-joining method.Bootstrap values over 500 are shown.DCR-coding sequences associated by genetic footprints are indicated with parentheses.The scale bar represents a phylogenetic distance of 0.1.
ADDITIONAL FILE 7 .
Evolutionary trees of the human and the mouse γb group ECs.Unrooted evolutionary tree of (A) the human and (B) the mouse γb group ECs 2, 3, and 4, obtained with the neighbor-joining method.Bootstrap values over 500 are shown.The scale bar represents a phylogenetic distance of 0.1.ADDITIONAL FILE 8. Evolutionary trees of the human and the mouse γb group DCRs.Unrooted evolutionary tree of (A) the human and (B) the mouse γb group DCR-coding sequences obtained with the neighbor-joining method.Bootstrap values over 500 are shown.The scale bar represents a phylogenetic distance of 0.1. | 6,699 | 2008-04-01T00:00:00.000 | [
"Biology"
] |
Coarsening with a frozen vertex
In the standard nearest-neighbor coarsening model with state space $\{-1,+1\}^{\mathbb{Z}^2}$ and initial state chosen from symmetric product measure, it is known (see~\cite{NNS}) that almost surely, every vertex flips infinitely often. In this paper, we study the modified model in which a single vertex is frozen to $+1$ for all time, and show that every other site still flips infinitely often. The proof combines stochastic domination (attractivity) and influence propagation arguments.
Introduction
As in our earlier paper [1], we study and compare the long time behavior of two continuous time Markov coarsening models with state space Ω = {−1, +1} Z d . One, σ(t), is the standard model in which at time zero {σ x (0) : x ∈ Z d } is an i.i.d. set with θ ≡ P (σ x (0) = +1) = 1/2 and then vertices update to agree with a strict majority of their 2d nearest neighbors or, in case of a tie, choose their value by tossing a fair coin. The modified model, σ ′ (t), is the same except that σ ′ at the origin (0, 0....0) is frozen to +1 for all t ≥ 0.
For d = 2, it is an old result [2] that in the standard σ(t) model, almost surely, every vertex changes sign infinitely many times as t → ∞. The main result of this paper (see Theorem 2.7) is that the same is true for the frozen model σ ′ (t) on Z 2 . It is believed (see, for example, Sec. 6.2 of [3]), but not proved, that the d = 2 behavior of σ remains valid at least for some values of d > 2. If this were so, then the arguments of this paper would show the same for the corresponding σ ′ model.
In the previous paper [1] we considered models with infinitely many frozen vertices and in this paper a model with a single frozen vertex. It would be of interest to study models with finitely many, but more than one, frozen vertices; in this regard, see the remark following the proof of Theorem 2.8 below.
Results
In this section we fix d = 2. We also use the standard convention that the updates are made when independent rate one Poisson process clocks at each vertex ring.
Let A T denote the event that the "right" neighbor of the origin (at x = (1, 0)) is −1 for some t ≥ T . Let A 1 T ⊂ A T denote the event that the right neighbor of the origin is the first neighbor to be −1 at some time t ≥ T (more precisely, that no other neighbor is −1 at an earlier time in [T, ∞]). Let B L,s for s ∈ {−1, +1} ΛL (where Λ L = {−L, −L + 1, ...., L} 2 ) denote the event that σ ′ (0)| ΛL = s and write B L,+ when s ≡ +1. We denote the probability measure for the frozen origin σ ′ (·) model by P ′ and that for the regular coarsening model σ(·) by P .
The result is an easy consequence of symmetry among the four neighbors of the origin and the fact that P (A 0 ) = 1 (indeed, for all T , P (A T ) = 1 -see [2]). Let Σ L T denote the sigma-field generated by the initial spin values and clock rings and coin tosses up to time T inside the box Λ L .
Proof. Letσ L T (.) denote the model with the spin values at all sites in Λ L frozen to +1 from time 0 up to time T and with the spin value at the origin remaining frozen at +1 thereafter. Denote the corresponding probability measure byP L T . Under the standard coupling,σ(·) stochastically dominates σ ′ (·), so we have To continue the proof, we will use the following result about the "propagation speed" of influence between different spatial regions: Proof. Let L ′ >> L and note that given B L ′ ,+ , (D L T ) c can occur only if there is a nearest neighbor (self-avoiding) path between the boundaries of the two sets, Z 2 \ Λ L ′ and Λ L , along which there are clock rings occurring in succession between times 0 and T . Any such path is at least of length L ′ − L (i.e., contains at least L ′ − L vertices besides the starting one).
Consider a particular path γ of length m ≥ L ′ − L. For each m there are no more than 3 m such paths from each boundary point and the time it takes for successive clock rings along γ is at least S m = m i=1 τ i where the τ i are i.i.d. exponential random variables with parameter 1. By the exponential Markov inequality, for any α > 0, Therefore, since there are at most CL ′ possible starting points (for some constant C), where C(α, T, L) is a constant depending on α, T and L. Taking α > 2 and the limit as L ′ → ∞ completes the proof of the lemma.
Proof. We continue the proof of Proposition 2.2. Pick ǫ > 0 and fix T and L. By Lemma 2.3, ∃ L ′ such that P (D L T |B L ′ ,+ ) ≥ 1 − ǫ . Therefore, given B L ′ ,+ , with probability at least 1−ǫ, σ t (·) positively dominatesσ T L (·) for 0 ≤ t < S, where S = inf{t > 0 | σ t (0, 0) = −1}, and sõ Taking the limit as ǫ → 0 completes the proof of Proposition 2.2. Now let Σ T denote the sigma field generated by the initial assignment of spins on Z 2 and the clock rings and coin tosses on Z 2 up to time T .
Proof. This is a straightforward consequence of the preceding corollary. It follows that with probability one, σ ′ (1,0) (t) changes sign infinitely many times as t → ∞.
Letting ǫ → 0 completes the proof of the first part of the theorem. The second part then follows because by stochastic domination (attractivity) and the results of [2], σ ′ (0,0) (t i ) equals +1 for an infinite sequence of t i → ∞.
The next theorem follows from a modified version of the proof of Theorem 2.7.
Proof. For any site z other than the origin, and for L much larger than say the Euclidean norm of z, we consider the unfrozen σ model in which at time zero all the vertex values are set to +1 in the box of side length 2L, centered at z/2 (so that the origin and z are located symmetrically with respect to this box). Then with probability 1/2 the vertex at z flips to −1 before the one at the origin flips and until just after that time, there is no difference between the frozen (at the origin) σ ′ model and the unfrozen σ model. Hence there is probability at least 1/2 in σ ′ that z will flip to minus. By applying the methods used in the proof of Theorem 2.7 (but with 1/4 now replaced by 1/2), we conclude that z will flip infinitely many times with probability one. We note that the line of reasoning in the proof of the last theorem could have also been used to give a modified proof of Theorem 2.7 with 1/4 replaced by 1/2. A more interesting remark is the following.
Remark 2.9. For the process σ ′′ with some finite set S of vertices frozen to +1, it is possible to show by an extension of the arguments used in this paper that there is a finite deterministic S ′ ⊇ S such that all sites in {Z 2 \S ′ } flip infinitely many times in σ ′′ (·) with probability one. In some cases, S ′ must be strictly larger than S -e.g., when S = {(−L, −L), (−L + L), (+L, −L), (+L, +L)}, S ′ includes all of Λ L . One may also consider processes where some vertices are frozen to −1 and some to +1. We expect to to pursue these issues in a future paper. | 1,974 | 2015-12-28T00:00:00.000 | [
"Mathematics"
] |
A scale-adaptive object-tracking algorithm with occlusion detection
The methods combining correlation filters (CFs) with the features of convolutional neural network (CNN) are good at object tracking. However, the high-level features of a typical CNN without residual structure suffer from the shortage of fine-grained information, it is easily affected by similar objects or background noise. Meanwhile, CF-based methods usually update filters at every frame even when occlusion occurs, which degrades the capability of discriminating the target from background. A novel scale-adaptive object-tracking method is proposed in this paper. Firstly, the features are extracted from different layers of ResNet to produce response maps, and then, in order to locate the target more accurately, these response maps are fused based on AdaBoost algorithm. Secondly, to prevent the filters from updating when occlusion occurs, an update strategy with occlusion detection is proposed. Finally, a scale filter is used to estimate the target scale. The experimental results demonstrate that the proposed method performs favorably compared with several mainstream methods especially in the case of occlusion and scale change.
Introduction
Video surveillance is significant for public security [1], while object tracking is the key technology of video surveillance [2,3]. Object tracking has many practical applications in video surveillance, human-computer interaction and automatic driving [4][5][6]. Object tracking aims to estimate the target position in a video sequence by giving an initial position of the target. Due to the deformation, illumination variety, occlusion, and scale change, it is possible that the appearance changes significantly. Therefore, the usage of the powerful convolutional neural network (CNN) features to describe the target appearance can effectively improve the success rate and accuracy of object-tracking algorithms [7,8].
CNN pre-trained for image classification, such as AlexNet [9] and VGG [10], are used to extract target features in most deep-learning-based trackers. Those methods have high computational complexity as they need to extract the features of positive and negative samples. While correlation filter (CF)-based trackers have shown efficient performance by solving a ridge regression problem in the Fourier frequency domain. Therefore, the combination of CNN features and efficient CFs has been exploited in object-tracking research. The multi-channel features are extracted from CNN instead of the handcrafted features for CF-based methods, which achieves the state-of-the-art results on object tracking benchmarks [11,12]. However, there are still some problems: 1. Target localization relies heavily on the high-level features from CNN, such as the outputs of the last layer of VGG network. The high-level features contain more semantic information but lack of detailed information of the target. 2. The weights are fixed in the fusion of response maps.
Inaccurate predictions are inevitable if the filters with a large error have large weights. 3. The filters need to be updated to maintain its discriminative ability as the target appearance changes in the video sequence. Generally, CF-based trackers adopt the updating strategy in all frames, even the frames in which the target is occluded, which degrades the discriminative ability of the filters and results in the loss of tracked target.
1. The CNN with residual structure is used to extract features. DenseNet [13] and Inception [14] are two networks with residual structure. However, the features from DenseNet are not comparable to those of ResNet [15] in terms of success rate and accuracy of tracking. Meanwhile, the features from Inception have large number of channels, and accordingly its implementation is time-consuming. Thus, ResNet is used in this paper due to its advantages of success rate, accuracy, and efficiency. The residual structure of ResNet integrates low-level and high-level features with identical mapping [16]. The high-level features contain more fine-gained details, which are more robust to similar objects and background noise. 2. The response maps are fused based on AdaBoost algorithm. AdaBoost algorithm enlarges the weights of the filters with small error rates while reduces the weights of the filters with large error rates. Consequently, the stronger the discriminative abilities of the filters are, the greater roles they can play in the tracking process. 3. An update strategy with occlusion detection is adopted. When the target is occluded, there are many local maxima in the response map, so the number of effective local maxima (NELM) is used to detect occlusion. If the occluded target is detected, the filters stop the update to avoid the interference of background information. 4. Scale filters are used to track the scale change of the target to solve the scale variation problem.
In the remainder of this paper, we first review some related works in Section 2. Then, we propose a scaleadaptive object-tracking algorithm with occlusion detection in Section 3. The experiments and comparisons are reported in Section 4. We end the paper with a conclusion in Section 5.
Tracking by deep learning
Visual representation is significant in the tracking algorithm [17]. The traditional tracking-by-detection methods focus on the discriminative ability of the discriminator, for example, Zhang et al. [18] proposed a multiple experts using entropy minimization (MEEM) scheme based on support vector machine with hand-crafted features. While, most methods based on deep learning usually focus on the expression of the target feature. Wang and Yeung [19] trained a multi-layer auto-encoder to encode the appearance of the target. Li et al. [20] used face dataset to train CNN and then used the pre-trained CNN to extract face features for tracking. Nam and Han [21] trained a convolutional network to extract target features in multi-domain way and used full connection layers to classify target and background. Hong et al. [22] used the features extracted by a pre-trained CNN and learned discriminative saliency map with back propagation and then used a support vector machine as the classifier. Pu et al. [23] used back propagation to generate attention map to enhance the discriminative ability of full connection layers in [21]. Wang et al. [24] built two complementary prediction networks based on the analysis on the features of the different levels of CNN to obtain the heat map for target localization. Lu et al. [25] proposed a deconvolution network to upsample the features with low spatial resolution; then, the features of the low and high levels are fused by the sum operation to get better target representation. Song et al. [26] solved the problem of unbalanced positive and negative samples based on the generative adversarial networks [27]. The above methods usually need to compute the features of a large number of candidates, while our method only needs the features of search region. Moreover, these methods need back propagation for time-consuming online update; in contrast, our method can online update efficiently thanks to linear interpolation.
Tracking by correlation filter
CF-based methods have shown continuous performance improvements in terms of accuracy and robustness. Bolme et al. [28] proposed a minimum output sum of squared error filter. Meanwhile, peak-to-sidelobe ratio (PSR) was introduced to measure the confidence of response map. It was pointed out that PSR would decrease to about 7.0 when tracking failed. Henriques et al. [29] employed the circulant structure and the kernel method (CSK) to train filters on the basis of [28]. Henriques et al. [30] used the cyclic shift of target features and the diagonalization property of cyclic matrix in the Fourier domain to obtain closed-form solutions based on kernel correlation filter (KCF), which improved the effectiveness and efficiency of the algorithm. Danelljan et al. [31] used position filter and scale filter for discriminative scale space tracking (DSST). Li and Zhu [32] applied scale adaption with multiple features (SAMF) to estimate the target scale adaptively. Danellian et al. [33] performed spatial regularization on the discriminative CFs to alleviate the boundary effect. Li et al. [34] introduced temporal regularization to [33]. Cen and Jung [35] proposed a complex form of local orientation plane descriptor to overcome occlusion; this descriptor effectively considers the spatiotemporal relationship between the target and background in CF framework. The above methods usually use hand-crafted features [36], [37], which lack robustness to target appearance variance. Furthermore, they update filters even when the target is occluded, which degrades the discriminative capability of filters. In our method, robust convolutional features deal with the target appearance variance. In addition, occlusion detection avoids the updating when the target is occluded. Similar to [31], we apply scale filters to track the target scale variance, and we decrease the number of the scale for efficiency.
Tracking combining deep learning and correlation filter
As the robustness of CNN features and the efficiency of CF, some algorithms combined the two methods. Danelljan et al. [38] used the feature extracted from only one layer of CNN on the basis of [33]. In order to use the multiresolution deep feature maps, Danelljan et al. [39] applied a continuous convolution operators for visual tracking, and after that, Danelljan et al. [40] proposed an efficient convolution operators based on [39] for efficiency. Ma et al. [41] developed CFs using hierarchical convolutional features (HCF). Li et al. [42] localized the target using the deep convolution operator in a large search area firstly, and then performed a shallow convolution operator around the location given by the first step. Li et al. [43] trained background-aware filters using a set of representative background patches as negative samples to handle background clutter, and trained scale-aware CFs using a set of samples with different scales to handle scale variance. Qi et al. [44] used convolution operation to model the correlation between the apparent features of the target and background, and employed a two-layer convolution network to learn geometirc structural information for scale estimation. Qi et al. [45] applied CFs on the multiple CNN layers, and then all layer trackers were integrated to a single stronger tracker by Hedge algorithm. Wang et al. [46] proposed a discriminative CFs network (DCFNet) to learn the convolutional features and performed the correlation tracking process simultaneously. Similar to [46], Jack et al. [47] used correlation filters as one layer of the neural network and proposed an end-to-end algorithm.
In some algorithms, ResNet is also used. Zhu et al. [48] proposed a CF-based algorithm using temporal and spatial features. They used two ResNets to learn spatial and temporal features, respectively. He et al. [49] used ResNet to extract features instead of the deep learning features from VGG and hand-crafted features in [40], but the response maps are fused with fixed threshold weights. The boundary effect in correlation filters is dealt with in the algorithms based on [40], but it is not a focus of this paper.
Our method seems similar to HCF, but there are some differences as follows. In HCF, typical CNN without residual structure is used to extract features which lack finegained details, and the response maps are fused with fixed weights. Moreover, in HCF, the filters are updated at all frames even when the target is occluded, which definitely declines the discriminative ability of the filters. In our work, the features are extracted with the pre-trained ResNet, which are more robust to background noisy and occlusion. In addition, the response maps are fused based on AdaBoost algorithm [50], which can choose more reliable weights. Meanwhile, the filters are updated while considering occlusion detection to ensure that the filters are not disturbed by noise. Figure 1 illustrates the procedure of our method. Our method initializes the filters according to the given target position. In the subsequent frames, we first crop the search area centered at the target location in the previous frame, and then, extract the CNN features from different layers of pre-trained ResNet. Secondly, the learned linear filters convolved with the extracted features to generate the response maps of different layers. Then, multiple response maps are weighted and fused to one response map. The target position is located according to the position of the maximum value in the fused response map. After that, in the estimated target location, the histogram of oriented gradient (HOG) features in the regions with different scales are used to find the optimal target scale by scale filters. Finally, the NELM and the PSR of the fused response map are performed to decide whether to update the filter or not.
Convolutional features
The convolutional feature maps from ResNet are used to encode target appearance. With the increment of CNN layer number, the spatial resolution of feature map is gradually reduced. For object tracking, low resolution is not sufficient to accurately locate target. Thus, we ignore the features from the last convolutional layer (conv5) and full connection layers. The features from different layers have different spatial resolutions that are relatively low compared with the input image. Therefore, bilinear interpolation is used to enlarge the resolutions of the features to the same size by: where h represents the features, x represents the features enlarged by interpolation operation, and the interpolation weight depends on the position of i and k-neighbor feature , where δ indicates the kernel width. Correlation filters w l are obtained by minimizing the objective function:
Correlation filter
where means circular correlation and λ indicates the regularization parameter. The optimization problem can be solved in Fourier domain and the solutions are: Here, X and Y are the fast Fourier transformation (FFT) F(x) and F(y), respectively. The over bar represents the complex conjugate. The symbol denotes the elementwise product. At the detection process, the features of the search patch are extracted and transformed to the Fourier domain, the complex conjugate isZ. The response map at conv-l layer can be computed by: where F −1 is the inverse FFT.
Response map fusion based on AdaBoost
In order to select the appropriate weights to fuse the response maps, AdaBoost algorithm is used for adaptive weight adjustment. The error rate e is computed between the normalized response maps at different layers f l , and the desired response map g peaked at the estimated target position in t − 1 frame is: where abs represents absolute value, Mean denotes the operation of average, the weight of conv-l layer β l is: Then, at t frame, the fused response map is: The target position (m,n) is estimated as: After the filters are initialized, the filters of different layers can correctly track the target in the initial frame, as the computation is performed in the initial frame. In other words, these filters have the same error rate; thus, the initial weights are both set to 0.5.
For scale estimation, we construct a feature pyramid center in the estimated target position. Let P×R denote the target size in the current frame, S be the size of the scale dimension, and a represent the scale factor. For each n ∈ − S−1 2 , . . . , S−1 2 , we crop the image patch of the size a n P×a n R and extract the HOG features; then, the scale response map R n is computed by: where where I is the FFT of HOG features, andḠ is the complex conjugate of Gaussian label. We can find then corresponded maximum value as: Then, the best scale of target is anP×anR.
Optimized update strategy with occlusion detection
The filters need to be updated to maintain discriminative ability as the target often undergoes appearance variance. However, when the target is occluded, the filters should avoid using background information to update, or it may cause model drift.
In minimum output sum of squared error (MOSSE) filter [28], PSR was used to describe the state of the response map to detect tracking failure. The peak means the maximum, and the side lobe is defined as the rest of the pixels, excluding an 11 × 11 window around the peak. The PSR is defined as PSR = g max −μ σ , where g max is the peak value, μ is the mean and σ is the standard deviation of the side lobe. The PSR is between 20.0 and 60.0 when the tracking is normal, while PSR drops to lower than 7.0 when the target is occluded or the tracking failed, as shown in Fig. 3. However, when the target moves rapidly or is of low resolution, the PSR stays in a low value, as shown in c and d of Fig. 3. Therefore, PSR cannot accurately reflect whether the target is occluded or not. In this work, NELM is employed to detect occlusion. Observing the response maps, we found that the response maps have more local maxima when the target is occluded than when the target is not occluded. As shown in Fig. 4, the red dotted lines show the locations of the local maxima in the 3D response map.
Let f denote the fused response map in current frame and f max be the peak of f. For each local maximum f i loc (i ∈ {1, 2, 3, . . . , L}), L is the number of local maximum except f max , the ratio between f i loc and f max is In the response map, some local maxima are possibly generated because of the background interference which needs to be avoided. The motion of the target between the initial frame and the second frame should be smooth. Therefore, in the response map obtained from the second frame of the video sequence, the local maximum except the peak (which is the target position) is taken as the threshold γ : In the response map of subsequent frame, T i is greater than the threshold γ ; then, f i is recorded as the effective local maximum, and the number of effective local maximum is expressed as: where Crad represents the number of elements in a collection. If the effective local maxima exist, i.e., NELM > 1, and the PSR is less than the given threshold, the algorithm does not update the filters. PSR is only used to evaluate the response map, similar to MOSSE, the PSR threshold is set to 7.000. If no effective local maximum exists or the PSR is greater than the given threshold, the algorithm allows updating the filters. In Fig. 3b, the PSR value is lower than the empirical value and the NELM is equal to zero, target occlusion is not detected, then the filters can be updated at this time. At t frame, the filter in (3) is represented by W t , A t is the molecule of W t , and B t is the denominator. The updating formulae are: C and D represent the molecules and denominators of the filters H t in (10) , respectively. The updating formulae are: where η p and η s are the learning rates for W t and H t , respectively.
Experimental
We compare the proposed method with the state-of-theart methods on OTB and VOT [51]. Pre-trained ResNet is used to extract features. The learn rate η p is set to 0.01, the same as [30], and η s is set to 0.01, the same as [31]. The scale factor is set to 1.087. The number of scale dimension is set to 5. The parameters are not changed during test. Our tracker is implemented by Python with PyTorch. The experiments are performed on Intel Core i7-6850K 3.6 GHz CPU and a NVIDIA GTX-1080Ti GPU. Our tracker runs at an average of 8 fps on GPU.
The algorithm is validated on standard tracking data sets OTB-13 and OTB-15. OTB-13 and OTB-15 contain 50 and 100 video sequences, respectively. These video sequences contain common challenges in target tracking, including illumination variance, scale variance, occlusion, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, background interference, and low resolution. OTB recommends three evaluation methods, one pass evaluation (OPE), spatial robustness evaluation (SRE), and temporal robustness evaluation (TRE). OPE gives the exact location of the target in the first frame for initialization and then runs the tracker on all frames. Unlike OPE, SRE initializes the tracker by moving or scaling the target position in the first frame, including four kinds of center offset, four kinds of angle offset, and four kinds of scale variance. While, TRE runs the tracker at the part of the whole sequence. The algorithm is evaluated by calculating the precision score and success rate in three evaluation methods. Precision ε is the Euclidean distance between the center positions of the tracked target and the ground truth: where (x c , y c ) and (x g , y g ) denote the locations of the tracked target center and the real target center. Precision score is defined as the percentage of the frames whose precision values are lower than a certain threshold in the total number of frames. The overlap rate is the ratio of the overlap area of the ground truth and the bounding box obtained by the tracking algorithm to the total area of the two boxes: where Bbox and Gbox represent the bounding box obtained by the algorithm and the ground truth, respectively. The success score is the percentage of the number of the frames whose overlap rates are greater than a certain threshold.
Fig. 5 Overlap success plots and Distance precision plots over 100 benchmark sequences in OPE, SRE, TRE
While the scale variance of the target is processed in SAMF and DSST.
Results over all OTB
The results of the algorithms are evaluated in three methods. In Fig. 5, the score in overlap success plots legend represents the area under curve (AUC), the score in distance precision legend represents the distance precision score at a threshold of 20 pixels. Our algorithm achieves the best results in OPE. In TRE and SRE, HCF uses more convolution layer features for target localization, the accuracy score of proposed algorithm is slightly lower than that of HCF. Please notice that some algorithms, including CFNet, do not supply the data for SRE and TRE. Table 1 shows the comparison results at the distance precision threshold of 20 pixels and the overlap threshold of 0.5 on OTB-13 and OTB-15. Note that OTB-15 has more challenging videos than OTB-13. DP, OS, and SPEED represent the score of distance precision, the score of overlap rate, and the speed of the algorithm, respectively. The first and second best results in each row are highlighted by bold and italics. Under the above threshold, the tracking precision and success rate of the proposed algorithm are the best on OTB-15. However, the speed of this algorithm is about 8 frames per second (fps), as the interpolation operation lower the speed of the algorithm.
Results on VOT2016
VOT-2016 dataset contains 60 video sequences. There are two kinds of evaluation methods for VOT, namely supervised and unsupervised evaluation methods. Supervised evaluation method provides the target position to re-initialize the algorithm for continue tracking when the tracked target is lost. In contrast, the unsupervised evaluation method does not re-initialize the algorithm. In VOT, accuracy, robustness, and expected average overlap (EAO) [52] are used to evaluate the tracking results. Accuracy refers to the average overlap rate of tracking algorithm results, robustness refers to the average number of tracking failures (when the overlap rate is 0, it can be determined as failure), and EAO is the average of the average overlap rate on a short-term sequence.
The comparison results are shown in Table 2 and the results of the best algorithm are in bold, and the results of the second best algorithm are with italics. The accuracy and robustness of the proposed algorithm rank the second in the case of supervised. The supervised evaluation re-initializes when target occlusion occurs; then, the algorithms can track the target in the video sequence after occlusion. Thus, the advantages of our method is not remarkable in supervised evaluation. Without reinitialization, the accuracy and robustness of the proposed method are the best.
The A-R plot shows the performance of tracker directly. The abscissa and the ordinate of A-R plot are Accuracy and Robustness, respectively. Since the robustness has no upper bound, the reliability of VOT is replaced by robustness and the reliability is computed by R s = e −SM , where M represents the mean time-between-failures, S is the number of the successful object tracking frames since the last failure. The closer the dot is to the upper right corner, the better accuracy and robustness the algorithm has. In Fig. 6, the accuracy and robustness of the proposed algorithm are remarkably good.
Video with occlusion
The convolution operation further degrades the frame resolution. The proposed algorithm focuses on the solution of the occlusion problem, so the experimental results Occlusion is a great challenge for CF-based methods. The conventional filters usually need to be updated at all frames, including the frames in which the target is occluded, so it is possible that the background information is used to update the filter, and declines the discriminative ability of the filters. The standard CF-based trackers obtain the AUC scores of 0.560 (SAMF), 0.467 (KCF), and 0.464 (DSST). We use the features extracted by ResNet and a novel update strategy to improve the robustness to occlusion. In the video sequence with occlusion, the proposed method obtains the best AUC score (0.592), which is 5.1% higher than that of HCF (0.541), followed by DCFNet (0.584), as shown in the first row of Fig. 7.
Video with scale variation
The tracking overlap rate of our method is improved in the video sequences with target scale variation. The variance of target scale remarkably affects the position estimation, since the size of search area is highly correlated with the target scale. In the video sequences with scale variation, the standard CF-based trackers, without the consideration of scale variation, obtain the scores of 0.425 (KCF) and 0.343 (CSK), while the standard CF-based trackers considering scale variation can obtain the scores of 0.522 (SAMF) and 0.498 (DSST).
The features also can affect the scale estimation, so deep features are used in HCF without the consideration Fig. 8. Our method achieves the best AUC score (0.597), which is higher 9.5% than that of HCF (0.502). Figure 9 shows the qualitative evaluation of the proposed method, HCF, DCFNet, KCF, and DSST on 8 video sequences including occlusion and scale variance. HCF performs well in fast moving (Skiing) while fails to track the occluded target (Girl2, Lemming). DCFNet is good at low-resolution sequences as the resolutions of the extracted features are the same as that of the input image, and it is prone to track unsuccessfully for fast moving, target deformation, and background clutter (Skiing, Human9, and Football). HOG features and kernel method are used in KCF to improve the operation efficiency, so it performs well in the cases of fast moving and background interference (Human9), but it is easy to fail when the target is occluded (Girl2, Lemming). In DSST, scale filter is employed to find the current scale (Dog1) of the target when the target scale changes. The proposed method applies the features extracted with ResNet, which are more robust to several challenges. At the same time, it is not easily disturbed by target occlusion due to the optimized update strategy. Therefore, the proposed algorithm can still track the target stably (Girl2, Lemming, Skiing, Football) in the cases of occlusion, deformed and background interference. We also use scale filters for the variance of the target scale (Human9).
Feature comparison
In order to compare the different combination strategies, the features from different layers of ResNet are combined, Table 3. The best results are in bold. On the OTB-15 dataset, the combination of the features extracted from conv3 and conv4 layers achieves the best results, which verifies the rationality of the feature selection of the proposed algorithm. Table 4 and the best result are in bold. The proposed method achieves the best results by combining the two methods, which verifies the effectiveness of the proposed update strategy.
Different networks
We compare the features extracted from different network structures, and the results are shown in Table 5. The best results are in bold. DenseNet [13] is also a network with residual structure, with fewer parameters and deeper network layers than ResNet, in the same time, its extracted features have more channels. According to the classification of OTB-15, we choose the video sequences with background clutter. What is more, we use only one feature with the same resolution from each network and we do not use any strategies. The experimental results show that the results of DensNet are slightly lower than ResNet. However, the results of ResNet and DensNet have achieved better results than VGG.
Failure cases
We show a few failure cases in Fig. 10. For the Panda sequence, the resolution is 312 × 233. When the target becomes very small, the proposed tracker fails to follow the target because it has few pixels, which can result in poor performance features. An alternative implementation using the feature from conv2 alone is able to track the target, because the conv2 features have higher resolution than the features from deeper layers. For the Biker sequence, the target suddenly moves violently beyond the search area of the proposed tracker. This sequence is still a challenge sequence for many trackers.
Conclusions
Object tracking is a very useful public safety technology. The object tracking algorithm can track specific target in the surveillance video. In addition, combined with some ReID technologies [53], object tracking algorithms can be in used across camera scenes. A scale-adaptive object-tracking algorithm with occlusion detection has been proposed in this paper. ResNet was used to extract more robust features. In the tracking process, the response maps computed from the different layers are weighted and fused based on AdaBoost algorithm for accurate localization. The NELM and PSR of the response map were used for the optimized update strategy, which can handle the problem of target occlusion. Scale filters have been extended for scale tracking. Compared with the mainstream algorithms, the experimental results showed that the proposed method could track the target robustly and accurately even in the cases of occlusion and scale variation.
In the future, we will try to further improve the robustness of algorithm to low-resolution and the real-time performance. | 6,972 | 2020-02-17T00:00:00.000 | [
"Computer Science"
] |
Another Name for Liberty: Revelation, ‘Objectivity,’ and Intellectual Freedom in Barth and Marion
Abstract Karl Barth’s and Jean-Luc Marion’s theories of revelation, though prominent and popular, are often criticized by both theologians and philosophers for effacing the human subject’s epistemic integrity. I argue here that, in fact, both Barth and Marion appeal to revelation in an attempt to respond to a tendency within philosophy to coerce thought. Philosophy, when it claims to be able to access a universal, absolute truth within history, degenerates into ideology. By making conceptually possible some ‚evental’ phenomena that always evade a priori epistemic conditions, Barth’s and Marion’s theories of revelation relativize all philosophical knowledge, rendering any ideological claim to absolute truth impossible. The difference between their two theories, then, lies in how they understand the relationship between philosophy and theology. For Barth, philosophy’s attempts to make itself absolute is a produce of sinful human vanity; its corrective is thus an authentic revealed theology, which Barth articulates in Christian, dogmatic terms. Marion, on the other hand, equipped with Heidegger’s critique of ontotheology, highlights one specific kind of philosophizing—metaphysics—as generative of ideology. To counter metaphysics, Marion draws heavily on Barth’s account of revelation but secularizes it, reinterpreting the ‚event’ as the saturated phenomenon. Revelation’s unpredictability is thus preserved within Marion’s philosophy, but is no longer restricted to the appearing of God. Both understandings of revelation achieve the same epistemological result, however. Reality can never be rendered transparent to thought; within history, all truth is provisional. A concept of revelation drawn originally from Christian theology thus, counterintuitively, is what secures philosophy’s right to challenge and critique the pre-given, a hermeneutic freedom I suggest is the meaning of sola scriptura.
"A theology of docile abandon"? The problem of revelation and freedom
We can trace this critique, in regard to Barth, at least as far back as Erich Przywara's 1923Przywara's review of the (second, 1921 Römerbrief, wherein Przywara, opening a lasting debate between the two men, accuses Barth of having, in his insistence on a 'wholly other' God, "crossed out…the relationship between God and humanity."11 If there is no creaturely, 'natural' capacitas always already able to receive God's selfrevelation (a proto-revelation, says Rahner), how is revelation not just an obliteration of human freedom to cooperate with God's salvific mission? Recent scholars have sensed this critique's implications for the kind of intellectual freedom needed for doctrinal development, too. As feminist readers of Barth have noted, for example, if "dogmatic theology answer [s] only to the call of Jesus Christ, heard in the words of Scripture," then "preoccupation with what is called 'the problem of language' or the feminist critique of 'masculine God-talk'"12 appears illegitimate. The real thrust of this accusation is not so much against the content of Barth's theological views on specific controversies (such as women's role in the church), but against the fact that his method, because it insists on a too-radical rupture between God and humanity, cannot integrate humans' experiences-including their experiences as rational, philosophical, but also politically and socially embedded beings-into its account of God's relationship with humanity. Because he "dissolv[es] the classical synthesis between faith and reason, collapsing all theological understanding into an exercise of faith,"13 Barth remains "apolitical," "naïve,"14 and useless for any theology-such as a liberation or contextual theology-that wishes to grant marginalized persons' or communities' voices conceptual legitimacy.
The same charge is leveled at Marion's theology of revelation, and that by his earliest readers like Janicaud, who detected at the very outset of Marion's phenomenological-and thus supposedly strictly philosophical-project something "destined to lay the foundation platform available for a higher edifice."15 The relationship between philosophy and theology under consideration here is more complex than in Barth, however. As Janicaud concedes, Marion's third reduction, to 'givenness'-the operation which, from Reduction and Givenness onward, makes the saturated phenomenon and, with it, revelation, philosophically possible-remains just that: a philosophical, not theological, move. Yet the problem is the same as in Barth: Marion, to accommodate the excess of the 'call of the given' over objectity and comprehensibility,16 denudes the human subject of epistemic integrity. " [T]he qualifying terms, in any case, are neither human nor finite: pure, absolute, unconditioned-such is this call. It addresses itself, it is true, to a reader, to an interlocutor… But [this] interlocutor is in his or her turn reduced to his or her pure form, to the interlocuted 'as such.'"17 Janicaud's earlier critique of Levinas thus applies to Marion as well. "The reader, confronted by the blade of the absolute, finds him-or herself in the position of a catechumen who has no other choice than to penetrate the holy words and lofty dogmas… All is acquired and imposed from the outset, and this all is no little thing: nothing less than the God of the biblical tradition."18 While Janicaud laments this reentry of God into philosophy for philosophical reasons-phenomenology, as a critical Enlightenment project, must remain secular-theological critics of Marion raise similar concerns. If God's self-revelation, as a saturated phenomenon, defies and confutes what we ordinarily call rationality-which is why, precisely, the rational, metaphysical 'God' is an idol-then how does theological discourse not become unthinking prostration before revelation's self-professed postmen? John Caputo puts it well, albeit forcefully: Marion has done very little to overcome paganism and metaphysics… Indeed, I would say that he has done a great deal to reinstate it, that this theology of docile abandon to the Logos lends onto-theo-political power a helping hand in its most violent form. It does indeed have a great deal to do with how not to speak, with theological silence, namely, with silencing Dutch and Latin American theologians; it has a great deal to say about how not to speak about God, namely, in disagreement with the bishop. God may evidently do without being, but not without the bishop.19 As with Barth, then, the charge against Marion is that his attempt to free theology from metaphysical strictures, far from liberating thought, resurrects fideism. Because philosophy is the medium in which lived human experience matures and then musters a critique against all that is pre-given it, theology's rejection of philosophy in the name of God's sovereignty only fortifies the "onto-theo-politics"20 it wrongly claims to besiege.
We thus find ourselves in a difficult position, but not just because theology would offer no respite from philosophy's totalizing temptations, but also because the fideistic critique falls short of explaining Barth's and Marion's consistent commitments to intellectual freedom. The difficulty is more manifest in Barth's case. If the theologian of Basel ignores human experience and historicity in favor of unbending dogmatics, what does he mean when he writes-in the very text in which his view of God's distance is most pronounced!-that "on the thither side of clarity and thingliness (jenseits aller Anschaulichkeit und Dinglichkeit), on the thither side everything in the law of which those who possess it approve-the 'ethical kernel,' the 'idealistic background,' the 'religious feeling'-beyond all that is valued in European culture-'conduct,' 'poise,' 'race,' 'personality,' 'delicacy of taste,' 'spirituality,' 'force of character'-beyond all these things is set that which men have to lay before God,"21 that "[t]he theocratic dream comes abruptly to an end…when we discover that it is the Devil who approaches Jesus and offers Him all the kingdoms of this world"22? What then does he speak of at the end of his life in Chicago, where he proclaims that, circumstances permitting, he "would try and elaborate a theology of freedom… Of that freedom to which the Son frees us, and which as his gift, is the only real human freedom"23?
The charge of fideism is likewise complicated not only by Marion's regular attempts to legitimate the subject's resistance to ideology, as we will see, but also, and perhaps most centrally, by his project's explicit challenge to his own theological tradition's reigning Thomistic paradigm. That he would write that "[e]very pretension to absolute knowledge therefore belongs to the domain of the idol;"24 would continue, in this same text, to almost accuse his own church's philosophus pereniis-whose "theses," we should remember, "are not to be placed in the category of opinions to be debated one way or another, but are to be considered as the foundations upon which the whole science of natural and divine things is based"25 (Pius X)-of that selfsame idolatry26; and then would finally, also in this text, cajole us into epistemic clericalism, is a claim that merits further investigation. In undertaking such an investigation, however, I not only argue for 19 Caputo, "How to avoid Speaking of God," 147. 20 Ibid. 21 Barth,Romans,68 [43]. Brackets refer to the page numbers in the original: Barth, Der Römerbrief, with which I've modified Hoskins' translation where helpful. 22 Ibid., 479. 23 Qtd. in Godsey, "Epilogue," 79. 24 Marion, God without Being, 23. 25 Pius X, Doctoris Angelici. 26 Marion, God without Being, 81-82: "Such a choice -by a formidable but exemplary ambiguity -Saint Thomas did not make, the Saint Thomas who pretended to maintain at once a doctrine of divine names and the primacy of the ens as first conception of human understanding. For our purposes, the historically localizable heritage of this indecision matters little; all that counts is what provokes it: the claim that the ens, although defined starting from a human conception, should be valid as the first name of God. This claim does not easily escape the suspicion of idolatry, as soon as the ens, thus referred to God, is engendered not only in conceptione intellectus but also in imaginatione intellectus-in the imagination of the understanding, hence in the faculty of forming images, hence of idols." Marion's affinity with Barth-statements like "[t]o do theology is not to speak the language of gods or of 'God,' but to the let the Word speak us (or make us speak) in the way that it speaks of and to God"27 all but suffice on that front-but also ask a broader question about the relationship between philosophy, theology, and what these two 'sciences' offer in terms of intellectual freedom. Could an emphasis on the rupture of theology and philosophy, by appealing to a revelation 'wholly other' than or 'exceeding' (natural) reason, actually enhance, not limit, thought's freedom to critique the pre-given? In other words: could God's sovereignty be human freedom?
The answer-yes!-lies, for both Barth and Marion, in revelation's 'objective' or 'evental' status. Because revelation is an 'event,' by definition unforeseeable, it cannot be confined within any a priori epistemic schema; nor can revelation, because inexhaustibly objective and alien to its witness, be adequately interpreted afterward. All human thought, in the face of revelation's event, is rendered relative, provisional, and, precisely for that reason, not liable to compulsion. The crucial difference between Barth and Marion, however, lies in how they situate this freedom of thought vis-à-vis the relationship between theology and philosophy. Both agree that any philosophy that claims to arrive at an absolute, universal knowledge through the knowing subject's natural self-reflection-what Marion calls 'metaphysics'-compels thought. Such philosophies are, in fact, ideologies, as are their theological doppelgängers: 'liberal theology' (Barth), 'ontotheology' (Marion), and 'natural theology' (both). Where these two theorists of revelation disagree is, first, in their historical assessment of this problem. Marion impugns modernity as the metaphysical epoch par excellence, while Barth sees philosophical overreach within theology as the most refined version of humanity's rebellion against grace, a rebellion that spans all church history but that the Reformation decisively (although not wholly successfully) tried to prune.28 Second, though, and more importantly, whereas Barth understands revelation mainly in dogmatic, Christological-that is, explicitly theologicalterms, Marion, though he adopts Barth's stance on intellectual freedom, secularizes his doctrine of the Word of God to develop a philosophy of revelation. Using phenomenological concepts, Marion argues that God's self-revelation is one among other saturated phenomena. Non-divine revelations populate our everyday experience. There thus is a way of philosophizing, and not just of theologizing, that is (or can be) otherwise than ontotheological: phenomenology, insofar as this 'science of experience' remains everwilling "to elaborate new and rigorous paradoxes."29 In the end, though, the 'objectivity' of revelation on which both Barth and Marion initially rely-and which justifies their critiques of their respective orthodoxies (liberal theology for Barth, Neo-Scholasticism for Marion)-is the scriptural text's objectivity. The intellectual freedom they aim to establish thus finds its conceptual ground in a core doctrine of the Protestant Reformation: sola scriptura, a term which now achieves a philosophical, and not just theological, significance.
2 "Power unto liberty": revelation, freedom, and Barth's critique of philosophy Barth's insistence on God's utter transcendence, as articulated most radically in Romans (II, on which we will rely), and his rejection of philosophical speculation as a means of access to God-which he stigmatizes as "natural theology"-are most often afforded two distinct, but not necessarily incompatible, explanations. The first is that Romans' newfound "emphases upon divine freedom, sovereignty or autonomy"30 were the culmination of a break from prewar German liberal theologians, whom Barth had seen become personally imbricated in bellicose nationalism and whose views he, out of ethical conviction, now had to renounce. The second is that Barth's polemic against natural theology emerged out of his confrontation with Przywara, 27 Ibid., 143. Where Marion 'crosses out' the word 'God' in this text, I quote it simply as " God " (without quotation marks); I maintain his own quotation marks around " 'God', " by which Marion means the conceptual idol. whose defense of analogia entis as the center of Catholic intellectual culture offered Barth the chance to refight the Reformation.31 Although older, more convoluted views of Barth's development (such as Hans Urs von Balthasar's "two-turn theory" and its attempt to enlist Barth as an analogical theologian) are now fairly discredited,32 the extant consensus makes it difficult to understand the continuity of Barth's view on theology's relationship with philosophy outside of sectarian concerns.
These existing explanations are also often dissonant with the texts they aim to interpret. We are asked to believe, for example, that Barth turned against his liberal teachers not over some "epistemological point," but because of "concrete and highly contextual differences in politics, national allegiances, and doctrines of providence"33-that is, over ethical and political squabbles-when he wrote that an "honest humanitarianism… recognizes[s] that the strange chess-board upon which men dare to experiment with men and against them in State and Church and in Society cannot be the scene of the conflict between the Kingdom of God and Anti-Christ,"34 or that the God of revelation exists "beyond [human] good and evil (jenseits von ihrem Gut und Böse)"35 and ties Godself, necessarily, to no "ethics or sacrament (Moral und Sakrament)."36 Likewise, while it is incontestable that "the debate about the analogia entis was and is a manifestation of [other] basic differences"37 between Protestant and Catholic theology, that these other differences were in fact "more central," or that Barth acted merely as a partisan for the Reformed tradition while "obscur[ing]…critical issues"38 in his confrontations with Przywara and other defenders of analogy, does not account for why Barth identified analogy with the problematic intertwining of philosophy and theology he believed the Reformation had overcome.39 A closer reading, however, shows why Barth's two great opponents-liberal theology and the analogia entis-are really two sides of the same coin for him. This becomes clear once we understand why Barth sees intellectual compulsion as underlying both.
With regard to liberal theology, there remains considerable debate about who or what this category denotes vis-à-vis Barth's breakthrough. Usually this discussion centers on whether the 'theologians of Feeling' like Schleiermacher and Adolf von Harnack should be grouped together with other more metaphysically-oriented Protestant theologians, "a camp they"-that is, Schleiermacher and his heirs-"thought themselves beyond."40 As Ingrid Spieckermann has wisely pointed out, however, Barth's insight is that whenever we isolate any subjective datum, be it 'Feeling' or the "inward coming of the kingdom (innerlichen Kommens des Reiches)"41 as the normative criterion by which revelation must be interpreted, we actually only prolong modern philosophy's enframing of revelation within (de)finite, a priori epistemic parameters. The difference lies in that while metaphysically-inflected theologies use ontological categories to restrict revelation, 'theologies of Feeling' use affective categories, even as the latter are reified into the former in their moment of description. "Liberal theology turns out to be the derivative, modern, subjectiverelative flipside of [the] metaphysical model and it happens…first and foremost…in the genuine sublimation of the objectivity of the knowledge of God in the subjectivity of religious feeling and experience."42 Barth thus becomes obsessed with rejecting what he calls "the a priori of all representation."43 There is no finite and natural human faculty that necessarily re-presents God's revelation to the knowing subject and that could thus set an epistemically independent benchmark for revelation's legitimacy. Rather, revelation is entirely "objective." "Reformed teaching does not mean a knowledge which is based merely on feeling, which is peculiar to the individual and which therefore has no binding character. On the contrary, no more objective knowledge can exist."44 Why Barth sees this move to 'objectivity' as grounding intellectual freedom becomes clearest in contrast to the so-called liberal theologians' theories of revelation, which, despite their attempt to destroy and expose the 'philosophical' and 'metaphysical' influences "Hellenism (das Griechische)"45 had wrought upon primitive Christianity, repeat the coercive tendency latent in all philosophically-justified Christianity. Harnack, for example, initially attacks any "elaboration of the Gospel into a vast philosophy of God and the world, in which every conceivable kind of material is handled." This "conviction that because Christianity is the absolute religion it must give information on all questions of metaphysics, cosmology, and history (der Metaphysik, Kosmologie, und Geschichte Auskunft)" is just "Greek intellectualism," which must be overcome along with its doctrine that "Knowledge is the highest good."46 In place of such knowledge, though, historical criticism uncovers the 'essence' (Wesen) and absoluteness of Christianity all over again in the "subjective Act (subjektive That)."47 It finds that "the Gospel is nowise a positive religion like the rest" because it "contains no statutory or particularistic elements… [I]t is, therefore, religion itself (Religion selbst),"48 the religion of the Kingdom of God "as something inward (als etwas Innerliches)" and not as "the external rule of God."49 The latter, of particularistic 'Israelitish' provenance, is non-unique to the historical Jesus.50 All questions of its historical accuracy aside, the epistemic consequence of a move like Harnack's is that historical criticism, because scientifically objective and so rationally necessary, accesses a site of God's continuity with history (the inner Kingdom) from which intellectual dissent is de jure impossible. This site's historical and cultural vehicle is then sanctified. Harnack unsurprisingly reminds his German audience: in Christianity it is "our history [that] was… developed; for without that all-important transformation there would be no such thing as 'mankind,' no such thing as 'the history of the world' in the higher sense… [E]xtended to cover all human relationships and really observed, it contains a civilizing force of enormous strength."51 Christianity, the universal religion because rationally accessible through historical criticism, can rightfully "govern (regieren)"52 the particular religions.
In his invective against any "speculative optimism which thinks it is very well acquainted with God in nature, in history, and in the heart of man,"53 then, Barth, does not just nurse a political animus against his teachers' generation. Rather, he critiques the conceptual link between human knowledge and revelation that allows someone like a Harnack or an Ernst Troeltsch54-whose The Absoluteness of Christianity and the 43 See Barth's August 6th, 1915 letter to Eduard Thurneysen in Karl Barth -Eduard Thurneysen Briefwechsel, vol. 1 (1913-1921 54 Troeltsch did not sign the Manifesto of the Ninety-Three as Harnack did, but nonetheless was a vociferous advocate of German imperialism on the basis of (alleged) German cultural superiority. As Die Welt notes, "Troeltsch was seen as one of the most significant representatives of the intellectual movement in favor of war. He never tired of pointing out the 'decadence' and 'arrogance' of the French, and said of the English that they fought, 'like a physically deficient woman, with nothing other than the means of a calculating, poisonous tongue.' Germany's model soldiers, on the other hand, were culturally and morally superior to their adversaries, and were forced to war by the instinct of self-preservation against the dangers of 'barbarians, fanatics, and illiterates.'" See Bendikowski, "Ernst Troeltsch and the power of the pen." History of Religions follows Harnack's pattern, disavowing Christianity's metaphysical supremacy only to proclaim Christian 'inwardness' the truth of human religion as such-to underwrite that "we shall carry on this war to the end as a civilized nation, to whom the legacy of a Goethe, a Beethoven, and a Kant is just as sacred as its own hearths and homes."55 The link is that of epistemic certainty: as long as human beings can, from a universally accessible perspective within history, identify a definite point of continuity between divine and human activity, ideology arises. Moreover, insofar as 'philosophy' names the most sophisticated human striving for epistemic certainty, philosophy's admixture with theology becomes a prime ideological catalyst. Barth's first formal response to Przywara's Analogia Entis,56 the 1929 "Fate and Idea in Theology" lectures in Dortmund, make this extremely clear and mark the conceptual continuity between his rejection of both liberal and analogical theology. "The great temptation and danger consists in this, that the theologian would actually become what he seems to be-a philosopher."57 The quarrel here is not with philosophy as such, since philosophy and theology alike rely on human concepts. Both use "tools" of "ordinary human thought and speech with their own definite laws, possibilities and limitations."58 Rather, Barth condemns theology's attempt to subsume philosophy (and vice-versa), which would fix God's free irruption into human history to some finite historical reality or concept. When this latter reality is 'external,' theology becomes metaphysics; when 'internal,' liberal theology. Both sides of this "realist theism" are problematic: Classical theism like that of Aquinas's stresses both aspects equally… Within the bounds of realism, however, the possibility exists that one side might be weighted more heavily than the other or perhaps even placed in conflict with the other… A few examples can easily make it clear that these conflicts are not irresolvable, that they beckon to one another, that in one way or another they finally instantiate the same conception. Was anything really new said to Wallenstein, who while gazing at the stars heard a voice tell him, 'In your heart are the stars of your fate.'… Wasn't Luther on target when he lumped together the Anabaptist claim that the Holy Spirit was given in the individual's heart, with the Roman Catholic claim that it was given in papal authority? Or don't those two feuding siblings, pietism and rationalism, actually belong together, since the one elevates the subjective religious experience of the inner world into the criterion of theology, while the other does the same thing with the objective experience of the outer world?59 The problem with such 'realism,' which Barth explicitly identifies with analogia entis,60 is that it deifies thought's rational necessity. This "theological empiricism" thus "discovers God in fate-a fate that befalls human beings inwardly-outwardly, subjectively-objectively, something which becomes all too powerful for them and takes them prisoner, setting them in absolute dependence."61 While "[n]o theology can afford not to share completely the intentions evident here,"62 this thought of God as "causa sui, ens realissimum, and actus purus"63 culminates in, on God's side, the denial of grace; on humans' side, the denial of freedom. "If the basic orientation of realism is not to be something completely different from that of a person who hears God's Word, then the presupposition of an inherent human capacity will have to be met with…outright rejection," for "[i]n contrast to the whole possible range of human experiences, the Word says something new."64 Thus, "God's givenness to us and to the world-God's givenness in his revelation-cannot be understood as though it were somehow accessible to a set of precise conceptual formulations."65 Grace is therefore the operative category that opposes analogy's fatalism; hence why, for Barth, there is stricto sensu no concept of grace. Nor can grace be naturalized: "[n]o inherent grace or capacity for grace can be claimed in virtue of which the knower and the known would exist in relation to God through the analogia entis."66 If "God distinguishes himself from fate by the fact that he is not so much there as rather that he comes"67à-venir-then revelation must never be constructed out of some intellectual necessity or "theosophy."68 Theology is thus freer than philosophy for Barth because-at least when theology remains theology,69 namely, when it "found[s]…both the church and human salvation…on the Word of God alone, on God's revelation in Jesus Christ, as it is attested in the Scripture, and on faith in that Word"70-in it, the objectivity of God's self-revelation renders all human discourse relative and provisional. This very provisionality, however, secures intellectual freedom. No system of human knowledge, not even the most philosophical or exhaustive, can compel its own acceptance; it stands relativized before revelation's 'krisis.' Hence why theology itself "will only be a theology of God's Word if it somehow makes the concept of predestination central to its concept of God."71 Barth signals hereby no agonizing Puritanism, but theology's liberatory epistemic function: so long as theology remains aware of grace qua grace, it tempers thought's absolutist pretensions. This does not subordinate philosophy to theology, since both sciences are prone to deify thought and thereby wage the perennial "conflict against grace that is man's own deepest and innermost reality."72 Indeed, God may act through philosophy as well as through theology.73 Nonetheless, theology, insofar as it conceptually permits and so at least formally submits itself to grace, remains a more potent critic of ideology than philosophy. Philosophy "at least aspires to say an ultimately definitive word, at least aims in that direction, at least considers it to be potentially utterable,"74 while theology knows it should reject this temptation. Barth's preference for "dialectical" theologizing, which tries not to submit a tension of opposites to sublation's finality, should be read in light of this concern.75 Although Barth will nuance his views on theology's relation to philosophy even further in the later Church Dogmatics-though we affirm, pace von Balthasar, that this inaugurates no 'catholicization' of his rejection of natural theology76-the earlier Römerbrief still best lays out Barth's notion of grace as critique. "The encounter of grace," we read there, "depends on no human possession; for achievement…is of no value and has no independent validity in the presence of God. Where God speaks (Wo Gott reden) and is recognized, there can be no speech (Rede) about human being (Sein), having, or enjoying"; hence, "when there arises the possibility of faith, this is intelligible only as an impossibility."77 Again: "The men of God know that faith is faith only when it is product of no historical or spiritual reality (geschichtliche und 67 Ibid., 40. 68 Ibid., 56. 69 I take up Marion's distinction between theology and theology from God without Being, 148-154. Theology (authentic, revealed theology) has as its "first principle…a hermeneutic of the biblical text that does not aim at the text"-historical criticism-"but, through the text, at the event, the referent… Hence the human theologian begins to merit his name only if he imitates 'the theologian superior to him, our Savior' in transgressing the text by the text, as far as to the Word"-i.e. interpreting Scripture Christologically. In contrast, "the status of a science makes of theology a theology," which, "instead of interpreting the text in view and from the point of view of the Word, hence in the service of the community, [has] only one alternative: either to renounce aiming at the referent (positivistic, 'scientific' exegesis) without admitting any spiritual meaning…or else to produce by himself, hence ideologically, a new site of interpretation, in view of a new referent." Barth's interpretation of his own views about the nature of God's relationship with humanity centers upon the question of whether Barth is able to define the being of the human in faith strictly in terms of the being of Jesus Christ without presupposing a prior determination of human being in God's creation. Von Balthasar's key mistake is that he failed to recognize that Barth's entire account of creation is predicated on avoiding precisely this presupposition… [V]on Balthasar failed to see the… 'decisive innovation' in [Barth's] doctrine of creation: Barth's decision to make the human Jesus of Nazareth" -i.e. God's self-revelation -"the condition for the possibility of knowledge of human being as such." In his own rejection of natural theology, Marion will make the same maneuver, subordinating any so-called 'natural' theology to revealed theology (that is, insofar as there is 'natural theology,' God's self-revelation in Jesus Christ is presupposed); see Marion,Givenness and Revelation,[26][27][28][29]Romans,: "Wo Gott reden und erkannt wird, da kann von einem Sein und Haben und Genießen des Menschen nicht die Rede haben." seelische Wirklichkeit). They know that faith is the unutterable reality of God (unsagbare Gotteswirklichkeit), that clarity of sight is no technique (Methode), no discovery of research."78 God's self-revelation is a pure 'event' as phenomenology will understand it, albeit "not an event in history at all," because "[g]race is the incomprehensible fact that God is well pleased with a man." But "only when it is recognized as incomprehensible (unbegeislich), is grace grace."79 Barth's theory of revelation thus disqualifies any philosophical propaedeutic for theology that purports to explain, rather than just proclaim, grace, e.g. as perfeciens non tollens naturae. "[N]o divinity which needs anything, any human assistance" of this kind, "can be God."80 Such assistance stands under "the No (das Nein) under which all flesh stands, the absolute judgment," which "is what God means for the world of men, time, and things (die Gott für die Welt des Menschen, der Zeit und der Dinge bedeutet)."81 This last statement is crucial. With it, Barth marks a dividing line between God's different meanings for theology and philosophy. Within theology-that is, starting from God's self-revelation and it alone-God has one meaning (Bedeutung), which dogmatics elucidates. Within philosophy-that is, "flesh," which denotes for Barth, as it did for Luther, "whatever is best and most outstanding in man…namely, the highest wisdom of reason"82-God's self-revelation means the provisionality of all knowledge. "[I]n the radical dissolution of all physical, intellectual, and spiritual achievements of men, in the all-embracing relativization (Relativierung) of all human distinctions and human dignities, their eternal meaning (ewige Bedeutung) is made known."83 Humanly achieved truths, above all philosophical ones, are not thereby meaningless, but rather always dubitable. Because "[t]he Gospel is not a truth among other truths," "it sets a question-mark against all truths."84 This is the sense in which 'krisis' means, above all, critique. But critique "does not mean…the denial or the depreciation of that which is not God… [I]t does mean," however, "that this latter factor is criticized, limited, and made relative."85 In political terms, this means freedom from ideology, from being the "slaves and puppets of things, of 'Nature' and of 'Culture,' whose dissolution and establishing by God" ideology "overlook [s]."86 In the presence of revelation's judgment, because no human thought is epistemically necessary, "what cannot be avoided or escaped from" can no longer, as under ideology, "become…confused with some necessity of nature," which "is in very truth a demonic caricature of the necessity of God."87 This is why-pace those who would see in it quietism or, worse, a reactionary hatred of social change-Barth closes Romans with a kind of apophatic political theology of endless deliberation. "Having freed himself from all idolatry," the Pauline figure "recognize[s] that relative possibilities"-philosophical, intellectual, and political-"are, in the midst of their evil, good, and…accept[s] them as shadows preserving the lineaments of that which is contrasted with them."88 Then, "when the tone of 'absoluteness' has vanished from both thesis and antithesis…room has perhaps been made for that relative moderateness and for that relative radicalness in which human possibilities have been renounced."89 The intellectual freedom that our extrication from natural theology cements thus culminates for Barth in the human subject's actual historical freedom. This is the "freedom of God (Freiheit Gottes)"90 in the subjective-genitive sense (as it was for Luther91), the "freedom in God's captivity (Freiheit in der Gefangenschaft Gottes)…wrought by Christ and which Grand Inquisitors of all ages have found so awkward and so dangerous."92 Because "[o]ppressed on all sides by God"-a statement which, out of context, might sound fundamentalist-those who "live in Pauline fashion…must dare to live freely"93 from all other oppressions and compulsions, above all the compulsion of thought Barth understands philosophy as imposing. Paradoxically enough, then, theology is, ultimately, what secures philosophy's own critical aspirations. The theologian's "work is, therefore, always exercising itself in criticism; it is lacerating (zerfetzend) and Socratic (sokratisch)"94; more Socratic, indeed, than philosophy itself.
"A more generous rationality": freedom in Marion's philosophy of revelation
Though some recent work has, at last, begun to note the strong affinities between Barth's theology and Marion's,95 it is likely von Balthasar played an outsized role in mediating this influence. Marion himself admits as much but draws no sharp distinction between the two. Both, he says, share a theological "starting point. God reveals himself-that means the self-manifestation of God from himself and according to his own rules."96 Another historical mediator, though more roundabout, may be Levinas, whose supposed discovery of l'Autrui has, with some controversy, been traced to Barth as well.97 While both Barth and Marion understand philosophy's totalizing temptations, however, the crucial difference between the two is that Barth understands the 'incomprehensible,' 'new,' 'gracious' self-revelation that relativizes all human knowledge in dogmatic terms. The "sternness of the Gospel of Christ" is that "which constitutes its tenderness and gentleness and its power unto liberty."98 Marion, on the other hand, moves to understand revelation's 'evental' character precisely with this term and the strictly philosophical pedigree it designates. 'Event' (Ereignis), drawn from Heidegger's 'methodologically atheist' toolkit, joins a host of others-'saturated phenomenon' first of all-that translate revelation, theologically overdetermined, into a philosophical category. Whether this imports a confessional commitment from Marion's part-a commitment which, despite its being debated ad nauseam, he never really tries to hideonly remains a pressing question if, rather unphilosophically, one confuses a concept's genealogy with its coherence. A far more powerful hermeneutic opens up if, reading Marion against the scholarly grain (and perhaps even himself), we understand his notion of the saturated phenomenon as what secures, for philosophy, the intellectual freedom which a Barthian doctrine of revelation authorizes within Christian dogmatics.
Like Barth, however, Marion also immediately ties the problem of intellectual freedom to the relationship between philosophy and theology because it is precisely in the former's pretense to devour the latter that compulsion of thought is born. Equipped with a Heideggerian vocabulary, Marion explicitly names this tendency 'ontotheology,' a term which "established a hermeneutic of the history of philosophy so powerful that it could not be matched, save by the one used by Hegel."99 For Marion, the operative center of Heidegger's triad is Λόγος, which denotes not just that 'God' and 'Being' are somehow related in themselves 91 Luther sees the hermeneutic shift from the objective-genitive or 'active' sense of the term iustitia Dei to its subjective-genitive or 'passive' sense as the core of his discovery; by an 'analogy' (analogia) -but not of being -he then uses the subjective-genitive to read Scripture in a way that is 'other' (alia) than the one pre-given by tradition. (Marion has, for example, no de jure issue with Thomas's apophatic esse100), but that the human being could access this relation adequately through the concept. 'Being' thus takes on a uniquely problematic role only because it is preeminently through this word that philosophy, as modern metaphysics, has "give[n] to the supposed contribution, the representing idea (Vorstellung) of [ontological] difference, a place within Being."101 Marion's initial theological forays make this very clear: 'Being' is the main, but by no means only, conceptual idol ontotheology mobilizes for its ends. That is why, for example, Nietzsche "remains an idolater"-not because he blatantly names 'God' 'Being,' but because "the divine…depends radically on the Nietzschean valuation of Being"102 as the will-to-power, and thus on some a priori conceptual valuation the subject carries out. Heidegger remains implicated in ontotheology for the same reason.103 To avoid such idolatry, then, a genuine theology must impose no 'preliminary conditions' on God's self-revelation, an epistemological decision that runs through Marion's entire oeuvre. "[O]ne could not do a 'theology of the Word,' because if a logos pretends to precede the Logos, this logos blasphemes the Word (of) God."104 Then, thirty years later: "Nothing less is necessary than leaving the essentially finite horizon of Being and beings…which could not harbor the unseen of God, much less disclose it and uncover it."105 Any philosophical logos or "discourse,"106 when it imposes epistemic preconditions upon theology, refracts revelation through the concept's finitude and so makes God's appearance-as infinite reality-impossible. This concern's origins are, of course, rooted in a Christian understanding of God, for if a screen of finitude must filter God's self-revelation, then this revelation cannot be, precisely, a self-revelation. Revelation becomes "a piece of information"107 distinct from God's own self. But "God's intention" in revealing Godself "is not so much to make himself known as to make himself re-cognized, to communicate himself, to enable men to enter into a communication that puts them into communion with him."108 Thus, any theory of revelation that bifurcates God's self from God's revelation is idolatrous, because it ignores the selfcommunicatory divine intention underlying revelation-a view traceable, in fact, to Luther.109 Nevertheless, unlike Barth, Marion attempts to give this priority of the donor over the recipient in the act of revelation a philosophical, and not just theological, intelligibility. To do so, Marion turns to Husserlian phenomenology, which in its 'principle of all principles'-namely, that "what offers itself originarily to us in 'intuition' must be taken wholly as it gives itself, but also only in the limits within which it gives itself"110-yields, at last, a philosophy of revelation. Reduction and Givenness marks the first foray into this philosophy, finding as it does in Heidegger's "What is Metaphysics?" and its analysis of boredom a possible 'liberation' from Being. "If boredom liberates the there [Da] from the call of Being, it sets it free only in order to expose it to the wind of every other possible call; thus, the liberated there is exposed to the nonontological possibility of another claim… [T]he claim might" thus "exert itself under another name than that of Being, in the name of an other than Being."111 This 'third' reduction, still termed here "the reduction to and of this call," "transgress[es] the claim of Being"-Heidegger's 'Dasein-analytic'-but still "belongs to the phenomenological field for precisely the same reason that would allow the Dasein-analytic to replace the constitution of [Husserl's] transcendental I."112 The 'same reason' in question is objectivity, not (importantly) in the precise phenomenological sense of a phenomenon whose intention and intuition are adequate (objectity),113 but in the broader sense of the phenomenon's anteriority over its own reception. In Being Given, Marion tries to accord precisely such an objective consideration to phenomena that, in his view, flabbergast Kant's transcendental categories and thus metaphysics itself: phenomena "invisible according to quantity, unbearable according to quality, absolute according to relation, irregardable according to modality"114-the event, the idol, the flesh, and the icon, respectively.115 Irrespective of their genealogies, these are here secular, philosophical terms. The extent to which one believes Marion successfully articulates a philosophy, and not just theology, of revelation thus depends in large part on whether one finds his descriptions of these saturated phenomena in non-confessional contexts-of a painting as an idol, of the face of the Other as an icon, etc.-convincing or not. This openness to descriptive critique, however, hardly qualifies something as dogmatic, let alone intrinsically 'theological.' The more serious critique of the saturated phenomenon, as we have seen, is that it seems to rob the subject of epistemic integrity. In fact, Marion goes so far as to claim to overcome the subject in favor of l'interloqué ('besought,' Reduction and Givenness) or l'adonné ('gifted (one),' Being Given). "[I]n admitting the blow of the claim, the interloqué acknowledges first and definitively having renounced the autistic autarky of an absolute subjectivity. This compulsion [!] to alterity (whatever it may be) precedes even any form of intentionality or of Being-in-the-world."116 The saturated phenomenon's claim might be so powerful, in other words, that its reception transforms the recipient's own constitution as metaphysics understands it. "Individualized essence" would thus "no longer precede relation." Rather, "relation here precedes individuality. And again: individuality loses its autarchic essence on account of a relation that is not only more originary than it, but above all half unknown, seeing as it can fix one of the two poles-mewithout at first and most of the time delivering the other, the origin of the call."117 This suggests that the 'me' might be helplessly effaced and traumatized 'from elsewhere' and thus have the meaning of her ownmost self dictated by "transcendent" phenomenality, a point which Michel Henry perceptively raised and saw as but a subtler return to "ontological monism."118 This criticism, however, not only overlooks the fact that the epistemological subject's transformation in the light of saturation's brilliance is always, for Marion, able to be refused,119 but also that the saturated phenomenon's excess inaugurates not a dogmatic restriction of meaning but a corresponding excess of signification itself-the "infinite hermeneutic." 112 Ibid.,197. 113 The relationship between intuition and intention, terms Marion draws from Husserl, lies at the heart of Marion's phenomenological claims. Intuition is whatsoever is given in experience (such as sense data), whereas intention is the signification the recipient of that experience (the experiencer -in metaphysics, the "subject") brings to that experience. In the experience of any given phenomenon, intuition and intention can have one of three relationships: either intention exceeds intuition ("poor phenomena"), intention adequates intuition ("common-law phenomena"), or intuition exceeds intention ("saturated phenomena"). Whether one finds Marion's phenomenological claims convincing, or not, ultimately rests on whether one believes that an experience wherein intuition exceeds intention is actually experienceable at all (for Husserl, there is no such experience). See Marion, Being Given, 222 ff. See also n. 16, above. 114 Ibid., 199. Emphasis removed. 115 Ibid., [228][229][230][231][232][233]Reduction and Givenness,200. 117 Marion,Being Given,268. 118 See Henry, "The four principles of phenomenology", 1-21. 119 Marion, Givenness and Revelation, 117: "Faith does not enter in as an obscure replacement for the light of understanding, but in order to bring the understanding to will or not to will to accept to accept the coming of God in and as the event of Jesus… But this decision, which puts into operation the structure of call and response, rhymes, according to a logic as rigorous as it is surprising, with the fundamental structure of the event and of every phenomenon." Emphasis added. In understanding revelation, if not phenomenality tout court, according to the structure of a grace that is freely offered but can always be refused, Marion still remains a Catholic -a Protestant would insists on grace's, and perhaps the phenomenon's, irresistibility. Henry -who comes across, for this reason, as much more Protestant than Marion -criticizes Marion for insisting on Life's ability to resist all given phenomena.
In phenomenological terms, the saturated phenomenon appears when "intuition"-the donor act-"always submerges the expectation of the intention"120-the receptive act, which means that the intention, which operates according to concepts, cannot affix any single concept to what it receives in the intuition. Unlike Kant, however, Marion does not think a 'blindness' results; rather, the intention produces an endless parade of significations that will never adequate (but do still signify) the phenomenon in question. This double structure of saturation-its unpredictability or événementialité on the one hand, the infinite hermeneutic it produces on the other-ultimately characterizes all four types of saturated phenomena, despite the fact that Marion's oeuvre describes their interrelationship in evolving ways. Being Given, for example, for which "the Other showing himself in the icon of the face" still "gather[s] within it the modes of saturation of the three other types,"121 also describes the historical event proper and the idol according to an event/infinite hermeneutic binary. "For those…whom it enlists and encompasses, not one of their (individual) horizons will be enough to unify it, speak it, and especially, foresee it"; this "plurality of horizons forbids constituting the historical event into one object and demands…an endless hermeneutic in time."122 "The intuitive given of the idol," which has its own "purely unpredictable landing," likewise "imposes on us the demand to change our gaze again and again, continually, be this only as to confront its unbearable bedazzlement"; though our hermeneutic may thus be "solipsistic," it is nonetheless also unending.123 So too with the icon. "By gaze and by face, the Other acts, accomplishes the act of his unpredictable landing… Like the historical event, [he] demands a summation of horizons and narrations" and "happens without assignable end."124 In Excess sees these same characteristics playing out, finally, in the phenomenon of the flesh (although there for a stricter temporal reason).125 Marion will thus rather unsurprisingly admit that his work's "main theme is in the end the question of the event,"126 that "all…saturated phenomena turn out to be governed each in their own way by événementialité."127 Moreover, if we couple this claim with anotherthat his "entire project…aims to think the common-law phenomenon, and through it the poor phenomenon, on the basis of the paradigm of the saturated phenomenon"128-we glimpse an attempt to grant the freedom God's (self-)revelation enjoys in Christian theology to all appearing phenomena, including the human (flesh, Other).
Marion's insistence that "paradoxically, but logically, revelation, by virtue of the givenness that it alone performs perfectly, would accomplish the essence of phenomenality"129 should thus be read more as importing the form, rather than the content, of a 'strong' Christian theology of revelation into phenomenological philosophy. A possible "phenomenon of revelation…saturating phenomenality to the second degree, by saturation of saturation"130 would thus further confirm, rather than limit, all phenomenality's événementialité, since it would show that even the conceptual significations phenomenology coins to describe saturation can be exceeded. That Marion describes this possible phenomenon in terms of the Christ-event raises theological rather than philosophical difficulties, since ultimately this event's meaning seems to balloon to titanic proportions: "It could even be that history (in the case of time), civilizations (in the case of space), and spiritualities, literatures, cultures," etc., "are set forth only to decline, unfurl, and discover the paradox of Christ."131 Whether this implied 'Christosis' of phenomenality is not just a more refined natural theology (equating 'Christ' with givenness tout court), or whether Marion somehow avoids this charge by referring phenomenality back to one Trinitarian ὑπόστασις instead of an anonymous divinitas, is an open dogmatic question. This question does not strictly speaking import faith into philosophy, however, because "revelation" so defined has no doctrinal content. It denotes only a formal structure: a necessary openness to whatsoever content may 'eventally' arrive, which may (or may not) include a faith that may (or may not, we could not decide this phenomenologically) itself be an "illusion."132 Moreover, even the reception of any "Revelation" proper, that is, "of God by himself,"133 would still require the "absolutely infinite unfolding of possibilities" of this content, requiring a genuinely "theological progress"134 (i.e. doctrinal development) carried out within the believing hermeneutic community. "We are" thus not only "infinitely free in theology,"135 as Marion explicitly writes, but in philosophy as well, inasmuch as philosophy takes up the freedom for evental openness theology bequeaths it as a method.
Regardless of whether this constitutes an encrypted return to the liber mundi and thus a betrayal of Barth's vision, repeats Barth's subordination of the ontological to the Christological in a phenomenological key, or is just one more conceptual "counterblow"136 of theology upon philosophy, it is at least clear what Marion's framework opposes: namely, a metaphysical totality wherein phenomenal experience is submitted to some a priori discursive field. Marion sees Hegelianism as this trend's modern exemplar, since it attempts to apply Kant's universal moral law, valid for Kant only for noumena, onto the particularities of phenomenal history. This results, Marion claims, in "ideology" and "totalitarianism." "Ideology produces a world that from the outset is in conformity with the demands of discourse. Put another way, it claims to offer reasons for what is by referring to what ought to be, and it thus eventually authorizes destroying whatever is that does not conform itself to what ought to be."137 Ontotheology reaches its logical conclusion and non-confessional meaning: having 'killed God' due to God's non-conformity to metaphysical epistemology's strictures, it now aims this deicidal conceptual violence against human beings. In this movement, metaphysics' subjectivism exposes itself. Where metaphysics had once pretended to ground itself in external, 'objective' reality with the caveat that only what limited itself to the subject's finitude could ever constitute that reality, ideology reverses this relation. 'Objective' reality is now explicitly the projection of the metaphysical subject (in concreto, of the sovereign), of "the new gods" who "for a century have…ceaselessly arrived catastrophically."138 These 'gods' operate by compelling thought. "Feuerbach, Stirner, and Marx rely on the supremely idolatrous identification of 'God' with the absolute Knowledge that Hegel had constructed for them"139; ideology's "du sollst!"-which commands thought as much as (moral) action-thus results from conceptual idolatry, the "political form" of which "displays itself, par excellence, in Leninism and Nazism."140 Unlike other Catholic critics, however, Marion does not advocate returning to Neo-Scholastic metaphysics as a solution to such political idolatry. In fact, this "indiscriminate return to… Thomas Aquinas…form[s]" only "a kind of philosophico-theological ideology of the Catholic Church."141 Marion thus agrees with Barth: modern philosophy's subjectivism extends, rather than turns away from, far older metaphysical systems and their allied theologies.
Intellectual freedom thus consists, ultimately, in granting to thought the événementialité all phenomena enjoy. Because there is no transcendental subject, only l'adonné "said and spoken before being…born from a call that [she] neither made, wanted, nor even understood,"142 an absolute system of knowledge could neither delimit, in advance, the event of this utterly singular recipient, that of 'whatsoever' she intuitively receives, nor that of the interplay between these. To critique Marion here for importing a doctrine of "election"143 into phenomenology needlessly theologizes the point's Humean origins: because we cannot in principle certainly know that the future will resemble the past or present, no epistemic framework can ever consider itself finalized. This only means that the hermeneutic tasks Marion insists upon are impossible, however, if the phenomenological recipient's de jure ever-open malleability was radically realized de facto at each historical instant. That experience suggests that, de facto, this malleability is only relative cannot justify a return to humana natura, though, as e.g. Emmanuel Falque's response suggests (although, somewhat shockingly, only by appealing at once to a doctrine of Creation),144 since doing so only reifies, in terms of a metaphysical ground, a phenomenological fact that needs no such grounding to hold descriptively true.145 Yet this metaphysical groundlessness is what guarantees not only, on the one hand, the subject's free self-reception from herself qua flesh (Marion's debate with Henry here notwithstanding146) and thus her phenomenological right to dissent from 'transcendent Being' (Henry), but reason's inability to close itself off from the flux of incoming revelations that always challenges and expands reason as such. This 'evental' character of phenomenality "arouse[s]…a diversity without end of meanings, all possible, all provisionary, all insufficient."147 Any appearance can confute the certainties pre-given to us, but this absence of certainty is just another name for liberty. And so while it is true that against the "rationality [that] today often makes itself totalitarian, it will be quite necessary for freedom to oppose itself to it,"148 Marion offers the panacea, not of nonsense or unreason, but of a broader understanding of reason's own possibilities. We should "attempt to think of love itself as a knowledge," for example, "and a preeminent knowledge to boot."149 "The heart has its reasons, that reason does not know"150 (Pascal), but reasons they remain. In the end, [a] multiplication of modes of rationality then becomes possible. For it is certainly not a matter of leaping into irrationality. Quite to the contrary, irrationality arises from a very narrow definition of rationality, which limits itself to [phenomenological] objectness and to the transcendental constitution, expelling an immense crowd of phenomena into the shadows of supposed irrationality, phenomena that might very well have enjoyed full citizenship in a more generous kind of rationality.151 By granting revelation a philosophical, and not just theological, meaning, in other words, Marion dissolves the implicit competition between these two 'sciences' as it still exists in Barth. This dissolution, far from chaining thought to dogma, actually frees the former from older orthodoxies. Faced with the given's radical objectivity-which Marion articulates through the originally but not necessarily theological term 'revelation'-philosophy knows it will be ever-incomplete. This incompletion, however, is what grounds its deliberative, critical, and rational task-a task of which phenomenology is, perhaps, the vanguard. Theology, then, liberates philosophy formally (though not materially) for Marion, and although this renewed freedom permits revealed theology, it does not require it. Indeed, in contrast to ontotheology's "forced baptism"152 of reason, which sees in every metaphysical subject nothing less than the 'anonymous Christian,' the believer in ovo, the phenomenology of givenness admits that faith may not be given me. I am free: free to say no. No, I do not receive that-because I receive otherwise. 145 Objections to his project on the basis that "we will not find finitude as such within the thought of Marion" (see ibid.) thus operate under an ontological definition of finitude -namely, the persistence in presence of the ὑποκείμενον, the bearer of an unchanging but defined essentia -the epochē already precludes. The 'principle of all principles' cannot coexist with some presumed anthropological Hinterwelt, since this would preclude the possibility that the recipient could give herself to herself in a radically new appearance incommensurate with this anthropology. This objection is thus not properly speaking a phenomenological one; it is a metaphysical critique (perhaps quite legitimate) of the phenomenological method as a whole.
"No law except Scripture": sola scriptura as intellectual freedom
For both Barth and Marion, then, the category of revelation functions both as a dividing line between theology and philosophy and as a bridge between those modes of philosophizing and theologizing that leave room for thought's 'evental' possibility. In the first instance, the traditional distinction remains: philosophy remains within pure reason's realm, while theology admits of some special revelation. This first instance, however, remains coherent only under a delimited definition of both sciences: philosophy understood as metaphysics in its Scholastic and modern modes ('Natural Theology'), theology as the kerygma of God's self-revelation in Jesus Christ. Barth almost always, and Marion oftentimes, indulge this bifurcation because it easily allows them to contrast the radically new, unforeseeable, gracious character of the Christ event to philosophy's ideological temptations-temptations which they both see philosophy as having surrendered to, in a unique way, in modern political and intellectual history. In their more nuanced moments, however, both reject this oversimplified opposition and admit that philosophy, if it resists absolutizing itself, remains critical and free. "Good for him," Barth says, "if in the framework of philosophy he is nothing but a human thinker, a philosophus among others, reflecting fundamentally on the conception of human existence, and yet is still a witness to thinking based on divine revelation."153 This latter openness to revelation, which Barth understands mainly in dogmatic terms, Marion reframes as a philosophical concept; for him, this openness just designates the proper comportment to all given phenomena, of which God's self-revelation is one especially important instance. Rather than subordinating philosophy to theology, however, and thereby revoking the former's aspiration to intellectual freedom, this aperture to the given strengthens philosophy's critical capabilities. No a priori restrictions can limit the given in the way that metaphysicswhether deified as "the moral 'God'" (classical metaphysical theism to Kant) or "the 'new gods'"154 (modern ideology)-desires; thus, the "task of thinking"155 remains forever open, both to potentially ever-arriving phenomena and to better interpretations of ones already given (the infinite hermeneutic). Philosophy adopts a theological concept-revelation-but in so doing only becomes more, not less, philosophical.
In pursuit of intellectual freedom, Barth and Marion both turn to a renewed understanding of objectivity, which (again) denotes not metaphysical or phenomenological objectity, but the event that, even in its reception, remains epistemically independent from and imposes itself upon its witness according to this event's own logic. Quite counterintuitively, however, the event's objectivity-its freedom to appear from itself-confirms, rather than denies, its receptor's freedom. Both Barth's and Marion's work suggests, moreover (though only Barth says so explicitly), that this kind of objectivity is grounded in a specific theological maneuver: sola scriptura. A typical rejoinder to Marion's philosophy of revelation-that his analysis fails because it presumes in Jesus Christ an ungiven phenomenon-thus does not really get to the heart of the matter, since the revealed objectivity to which Barth and Marion turn is mediated through the objectivity of the biblical text, described by both in terms of "witness." Barth's reason for this is clear: Scripture, God's Word, is not by itself God's self-revelation, who is Jesus Christ and to whom church life and proclamation also witness. However, despite "their similarity as phenomena…there is also to be found between between Holy Scripture and present-day proclamation a dissimilarity in order, namely the supremacy, the absolutely constitutive significance of the former for the latter."156 The Scripture's textuality vouchsafes this significance. "On the written nature of the Canon, on its character as scriptura sacra, hang his [sic] autonomy and independence, and consequently his free power over against the Church and the living nature of the succession."157 For Barth, Scripture's autonomy defines Protestantism's theological breakthrough. For even if a given confession of faith is "good," "very original," and "interesting"; indeed, he says, "even if it had significance as a standard of the church, it could not even then be understood as a code of doctrine binding us by its If Marion's use of Scripture does have an accusatory bent, it is an internal one aimed at his own theological heritage, for his critique of the "epistemological interpretation of revelation"168-and, with it, the entire ontotheological, Neo-Scholastic ('Suárezian') tradition that authorized that interpretation-relies neither on some decisive figure (e.g. Basil of Caesarea) nor on some magisterial document. Although he invokes both such authorities when convenient, Givenness and Revelation bases its argument on Scripture alone. This is obvious not only from its structure as, basically, a running phenomenological commentary on the New Testament's original Greek, but from its self-avowed, albeit at times understated, method. Toward the end of his first lecture, Marion considers the essential point: is there 'natural theology'? "Vatican II understands what the textual evidence requires to be understood, and what the scholastic reading missed or masked: knowledge of God on the basis of creation…does not precede revelation."169 Marion's view is clear: the tradition is correct, but not because it is tradition. It is correct because "it understands…the textual evidence"-namely, if we look to the surrounding passages, the textual evidence of Paul's Romans! And finally, to tie it all together, Marion, like Barth, identifies theology's access to Scripture as the Holy Spirit's work. "The Spirit imposes himself as the phenomenal way of access to the iconic vision of the Father in the Son as Jesus the Christ"-the 'iconic vision' that takes place, as we've seen, in scriptural exegesis-"functioning as the director of the trinitarian [sic] uncovering of God, the only economy of theology."170 The theologian's intellectual freedom to critique the tradition is the Spirit's own freedom, but she only has this freedom when she takes Scripture's objectivity as theology's norma normans: sola scriptura.
But if there is (as Barth suggests, and as Marion develops) a philosophy-and not merely a (Christian, dogmatic) theology-that awaits revelation's possibility, then sola scriptura refers, perhaps, not only to the most powerful critique the Christian tradition ever developed against itself, but to one of the very meanings of critique. Phenomenality may itself be 'text,' the liber mundi, giving itself excessively and with no one fixed interpretation. We may be abandoned, philosophically as well as theologically, to a perpetual uncertainty, but this abandonment is freedom's photographic negative. This uncertainty, this necessary openness of thought, would then make deliberation possible. Λόγος would thus mean here again (as it did in the πόλις) the public dialogue that grounds intellectual plurality, in contrast to the authoritarian metaphysical principle this term became by late antiquity, and certainly for the Gnostics. Hence the full ambiguity of 'the Word,' the quintessentially Greek banner under which the Reformers marched to war against 'reason.' But this was precisely Luther's insight: the "urging of conscience and the evidence of things (urgente conscientia et evidentia rerum)"-the res always being for him Scripture's 'matter,' its πράγματα-run not opposed, but stand united against the tradition, "this Troy of ours (hanc Troiam nostrum),"171 which cloaks itself in the mantle of universal and thus unimpeachable rationality. To refuse the closure of reason would thus be the philosophical heritage of Protestantism, by which we are "loosed…from the whole compulsion of authority and regimentation, from the whole multiplicity of godlike powers and authorities who make up our world."172 What would sola scriptura mean, then, as a philosophical principle? Would it mean, perhaps, to be always willing to reinterpret the text of experience, to read-and so think-better, and again? Foucault asked, or said, it best: | 14,295.4 | 2019-01-01T00:00:00.000 | [
"Philosophy"
] |
Four lepton production in gluon fusion: off-shell Higgs effects in NLO QCD
We consider the production of four charged leptons in hadron collisions and compute the next-to-leading order (NLO) QCD corrections to the loop-induced gluon fusion contribution by consistently accounting for the Higgs boson signal, its corresponding background and their interference. The contribution from heavy-quark loops is exactly included in the calculation except for the two-loop $gg\to ZZ\to 4\ell$ continuum diagrams, for which the unknown heavy-quark effects are approximated through a reweighting procedure. Our calculation is combined with the next-to-next-to-leading order QCD and NLO electroweak corrections to the $q\bar{q}\to4\ell$ process, including all partonic channels and consistently accounting for spin correlations and off-shell effects. The computation is implemented in the MATRIX framework and allows us to separately study the Higgs boson signal, the background and the interference contributions, whose knowledge can be used to constrain the Higgs boson width through off-shell measurements. Our state-of-the-art predictions for the invariant-mass distribution of the four leptons are in good agreement with recent ATLAS data.
A (1-loop) gg,bkg = g the gg → H → 4 signal cross section, the four-lepton continuum background as well as their interference, which are the relevant theoretical ingredients to constrain Γ H . We also provide state-of-the-art numerical predictions by combining nNNLO QCD and NLO EW corrections, and we compare them with recent ATLAS data at 13 TeV [56].
We consider the four-lepton production process fusion contribution is crucial for precision physics of four-lepton production. Note, however, that the quark annihilation and loop-induced gluon fusion processes cannot be treated as being completely independent. Indeed, they mix already at NNLO in QCD, and Figure 3 illustrates an example. Such contributions have to be included, and the interference renders the distinction between the two production mechanisms cumbersome. As in Refs. [41,55] we obtain a partial N 3 LO result, labelled as nNNLO in the following, by combining the NNLO QCD predictions with NLO QCD corrections to the loop-induced gluon fusion contribution, including all partonic channels, but considering only diagrams with purely fermionic loops. Any other N 3 LO contributions cannot be included consistently at present, and thus they are not considered in our calculation. Nevertheless, those contributions can be expected to be sub-dominant with respect to the corrections we include at nNNLO.
In this Letter, take a few decisive steps in order to advance our calculation in Ref. [41]. In particular, we improve the description of the Higgs signal and signal-background interference by evaluating exactly all the contributions that do not depend on the continuum gg → + − + − two-loop helicity amplitude (see below). We perform the calculation by explicitly separating the Higgs boson signal, the four-lepton continuum background, and their interference. Those are the underlying theoretical ingredients the experimental analyses require in order to constrain the Higgs boson width. Finally, as done for W + W − production in Ref. [55] we supplement our nNNLO predictions with NLO corrections in the EW coupling expansion, using the implementation presented in Ref. [50].
Our calculation includes the complete dependence on heavy-quark masses in all contributions, but in the two-loop helicity amplitudes of qq → + − + − and gg → + − + − , where they are unknown 1 . For the quark annihilation process the contribution of closed fermion loops is relatively small, and heavy-quark effects can be safely neglected at two-loop level. By contrast, such effects are important for the loop-induced gluon fusion process, where they enter effectively at LO, i.e. O(α 2 S ), through one-loop diagrams, see Figure 2 (a) and (b). While at one-loop level the full mass dependence is known and included throughout our calculation, at two-loop level, see Figure 2 (c) and (d), the heavy-quark effects for the continuum amplitude in Figure 2 (c) have not yet been computed. As it is well known, the impact of heavy-quark loops is particularly relevant for the Higgs signal-background interference, since in the high-mass region the off-shell Higgs boson decays to longitudinally polarised Z bosons, which in turn have a stronger coupling to heavy quarks. Using an appropriate reweighting procedure the missing heavy-quark contributions can be approximated. In the following, we discuss in detail the approach used in Ref. [55] and the improvement pursued here. To this end, we define the finite part (in the hard scheme [59]) of the one-loop and two-loop amplitudes A separately for the four-lepton continuum background and the Higgs-mediated contribution, as indicated in Figure 2 with one sample diagram for each amplitude. The α S expansion of the full gg → + − + − amplitude and its square can then be written as In fact, each of the amplitudes in Eq. (1) is known with its full heavy-quark mass dependence, except for A (2-loop) gg,bkg , which is known only for massless quark loops. In Ref. [55] we have computed the entire α 3 S contribution to the squared amplitude in the massless approximation, reweighted with the full mass dependence at one-loop, is known including the full heavy-quark mass effects [60][61][62][63]. In particular, in the new implementation we use the explicit expression of Ref. [64] and combine it with the gg → H → + − + − one-loop amplitude, taking care of the correct complex phases in the amplitude definition, to obtain the full result for A (2-loop) gg,H . In a second step, we apply a judicious reweighting procedure, using the full one-loop amplitudes to approximate the mass effects in all contributions interfered with A Note that this reweighting procedure is implemented at the level of the squared/interfered amplitudes, since this amounts to simply multiplying complex numbers, rather than at the amplitude level before summing over helicities, which would be more involved. However, Eq. (4) effectively corrects only for the missing quark-mass effects in A (2-loop) gg,bkg . It is clear that with this approximation we obtain a much better treatment of the heavy-quark effects, especially of the Higgs contributions, than using Eq. (3). In fact, with the new implementation the Higgs signal does not include any approximation. One part of the interference contribution is complete as well, while the other part includes the mass effects of the one-loop correction. Only the background contribution is treated essentially in the same approximation as in Ref. [55]. However, given that also the NNLO qq cross section is part of the background to the Higgs signal, this approximation is subleading. In particular, the Higgs interference contribution in our approach is a new result including all contributions known to date, which will be useful for constraining the Higgs width.
Our calculation is performed within the Matrix framework [51]. With Matrix, NNLO QCD predictions can be obtained for various colour-singlet processes at hadron colliders [43,45,46,[65][66][67][68][69][70][71]. 2 The core of Matrix is the Monte Carlo program Munich [80] which contains a fully automated implementation of the dipole subtraction method [81,82] and an efficient phase space integration. NLO corrections can be obtained using either dipole subtraction or q T subtraction [83], which provides a self-consistency check for our results. All tree-level and and one-loop amplitudes can be evaluated with either OpenLoops 2 [52][53][54] or Recola 2 [84,85], and the corresponding numerical results are in full agreement. In case of OpenLoops, we use dedicated squared amplitudes to separate the Higgs signal, background and interference contributions in the gluon fusion channel. In case of Recola, we exploit the SM FERM YUK model to select the order of the top and bottom Yukawa couplings. With this model, our Recola 2 implementation allows us to separate the Higgs signal and background at the level of helicity amplitudes. Also in the calculation of the two-loop corrections, following Eq. (1) and using the approximation in Eq. (4), we exploit . For the two-loop amplitudes, we exploit the calculation of the massless helicity amplitudes of Ref. [40] that are implemented in VVamp [86] to obtain A (2-loop,massless) gg,bkg , and we apply the two-loop Higgs form factor including the full heavy-quark mass effects of Ref. [64] to the one-loop helicity amplitude A . To obtain the NNLO corrections to the quark-initiated process we exploit the general implementation of the q T subtraction formalism [83] within Matrix and rely on the two-loop qq → 4 helicity amplitudes of Ref. [49] that are also provided by VVamp [86].
Our implementation of NLO QCD corrections to the loop-induced gluon fusion production with separation of Higgs signal, background and interference has been validated by comparing fiducial and differential cross sections to the results of Ref. [37]. Ref. [37] presents an NLO calculation of the Higgs signal, the continuum ZZ background and their interference, considering the e + e − µ + µ − channel. The calculation of Ref. [37] is limited to the gg partonic channel and includes the topquark loops in the two-loop gg → ZZ amplitude through a large-m t expansion. To the purpose of our comparison we exactly reproduce the setup of Ref. [37], except for the treatment of the bottom quarks, which Ref. [37] considers as massless in the background amplitudes and as massive in the Higgs-mediated amplitudes. Matrix, on the other hand, treats bottom quarks as either massless or massive particles throughout the calculation in a consistent manner. At LO we find complete agreement with the results of Ref. [37], and we have independently checked our results with the parton level Monte Carlo program MCFM [18,87,88]. At NLO we are able to reproduce the results of Eq. (4) of Ref. [37] to better than 1% percent. We also find reasonably good agreement with the four-lepton invariant mass distributions reported in Fig. 6 of Ref. [37]. Considering the different treatment of the bottom quarks, and the different approximation used for the top-quark contributions, we regard this agreement fully satisfactory.
We now present predictions for pp → + − + − production at √ s = 13 TeV. The two lepton pairs definition of the fiducial volume for pp → 4 + X muon selection with p T,µ > 5 GeV and |η µ | < 2.7 electron selection with p T,e > 7 GeV and |η e | < 2.47 + − and + − may have the same ( = ) or different ( = ) flavours with , ∈ {e, µ}. We use the selection cuts adopted in the ATLAS analysis of Ref. [56], summarized in Table 1. The three leading leptons must have transverse momenta p T,l 1 , p T,l 2 and p T,l 3 larger than 20, 15, and 10 GeV, respectively. The fourth lepton is required to have p T > 7(5) GeV for electrons (muons). The electron and muon pseudorapidities must fulfil |η e | < 2.47 and |η µ | < 2.7, respectively. For each event, the lepton pair with an invariant mass m 12 closest to the Z boson mass is required to have m 12 in the range 50 GeV < m 12 < 106 GeV. The remaining pair is referred to as the secondary pair, with mass m 34 , and it must fulfil This selection strategy is tailored to preserve a good acceptance for low m 4 values, but to suppress events with leptonic τ decays at higher m 4 . Leptons with different (same) flavours are separated by ∆R > 0.2(0.1). The invariant mass of each same-flavour opposite-sign lepton pair is required to be larger than 5 GeV. Finally, an invariant-mass range of 70 GeV < m 4 < 1200 GeV is imposed on the four-lepton system.
For the electroweak parameters we use the G µ scheme and set α = Table 2: Fiducial cross sections in the phase space volume defined in Ref. [56] and summarized in Table 1 at different perturbative orders. Statistical uncertainties for (n)NNLO results include the uncertainties due the r cut extrapolation in q T subtraction [51].
We start the presentation of our results in Table 2 with the fiducial cross sections corresponding to the selection cuts in Table 1. We use the following notation: qqNNLO refers to the NNLO result for the qq-initiated process, see Figure 1, without the loop-induced gluon fusion contribution; ggLO and ggNLO refer to the loop-induced gluon fusion contribution, see Figure 2, at O(α 2 S ) and up to O(α 3 S ), respectively; nNNLO is the sum of qqNNLO and ggNLO; nNNLO bkg is the corresponding cross section including only the continuum background without Higgs contributions, whereas all other cross sections include resonant and non-resonant Higgs diagrams, where applicable; nNNLO EW is our best prediction for the fiducial cross section. It is obtained as in Ref. [55] for W W production by including EW corrections (to the qq channel) in a factorised approach [50].
With respect to the NLO cross section, the NNLO corrections in the qq channel amount to +6.3% while the full NNLO corrections amount to +15.1%. 3 Therefore, the loop-induced gluon fusion process contributes 58% of the NNLO correction. This is in line with previous computations [41,43,45,46,51]. The NLO corrections to the loop-induced contribution are huge, increasing ggLO by +81.2%, which is even slightly higher than the +70.8% correction found with the setup considered in Ref. [41], where the Higgs resonance region is excluded from the fiducial volume. Table 1, compared to data from Ref. [56].
This confirms once more that those corrections depend on the fiducial cuts under consideration and cannot be included through global rescaling factors. The nNNLO result is +22.2% higher than the NLO cross section, +6.2% higher than the NNLO cross section, and +2.6% higher than the nNNLO bkg Higgs background prediction. This means that the Higgs boson has a positive contribution of 2.6% at nNNLO. Finally, the EW corrections lead to a reduction of the cross section by −5.6%, which cancels almost exactly the contribution from the ggNLO corrections so that the nNNLO EW prediction is only two permille above the NNLO cross section. However, while such cancellation may occur at the level of the integrated cross section with a given set of fiducial cuts, ggNLO and NLO EW corrections have an effect in different regions of phase space and therefore do not compensate each other in differential distributions.
In Figure 4 we show different predictions for the invariant-mass distribution of the four leptons and compare them against ATLAS data from Ref. [56] (green dots with experimental error bars). In particular, we show qqNNLO (light blue, dotted), NNLO (blue, dashed), nNNLO with (magenta, solid) and without (orange, dash-dotted) Higgs contributions, and finally nNNLO predictions including EW corrections (green, long-dashed). The top panel shows the absolute distributions, while the lower two panels show predictions and data normalised to the nNNLO result. The agreement between theory and data is quite good. Unfortunately, the experimental uncertainties are still too large to clearly resolve the differences between the various theoretical predictions. In particular, despite clear differences of the nNNLO and nNNLO EW predictions at high invariant mass and in the bin below the 2 m Z threshold, both predictions show a similar level of agreement to data, given the rather large experimental uncertainties. Nevertheless, one can make the following two interesting observations: First, in that bin below the 2 m Z threshold, where large QED corrections are indeed expected, the nNNLO EW prediction is in better agreement with the data point. Second, in the tail of the invariant-mass distribution, where EW corrections have a large impact, data actually seem to be quite high and more consistent with the nNNLO result. Although this comparison has to be taken with caution due to the large experimental errors in that region, this (small) ∼ 1.5σ excess over nNNLO EW in the last two bins is an important demonstration of why EW corrections are so crucial: If with decreasing experimental uncertainties one were to consider only the QCD prediction in such phase space region, an excess of the data over the actual SM prediction including EW corrections might go unnoticed.
We also find that in the region around m 4 ∼ 200 GeV the NNLO prediction in the qq channel is almost 20% smaller than the nNNLO result, which shows that also the loop-induced gg channel yields a substantial contribution to the cross section. Indeed, the analysis of Ref. [56] extracts a signal strength of the loop-induced gluon fusion contribution of µ gg = 1.3 ± 0.5. Note that the gg contribution becomes even larger in the bin around m 4 = 125 GeV due to the Higgs resonance.
In that bin qqNNLO and nNNLO bkg predictions are way below data, since they do not include resonant Higgs contributions. Also the NNLO prediction is quite low, since it misses the large NLO corrections to Higgs production. However, one should bear in mind that also the full nNNLO prediction misses the relatively large higher-order corrections to on-shell Higgs production beyond NLO (see Ref. [92] and references therein).
In the bottom panel we increase the resolution of the relative differences to nNNLO for a subset of the QCD predictions. Comparing the NNLO and nNNLO results we see that their uncertainty bands overlap almost everywhere. The largest effect of nNNLO corrections is in the region m 4 ∼ 200 GeV where the difference with NNLO is about 7%. We also notice that the Higgs background prediction departs from the full result as m 4 increases, where it becomes larger. The effect is about +5% in the last m 4 bin. This means that in this region the relative impact of the Higgs contribution is negative and becomes increasingly large, which is caused by the Higgs signal-background interference. In the following, we will investigate in more detail the relative effects when separating Higgs signal, background and interference contributions.
We now continue our presentation of phenomenological results by studying the theoretical ingredients used in Higgs off-shell studies to constrain Γ H at the LHC. The relevant quantity is the ratio of the off-shell to the on-shell Higgs cross section [12,13]. To this end, we report in Table 3 various contributions to the fiducial cross section in the off-shell region with m 4 > 200 GeV (left) and in the Higgs signal region 120 GeV < m 4 <130 GeV (right). Besides the notation already introduced in the discussion of Table 2, we use the abbreviations "sig", "bkg", and "intf" to separate the 4 Higgs signal contribution, the 4 continuum background contribution, and their interference, respectively. We recall that this separation is needed when constraining the Higgs width at the LHC [12,13]. In particular, in the scenario proposed in Ref. [4] the Higgs couplings and width are √ s = 13 TeV m 4 > 200 GeV 120 < m 4 < 130 GeV +70.6% We start our discussion from the region m 4 > 200 GeV. We see that in this region the interference is negative, as expected from unitarity arguments. Therefore, the gluon fusion cross section is smaller than the sum of the signal and background cross sections by about 11% both at LO and at NLO. In particular, the interference is almost twice as large as the signal in absolute value, and its size is about 12% compared to the background, which in turn is only about 17% of the NNLO result in the qq channel. The large cancellations between signal and interference render the separation of the off-shell Higgs cross section from the background difficult. We note that, as argued in early off-shell studies (see e.g. Ref. [93]), the NLO K-factor for the interference is very close to the geometrical average of the K-factors for signal and background. However, we stress that this conclusion is strongly dependent on the fiducial cuts and setup under consideration.
We now continue our discussion of Table 3 with the region 120 GeV < m 4 <130 GeV. As expected, the Higgs signal cross section is by far dominant due to resonant Higgs contributions, being about 60 times larger than the gluon fusion background. The interference is positive, but about two orders of magnitude smaller than the background. It is worth noticing that the size of the NLO corrections for signal, background and interference is relatively similar when the Higgs boson is off-shell, whereas in the region where the Higgs boson can become on-shell the NLO K-factor of the signal contribution is significantly larger than that of background and interference.
In Figure 5 we study the behaviour of the signal, background and interference contributions to the invariant-mass distribution in the loop-induced gluon fusion channel. We use the same invariant-mass range and binning as considered in the ATLAS analysis [56]. In Figure 5 (a) we show the full result at LO (turquoise, long-dashed) and NLO (magenta, solid). For comparison, also the signal (blue, dashed), the background (red, dotted), and the modulus of the interference contribution (purple, dash-dotted) are shown at NLO in the main frame. The separate LO and NLO results for the signal, the background and the interference are presented in Figure 5 (b), (c), and (d), respectively. In the lower panels we study the different behaviour of the NLO K-factors, i.e. the ratios of the NLO to the LO predictions.
For the signal contribution we clearly see the peak at m 4 = 125 GeV from the Higgs resonance, and then the cross section quickly drops and increases again at the 2m Z threshold, remaining roughly constant up about 400 GeV where it starts to decrease again. Above 400 GeV the signal and the interference are of the same order, while for 200 m 4 400 GeV the absolute value of the negative interference contribution is even larger. These features are well known [3,94]: the decrease of the signal cross section due to the off-shell Higgs boson propagator is compensated by the |A| 2 ∼ m 4 4 increase of the decay amplitude, thereby leading to the plateau observed in Figure 5 (b). For the signal the impact of the NLO corrections is about +170% at small invariant masses, and it slowly decreases as m 4 increases, being about +60% in the high-mass region. The background distribution has a broad maximum for m 4 2m Z due to the ZZ resonance, while the impact of NLO corrections is more uniform, ranging from about 100% in the second bin to 60% in the high-m 4 region. The interference is negative and peaked at m 4 ∼ 200 GeV, but it changes sign in the Higgs signal region. In the region m 4 ∼ 200 GeV the NLO corrections to the interference are very large (about +150%), and they are larger than for the signal and the background, decreasing to about 70% at large values of m 4 .
In conclusion, in all cases radiative corrections have the effect of increasing the absolute size of the individual contributions. However, the relative size of the corrections for the individual contributions is quite different, especially at small m 4 values, and the full result is a combination of all of those effects. Only at large invariant masses (m 4 400 GeV) the relative size of the corrections becomes similar for signal, background and interference. It is therefore difficult to make a direct connection between the QCD corrections beyond NLO for the signal, which are known to be relatively large (see Ref. [92] and references therein), and the other contributions, where they are not known. Nevertheless, the NLO corrections in the off-shell region are not that different among the three contributions, and the QCD effects beyond NLO are expected to be significant. Therefore, in order to approximately take higher-order corrections into account, one might be tempted to rescale our NLO result for the off-shell cross section by using the relative impact of the QCD corrections beyond NLO evaluated in the off-shell region for the signal contribution [92]. Needless to say, much care should be taken when following such approach.
In this Letter, we have studied the production of four charged leptons in pp collisions at 13 TeV, and we have computed the NLO QCD corrections to the loop-induced gluon fusion contribution. Our computation consistently accounts for the Higgs boson signal, its corresponding background and their interference. The contribution from heavy-quark loops is exactly included in the calculation except for the two-loop gg → ZZ → 4 diagrams, for which the heavy-quark effects are approximated through a reweighting procedure. Our calculation is combined with the NNLO QCD and NLO EW corrections in the quark-annihilation channel, and it includes all partonic channels, spin correlations and off-shell effects. The computation is implemented in the Matrix framework and allows us to separately study the Higgs boson signal, the background and the interference contributions. Those are the central theoretical ingredients of experimental analyses that place bounds on the total Higgs boson width. In particular, for the background and the interference our calculation constitutes the most advanced prediction. We look forward to applications of this calculation and the corresponding implementation in Matrix to off-shell Higgs boson studies at the LHC and beyond. | 5,824 | 2021-02-16T00:00:00.000 | [
"Physics"
] |
A Phenomenological and Dynamic View of Homology: Homologs as Persistently Reproducible Modules
Homology is a fundamental concept in biology. However, the metaphysical status of homology, especially whether a homolog is a part of an individual or a member of a natural kind, is still a matter of intense debate. The proponents of the individuality view of homology criticize the natural kind view of homology by pointing out that homologs are subject to evolutionary transformation, and natural kinds do not change in the evolutionary process. Conversely, some proponents of the natural kind view of homology argue that a homolog can be construed both as a part of an individual and a member of a natural kind. They adopt the Homeostatic Property Cluster (HPC) theory of natural kinds, and the theory seems to strongly support their construal. Note that this construal implies the acceptance of essentialism. However, looking back on the history of the concept of homology, we should not overlook the fact that the individuality view was proposed to reject the essentialist interpretation of homology. Moreover, the essentialist notions of natural kinds can, in our view, mislead biologists about the phenomena of homology. Consequently, we need a non-essentialist view of homology, which we name the “persistently reproducible module” (PRM) view. This view highlights both the individual-like and kind-like aspects of homologs while stripping down both essentialist and anti-essentialist interpretations of homology. In this article, we articulate the PRM view of homology and explain why it is recommended over the other two views.
Homology, a fundamental concept in biology (Wake 1999;Wagner 2016), provides useful explanations of a broad range of biological phenomena by referring to the historicity of characters (Ereshefsky 2012). However, the concept of homology has been the subject of considerable controversy for a long time (Spemann 1915;Hall 1994Hall , 1999Laubichler 2000;Wagner 2014). Although there can be no doubt that homology is an important concept in biology, the metaphysical status of homology, especially whether a homolog is a part of an individual or a member of a natural kind, is still a matter of intense debate (cf. Assis and Brigandt 2009;Ereshefsky 2009Ereshefsky , 2010bWagner 2014). 1 In particular, the rise of evolutionary developmental biology (EvoDevo) in the past couple of decades has fueled debate over the metaphysical status of homology (see "The Individuality View and the HPC View of Homology" section).
In the following sections, we review the debate between the individuality view and the natural kind view of homology in detail. In the individuality view, homologs are regarded as parts of an individual rather than members of a kind in the metaphysical sense (Ereshefsky 2009, p. 228). In the natural kind view, on the other hand, homologs are regarded as members of a natural kind, an abstract class in the natural world with common essential properties (see the "Homologs as PRMs" section in detail). First, let us identify the point of disagreement. The proponents of the Abstract Homology is a fundamental concept in biology. However, the metaphysical status of homology, especially whether a homolog is a part of an individual or a member of a natural kind, is still a matter of intense debate. The proponents of the individuality view of homology criticize the natural kind view of homology by pointing out that homologs are subject to evolutionary transformation, and natural kinds do not change in the evolutionary process. Conversely, some proponents of the natural kind view of homology argue that a homolog can be construed both as a part of an individual and a member of a natural kind. They adopt the Homeostatic Property Cluster (HPC) theory of natural kinds, and the theory seems to strongly support their construal. Note that this construal implies the acceptance of essentialism. However, looking back on the history of the concept of homology, we should not overlook the fact that the individuality view was proposed to reject the essentialist interpretation of homology. Moreover, the essentialist notions of natural kinds can, in our view, mislead biologists about the phenomena of homology. Consequently, we need a non-essentialist view of homology, which we name the "persistently reproducible module" (PRM) view. This view highlights both the individual-like and kind-like aspects of homologs while stripping down both essentialist and antiessentialist interpretations of homology. In this article, we articulate the PRM view of homology and explain why it is recommended over the other two views.
1 3 individuality view criticize the natural kind view by pointing out that homologs are subject to evolutionary transformation, and natural kinds do not change in the evolutionary process (e.g., Grant and Kluge 2004). Conversely, some proponents of the natural kind view of homology argue that a homolog can be construed as both a part of an individual and a member of a natural kind (e.g., Assis and Brigandt 2009;Brigandt 2009). Strictly speaking, they do not maintain that the individuality view is entirely wrong. Instead, they emphasize the merits of the pluralistic construal of homologs, which, in their view, will lead biologists to the recognition of novel problems (explananda).
The proponents of the natural kind view adopt the Homeostatic Property Cluster (HPC) theory of natural kinds (cf. Boyd 1999;Wilson 1999). The HPC theory seems to strongly support their construal of a homolog as both a part of an individual and a member of a natural kind. This is because the theory defines a natural kind by using a cluster of properties, which includes historical properties and the very properties that characterize individuals. Here, we are concerned with the validity of this version of the natural kind view of homology, which we call the HPC view of homology below.
When we examine the validity of the natural kind view of homology in general, and the HPC view of homology in particular, we must note that the view includes an essentialist interpretation of homology. If we consider the essentialist claim, the construal of a homolog as both a member of a natural kind and a part of an individual does not make sense. Looking back on the history of the concept of homology, we observe that the individuality view was proposed to reject the essentialist interpretation of homology. The proponents of the natural kind view of homology will reply that they have updated essentialism and that the new kinds of essentialism can fit the individual-like aspects of homologs well. That reply is logically possible but practically futile as the essentialist notions of natural kinds can, in our view, mislead biologists about the phenomena of homology. Alternatively, the individuality view of homology is quite unsatisfactory because it tends to ignore "serial homology." This is an important aspect of homology, and the concept provides useful explanations in evolutionary developmental biology. Hence, we reject both the essentialist natural kind view and the anti-essentialist individuality view of homology. Instead, we advocate a non-essentialist view of homology, which we name the "persistently reproducible module" (PRM) view. This view highlights both the individual-like and kind-like aspects of homologs while stripping down both the essentialist and anti-essentialist interpretations of homology. In a sense, it mediates between the individuality and natural kind views of homology. This article articulates the PRM view of homology and explains why it is better than the other two views.
In the next section, we briefly summarize the history of the concept of homology before the EvoDevo era. In the section following, we review the individuality and natural kind views of homology (especially, the HPC view of homology). Then, in the "Homologs as PRMs" section, we articulate the PRM view of homology and explain its advantage over the two existing views. Finally, in the last section, we briefly explore the possible uses of the PRM view outside biology.
A Brief History of the Concept of Homology Before the EvoDevo Era
Although it was not called "homology" at the time, the concept of homology can be traced to Aristotle. In History of Animals, he distinguished three types of sameness related to biological characters (Aristotle 1965). The first one is specific identity; this type, which would be exemplified by two men with identical noses and eyes, is manifest when two kinds of living thing are specifically identical as a whole (i.e., they belong to the same "species" or eidos). The second type is identity with a difference with respect to excess and deficiency. This type of identity is found between two species belonging to the same group (genus or genos). In such cases, parts of living things in the same group differ in terms of their secondary characteristics, such as color, shape, and size. The third is pseudo-identity, which is identity by "analogy" or superficial similarity.
In the sixteenth century, Pierre Belon created a famous illustration of homology, providing a comparison of the skeletons of a bird and a man that shows the correspondence of bones (Belon 1555). In the early nineteenth century, comparative anatomists, such as Georges Cuvier and Étienne Geoffroy Saint-Hilaire, analyzed the corresponding structures and organs found in different species in great detail (Russel 1916). Their reports regarded "homology" (it was not yet called that) as the sameness or correspondence of biological characters in different species.
The biologist who first defined homology in a more or less modern way is Richard Owen. In 1843, he clearly distinguished homology from analogy (Owen 1843). 2 According to him, homology is not sameness of functions but sameness of characters (organs and structures). He defined a homolog as "the same organ in different animals under every variety of form and function" (1843, p. 379). On the other hand, he defined an analog as "a part or organ in one animal which has the same function as another part or organ in a different animal" (1843, p. 374) In other words, a character is homologous with another because of what it is and analogous with another because of what it does (cf. de Beer 1971).
Based on this distinction, the paired fins of fish and tetrapod limbs are homologs, whereas the wings of flies, birds, and bats are analogs because they perform the same function (i.e., flying). We now know that these lineages evolved their flying abilities independently of one another and that the sameness of functions is due to the convergent evolution of wings. However, we should note that the wings of birds and bats are homologs as vertebrate limbs (birds and bats share the identical vertebrate limb organization derived from their common ancestor). The important point here is the distinction between character identity and character state (cf. Wagner 2007). The vertebrate limbs are homologs and share their character identities, but the character states of the vertebrate limbs are diverse; bird forelimbs possess feathers, whereas bats have parachutes. These diverse structures evolved independently from each other but perform the same function (i.e., flying) in somewhat different ways.
Furthermore, Owen subdivided the homology concept into special homology and general homology (Owen 1848, pp. 7-8). He defined special homology as the correspondence of parts (or organs) in different animals, and he defined general homology as the higher relationship between a part or a series of parts and the fundamental or general type to which it belongs. In particular, the term serial homology is used for a series of general homologs. Today, the use of the term "general homology" is rare, and the term "serial homology" is generally preferred.
A representative example of serial homology (or general homology sensu Owen) is that of the tetrapod forelimb and hindlimb. These parts seem to have a general type of osteological structure (Fig. 1). Proximally, there is only one bone; it is the humerus in the forelimb, the femur in the hindlimb, and it is generally called the "stylopod" in the tetrapod limb. Medially, there are two bones; they are the ulna and radius in the forelimb, the tibia and fibula in the hindlimb, and they are generally called the "zeugopod." The most distal region is generally called the "autopod"; it is the wrist and fingers in the forelimb and the ankle and toes in the hindlimb (Goodrich 1930, p. 159;Wagner 2014, p. 335). There are many other examples of serial homology. One is the segments of arthropods and insects (Snodgrass 1935). Each segment of these animals is thought to have evolved from serial uniform segments, such as those of millipedes (1935, p. 40). Gill slits in vertebrates (Kuratani et al. 2001) and leaves and flowers in plants (Wagner 2014, Chap. 12) are also well-known examples of serial homologs.
Owen's homology concepts are well known to have been based on essentialism. He thought that homologs share the "essential nature" of animal body parts (Owen 1849, p. 70). On the other hand, Darwin (1859) and his followers considered homology not as the identity with a hypothetical "archetype" but as the signature of common ancestry from the viewpoint of the Darwinian theory of evolution. According to the latter perspective, homologs provide evidence of the affinity between organisms that have evolved from a common ancestor. In particular, Lankester (1870) strongly criticized the essentialist view of homology and pointed out that "no genetic (i.e., phylogenetic or evolutionary) identity can be established between fore and hind limbs" (p. 38), but "the fore legs have a homoplastic agreement with the hind legs" (p. 39). Here, the term "homoplasy" means plastic or ostensible similarity between parts or organs. He also introduced the new term "homogeny" in place of homology to avoid the essentialist connotations of the word "homology," although this term failed to become popular.
The situation regarding homology is remarkably similar to that of the concept of species. In the second half of the 20th century, it was widely accepted in the fields of biology Similarity between the tetrapod forelimb and hindlimb has been regarded as a representative example of serial homology, that is, these parts appear to have a general type of osteological structure; the stylopod (the humerus in the forelimb and the femur in the hindlimb), the zeugopod (the ulna and radius in the forelimb, and the tibia and fibula in the hindlimb), and the autopod (the wrist and fingers in the forelimb, and the ankle and toes in the hindlimb) and philosophy of biology that "the death of essentialism" had occurred with respect to the species problem owing to the Darwinian theory of evolution and phylogenetic systematics (Hull 1965a, b;Ereshefsky 2010a). Instead, the individuality thesis of species became influential (Ghiselin 1974;Hull 1978). The proponents of this thesis argue that species are not natural kinds with an essential nature but are individuals in a metaphysical sense; they emphasize that particular species are defined not by their "essence" but by their history. In the same manner, they criticize the essentialist view of homology and advance the individuality view of homology, which we examine in the next section.
The Individuality View and the HPC View of Homology
According to the individuality view of homology, this concept is defined as a relationship of correspondence between parts of individual organisms (Ghiselin 2005, p. 97), which are representative individuals in the metaphysical sense. The proponents of the individuality thesis regard homologs as "parts of an individual rather than members of a kind" (Ereshefsky 2009, p. 228).
But, what are individuals in the metaphysical sense? What kinds of entity are they? Let us examine the differences between individuals and natural kinds. According to Ghiselin (1997Ghiselin ( , 2005, individuals (1) are concrete rather than abstract, (2) engage in process, (3) have no defining properties (i.e., essential properties), (4) have no instances, (5) are spatiotemporally restricted, and (6) do not function in laws. On the other hand, natural kinds (1′) are abstract rather than concrete, (2′) do not engage in process, (3′) have defining properties (i.e., essential properties), (4′) have instances, (5′) are not spatiotemporally restricted, and (6′) function in laws. Thus, there is a sharp contrast between individuals and natural kinds.
"Homology statements are strictly historical propositions," Ghiselin (2005, p. 95) emphasized; "they are not laws of nature and they lack the necessity that characterizes laws of nature." Contrary to essentialism, there is nothing like the essential nature of animal body parts that every homolog of animal body parts shares. As Ereshefsky (2009) stresses, homology relationships depend on phylogeny. "…[H]omologs must be historically connected and cannot be spatiotemporally scattered across the universe" (2009, p. 228). Thus, homologs are regarded not as natural kinds but as individuals.
The individuality view seems to fit the Darwinian theory of evolution and phylogenetic systematics well. However, this view tends to ignore serial homology (or "iterative homology"), although the concept of serial homology provides useful explanations in evolutionary developmental biology (de Beer 1971;Roth 1984;Wagner 1989). 3 In the individuality view, as Lankester (1870) asserted, serial homology is generally explained away as homoplasy (plastic or ostensible similarity between parts or organs).
On the other hand, several contemporary authors (e.g., Rieppel 2005, pp. 25-26;Brigandt 2009, p. 78) embrace the HPC view of homology and argue that homologs are HPC natural kinds. The HPC view is a form of new essentialism, which does not define natural kinds using necessary and sufficient conditions. According to Brigandt (2009), an HPC natural kind has a cluster of properties that permits variation, and there are homeostatic mechanisms that determine the identity of the kind. Here, "homeostasis" means maintenance of the clustering of various properties by underlying causal mechanisms.
It seems that the emergence of the HPC view of homology over the individuality view was accompanied by the rise of EvoDevo (Brigandt 2007(Brigandt , 2009). Many theoretical notions of homology have been proposed in recent decades (e.g., Van Valen 1982 as a precursor; Roth 1984Roth , 1991Wagner 1989;Abouheif 1997;Shubin et al. 1997Shubin et al. , 2009Müller 2003Müller , 2010Ochoa and Rasskin-Gutman 2015). These notions basically focus much more on the developmental mechanisms of homologs and criticize "the historical concept of homology" (Wagner 1989;Laubichler 2000), which focuses exclusively on phylogenetic continuity and has a high affinity with the individuality view of homology. Some of these new proposals actually favor the idea of homologs as natural kinds (Wagner 1996(Wagner , 2014Rieppel 2005).
The HPC view is different from traditional essentialism, which holds that every member of a natural kind has the same characteristic, essential properties (cf. Boyd 1999). The HPC view does not require essential properties to be intrinsic or necessary and sufficient for kind membership. Despite the difference, the HPC view is thought to be a kind of essentialism because HPC natural kinds perform the predictive and explanatory roles of traditional essentialist kinds (cf. Wilson et al. 2007;Brigandt 2009).
One critical issue in the HPC view is that the distinctions between individuals and kinds and between natural and functional kinds (hence, the distinction between homology and analogy) becomes vague (Brigandt 2009, p. 77;Wagner 2014, p. 239). For example, proponents of the individuality thesis focus on the historicity of homologs. However, this historicity is easily absorbed in the homeostatic property cluster (not as an intrinsic property but as an extrinsic property) by the HPC view, although the historicity concept has traditionally been connected to the individual concept and disconnected from the kind concept and essentialism. Moreover, the HPC view seems to take scant account of the distinction between natural and functional kinds. The fact that there is no clear-cut distinction between these kinds in general does not rule out that they are two metaphysically distinguishable entities. Proponents of the HPC view attach so much importance to this ambiguity that they tend to conflate different explanatory and classificatory practices in science (cf. Ereshefsky 2009, p. 228).
The same kind of criticism can be applied to another new form of essentialism. According to relational (or historical) essentialism, the essential properties of natural kinds are relational (or historical), and this presumably stands in contrast to their place in traditional essentialism (Griffiths 1999). The relational (or historical) properties are extrinsic ones because (historical) relationships are not intrinsic to members of natural kinds.
In light of relational essentialism, not only species but also individual organisms are natural kinds defined by relations between the individual organism and its parental organisms. Although the idea of "historical essence" might, at first glance, seem to restore essentialism, it spoils the important and evident distinction between two classes of metaphysically distinguishable entities, which have been called individuals and kinds, respectively, in traditional metaphysics. Ereshefsky (2010b) points out that parts of an individual must have certain causal relationships with one another, whereas no such causal requirement is placed on members of a kind. He expresses this idea humorously; "… the tail and the nose of a dog cannot be on different planets and be parts of a single dog: those parts must be causally connected in certain ways" (Ereshefsky 2009, p. 228). On the other hand, members of the paradigmatic kind, such as the element gold, need not be causally connected in any way. At first glance, the new kinds of essentialism seem to fit with scientific practices. However, they actually underestimate the metaphysical diversity of the world. As a result, they lead us into conceptual confusion and provide almost no pragmatic conceptual frameworks for scientific investigation.
As discussed above, the ontology and epistemology of biological phenomena, such as taxa and homologs, are still sources of great controversy (Brigandt 2009;Ereshefsky 2010b). There is a need for a new conceptual framework that is geared to the dynamic aspects of homology and free from conceptual confusion. This situation prompted us to seek an alternative view of homology that can deal with different explanatory and classificatory practices in modern biology better than the individuality view and the HPC view of homology. In the next section, we attempt to provide such an alternative view of homology.
Homologs as PRMs
In this section, we introduce an alternative view of homology. The distinguishing feature is that it is free from both essentialism and anti-essentialism. It is a non-essentialist view of homology. One may be puzzled by this view because it seemingly recommends that one eschew metaphysical investigation of the nature of homology. To elucidate our motivation for a non-essentialist view of homology, we want to cite a similar situation in the context of the scientific realism debate.
In 1984, Arthur Fine proposed the "natural ontological attitude" as an alternative position to scientific realism and antirealism. Examining the arguments of the realist and antirealist, Fine found that "both the realist and the antirealist accept the results of scientific investigation as 'true,' on par with more homely truths" (Fine 1984, p. 96;italics added). He calls this acceptance of scientific truths as the "core position" and named it the "natural ontological attitude (NOA)." The NOA is "the core position itself, and all by itself" (1984, p. 97; emphasis in original). It is neither realist nor antirealist in itself: it mediates between the two. By contrast, each realist and each antirealist makes additions to the core position. It is the additions that make each position realist and antirealist and cause them to confront each other. What then are the additions each realist and antirealist makes to the core position? Regarding antirealists, it depends on their specific position. Some antirealists (pragmatists, instrumentalists, or conventionalists) may add to the core position a particular analysis of the concept of truth. Others (idealists, constructivists, phenomenalists, or others) may add a special analysis of concepts or certain methodological strictures. In comparison, realists just add "a desk-thumping, foot-stamping shout of 'Really!'" (1984). This realist emphasis means to deny the additions that the antirealists make to the core position. Additionally, the realists also want to explain the robust sense of "reality," which they assume. Fine (1984) found that these additions made by each realist and antirealist to the core position were useless and misleading and recommended the core position itself as a third alternative for an adequate philosophical stance toward science.
We do not need to go deep into the scientific realism controversy and argue for NOA here. However, we think that following Fine's suggestion would lead us to a third alternative to the essentialist natural kind view and the antiessentialist individuality view of homology. There seems to be a phenomenon of homology that both the essentialist and the anti-essentialist accept as "true." In other words, there seems to be a point of agreement between the essentialist and anti-essentialist concerning the phenomena of homology. Let us call this the "core position" of homology. Now, we need to clarify the core position that both the essentialist and anti-essentialist would accept. Following Fine's lead, we should not make any additions to the core position, because they cause useless metaphysical inflation (essentialist or anti-essentialist interpretations).
First, we consider that the basic feature of the phenomenon of homology is the repetitive generation of homologs (typically, parts of an individual organism, such as limbs or organs) (Fig. 2). This phenomenon is not limited to the evolutionary process (phylogeny), but is also observed in the developmental process. As shown in Fig. 2, homologs are generated repeatedly in each generation via the evolutionary process, as well as in regeneration via the developmental process. Second, we consider the fact that the phenomenon of homology is autonomous. 4 Of course, in the evolutionary process, evolutionary lineages maintain their genetic continuity by the inheritance of genetic information. However, homologs are themselves formed and perish in each generation, and therefore have no genetic continuity (with the exception of asexual reproduction, such as budding). In the developmental process of an individual organism, the same parts are often discontinuous upon regeneration (Fig. 2). There is no continuity between the former part and the newly regenerated one. However, homologs are repeatedly generated in each regeneration via the developmental process. The automaticity of the phenomenon of homology can be partly captured by the concept of modularity. According to Schlosser (2004), modules are integrated, quasi-independent, and autonomous subprocesses. 5 Using the concept of modularity, we can characterize homologs as modular structures distinguishable from other subprocesses in both evolutionary and developmental processes. Focusing on repetitive generation, automaticity, and modularity, we can outline the core position: homologs are persistently reproducible modules in evolutionary and developmental processes. Here, we refer to this as the PRM view of homology. Next, we want to articulate the applicability of the PRM view to various evolutionary and developmental processes.
First, the PRM view can be applied to cladogenesis (Fig. 3). When we observe homologs in two lineages and the lineages are related phylogenetically, the homologs are considered to have been PRMs in the parental lineage. When the parental lineage splits into two (or more) daughter lineages, i.e., a cladogenesis occurs, the PRMs in the parental lineages also split into two PRM lineages. Thus, the characters in the daughter lineages are homologous Fig. 2 The phenomenology of homologs. In reproduction, which is in the evolutionary process (above), homologs form and perish in each generation, although evolutionary lineages maintain their genetic continuity by the inheriting of genetic information. In the developmental process (below), homologs are often discontinuous upon regeneration. In both cases, homologs lack continuity and are repetitively generated as autonomous modules because they can be traced back to that character in the parental lineage.
It is worth noting that the PRM view is also applicable to serial homology. For example, in the evolution of the vertebrate paired appendages, the pectoral appendage first appeared and the pelvic appendage subsequently evolved (Young 2010). The co-option of a developmental mechanism (originating from that of the midline fin) seems to have participated in this process (Shubin et al. 1997;Feritas et al. 2006;Shimeld and Donoghue 2012) (Fig. 4a). Based on the PRM view, serial homology can be treated as similar to the case of cladogenesis mentioned above (Fig. 4b). This Fig. 3 An application of the PRM view in cladogenesis. a The cladogenesis of two daughter lineages from a parental lineage. b The interpretation of homologs in the cladogenesis in the PRM view. When the cladogenesis occurs, PRMs also split into two daughter PRMs Fig. 4 An application of the PRM view in serial homology. a The evolution of vertebrate paired appendages. The pelvic fin is thought to have evolved by co-option of a developmental mechanism to form the pectoral fin. b The interpretation of serial homology in the PRM view. Note that serial homology can be treated as similar to the case of cladogenesis depicted in Fig. 3 suggests a unifying framework for the ontogeny and phylogeny of corresponding characters.
There is actually a clear difference between the evolution of special homology and that of serial homology. This difference is comparable to the difference between the evolution of orthologous genes and that of paralogous genes, as Wagner (2014) pointed out. In the case of the evolution of serial homology, one should pay attention to the level of modules in the developmental hierarchy. In the evolution of vertebrate paired appendages, the developmental mechanisms within fore-and hindlimb buds is highly conserved, although this is less the case for characteristics at later stages, such as muscle structures (Diogo and Ziermann 2015). However, this is not only a specific case in serial homology, but it is also observed in special homology. For example, as mentioned above, the wings of birds and bats are homologous at the limb level, but they possess nonhomologous structures (feathers and a parachute, respectively) for the flying function.
In both cases, homologs as PRMs permit modest generalizations because we can assume some basal mechanisms behind the persistent reproducibility of homologs by comparing homologs between species (in the case of special homology) or organs (in the case of serial homology). However, contrasting the HPC view of homology, the PRM view does not necessarily require basal mechanisms to be essential. In other words, the PRM view denies that there are basal mechanisms that enable robust generalizations. Indeed, developmental mechanisms often diverge over time without accompanying changes in the phenotypic outcomes. This phenomenon is known as developmental system drift (DSD) (True and Haag 2001). In philosophical jargon, this situation is called "multiple realizability"; homologs are multiply realizable at the phenotypic level and can be realized by many distinct developmental mechanisms or cannot be reduced to a single set of developmental mechanisms (cf. Ereshefsky 2012, p. 394). Again, we emphasize the automaticity of the phenomenon of homology: homologs can be generated repetitively even if the basal underlying mechanisms are subject to profound change and variation.
The neurulation process in vertebrates is a notable example of DSD. In amniotes (e.g., Xenopus), inhibition of bone morphogenetic protein (BMP) family signaling molecules is necessary and sufficient to influence neural fates. However, in amniotes (e.g., chicks), inhibition of the BMP pathway causes no obvious defects in neural specification, suggesting that some other factors replace or function redundantly with BMP signaling to specify the neural plate. Thus, the PRM view of homology can adequately explain a dynamic aspect of homology; the phenomenon of the multiple realizability of homologs suggests that homology undergoes dynamic changes during the evolutionary process. 6 We should pay attention to this aspect and be careful not to make excessive generalizations.
Let us show some advantages of the PRM view of homology over the two existing views by examining the color patterns of colored carp (Koi). A variety of colored carp, called Kohaku (red-white), exhibits a red-white color pattern (Axelrod 1988). In this variety, the color pattern of the trunk varies, whereas spots on the head are often observed. There is actually a further modified variety, which shows a red spot only at the top of the head. This variety is called Tancho (red-cap). In this variety, it is possible to identify the head spots as homologs (a shared derived character; i.e., synapomorphy) of the body color pattern in this variety, that is, this evolutionary lineage. For the head spots to be identified as homologs, they must be modules, because homologs are modules in the PRM view. The head spots are regarded as modules when they are observed only at the top of the head, or at least when they are separated from other trunk spots. When the head spots are repeatedly observed in the lineage and recognized as modules, they are PRMs according to the PRM view.
If we regard the head spots of the variety Tancho as PRMs, we can predict that there are genetically fixed developmental mechanisms for the generation of the head spots. Interestingly, we can find similar varieties of goldfish that show a red spot only at the top of the head (Matsui 1972). It would be interesting to investigate the developmental mechanism behind the pattern observed in each variety and to examine the diversity and commonality of the mechanism. Note that this question is more consistent with the PRM view than with the HPC view, because the HPC view would impatiently tend to find "deep homology" between two lineages and asks whether the common basal mechanisms are conserved between colored carp and goldfish (the case for deep homology is discussed in detail below). However, the head spots of the two lineages are actually the results of convergent evolution (Wang and Li 2004;Komiyama et al. 2009). We should avoid prematurely deciding that there are some common developmental mechanisms, such as deep homology, because the existence of such mechanisms depends on species or lineages, and the PRM view can avoid such premature and broad generalizations. In the PRM view, the head spots of the two lineages are regarded as distinct (non-homologous) PRM lineages. Consequently, the PRM view of homology requires a more temperate methodology of evolutionary developmental biology (EvoDevo). The PRM view warns of the risk of assuming "essential" properties behind the phenomena of homology a priori and supports instead the extraordinary diversity of nature.
Notably, the PRM view of homology can incorporate several advantages of the individuality view and the HPC view. In other words, the PRM view can accommodate both the individuality view and the HPC view.
First, the PRM view of homology highlights the important fact that homologs are historical (spatiotemporally restricted) entities engaging in evolutionary or developmental processes. Suppose that a series of modules start to reproduce persistently at some point of time: when this persistent reproduction ends, the series ceases, and the homologs become extinct. In the example of fish coloration, the persistent reproduction of the head spot in colored carp starts independently from that in goldfish varieties. Even if the head spots in both varieties are quite similar, and they may share many properties, they are not homologs because they are the result of convergent evolution in each variety. The head spots in each variety are PRMs with a fate of their own-they engage in the evolutionary processes as something like individuals. Therefore, the PRM view enables more accurate recognition of the phenomenon of homology than the natural kind view.
Second, the PRM view can attribute predictive and explanatory roles to the PRMs in biological investigations. In a sense, the PRMs have somewhat kind-like roles. However, we must draw attention to the difference between natural kinds and PRMs. The natural kind view typically emphasizes the predictive and explanatory roles of "essence," which are assumed to underlie the natural kind (cf. Wilson et al. 2007;Brigandt 2009). By contrast, the predictive and explanatory roles that the PRM view attributes to PRMs in biological investigations are relatively modest ones. As we have already discussed above, the PRM view can warn of the risk of assuming an "essential" spot-forming mechanism behind the head spots of the varieties of both carp and goldfish a priori, although it is scientifically interesting to examine the diversity and commonality of the spot-forming mechanisms between these two lineages.
We noted that the HPC view tends to find deep homology between lineages. The term "deep homology" refers to sharing the same genetic regulatory apparatus that is used to build morphologically and phylogenetically disparate (i.e., non-homologous) characters (Shubin 1997(Shubin , 2009. For example, the Drosophila melanogaster gene Distal-less (Dll) and its mouse homolog Dlx control appendage development in each animal, even though these appendages are not homologous (i.e., the appendages of insects and vertebrates evolved independently in these two lineages). However, they are homologous at the "deeper" GRN level.
The deep homology concept has strong affinity to the HPC view because the "deeper" GRN can be regarded as a basal mechanism leading to homeostasis. Recall that basal mechanisms underlying homeostatic properties play an essential role in the HPC view. In contrast, the PRM view focuses on the phenomenological level rather than the basal-mechanism level of homology. As the appendages of insects and vertebrates evolved independently in these two lineages, they are not homologous at the phenomenological level, even if at first glance the shared GRN suggests homology at the deeper level. If there is a basal mechanism for vertebrate limb development as a somewhat conserved GRN and if this GRN is also conserved in Drosophila as deep homology, what is the difference between the basal mechanisms for vertebrate appendages (which are, in fact, homologous as appendages) and those for Drosophila and vertebrates appendages (which are not homologous as appendages)? They are hardly distinguishable! As such, there should be no clear boundary between homology and deep homology in the HPC view.
Atavisms are another example that sheds light on the PRM view. For example, some sperm whales have been reported to have visible hind legs (Berzin 1972). In this case, an interesting issue is why and how the persistent reproducibility of hind legs was once lost in ancestral whales but has reappeared in the current lineage. In fact, all whale embryos possess limb buds at some period of development, but these generally disappear before cartilage formation (Hall 1984). Against this background, it can be considered that the developmental modules of hind legs retain persistent reproducibility at least at the limb bud level, even though they remain lost at the mature hind leg level. Thus, the PRM concept can be applied to various levels of biological processes; that is, not only mature phenotypes but also developmental modules are candidates for PRMs.
In summary, taking the phenomenology of homology seriously, we regard homologs as PRMs in both evolutionary and developmental processes. PRMs are not only restricted spatiotemporally but can also be used to make modest biological generalizations. Based on these generalizations, the PRM view can play predictive and explanatory roles in scientific investigations. Furthermore, this view can accommodate the fact that homologs are subject to dynamic evolutionary and developmental changes. Why is the PRM view preferable to the individuality and natural kind views? It is because the PRM view is the "core position" to which the proponents of the other two views can admit, and it makes no additions that cause useless metaphysical inflation (i.e., essentialist or anti-essentialist interpretations).
The Scope and Perspective of the PRM View
It is worth noting that the PRM view can be applied to various phenomena outside biology. In this section, we discuss the intriguing applicability of PRMs to diverse natural phenomena.
First, the PRM view can also be applied to species; species are groups of persistently reproducible modules, with these modules being what we usually call individual organisms. Individual organisms themselves are PRMs because they are persistently reproducible, somewhat integrated, quasi-independent, and autonomous subprocesses in evolutionary processes (which are usually called species, evolutionary lineages, or populations).
One may notice that the PRM view has the potential to be applied to other kinds of natural phenomena, including behavioral and psychological phenomena. In fact, some authors have attempted to apply the concept of homology to these phenomena (Lorenz , 1973Love 2007;Hall 2013;Brown 2014). 7 For example, some courtship behaviors or emotions can be regarded as homologs. The PRM view seems to be applicable to these phenomena because behavioral and psychological phenomena have modular structures and are persistently reproducible in evolutionary and developmental processes. As for courtship behavior, it is persistently reproduced in the evolutionary process (it should be conserved in the species) and during the life cycle of individuals (an individual may show such behavior many times over the course of its lifetime).
Thus, the PRM view can provide a new viewpoint for understanding the metaphysically diverse natural world and an adequate conceptual framework for scientific investigations. However, when applying this view to appropriate phenomena, careful examination is needed in terms of what modules are and how (much) they are persistently reproduced. Through this examination, the application of the PRM view to various research fields would set a new research agenda and provide a useful perspective.
In this article, we propose a new view of homology, the PRM view, to provide a non-essentialist standpoint as the "core position" in Fine's (1984) sense, stripping down both the essentialist and anti-essentialist interpretations of homology. Actually this view has affinity with other recent homology concepts that have been proposed from the developmental perspective of homology, 8 indicating the adequacy of this view for biologists' daily use. By emphasizing the basic features of the phenomenon of homology, this view regards homologs as PRMs in evolutionary and developmental processes. PRMs are not only spatiotemporally restricted but can also be used to make biological generalizations. Based on these generalizations, the PRM view can perform predictive and explanatory roles in scientific investigations. Moreover, this view can accommodate the fact that homologs can change dynamically in evolutionary and developmental processes. It can also be applied to various domains outside biology, such as those involving behavioral and psychological phenomena. | 9,609.6 | 2017-05-22T00:00:00.000 | [
"Biology",
"Philosophy"
] |
Follicular Histomorphometry and Evaluation of Ovarian Apoptosis in Queens of Different Age Groups
Background: In humans and bitchs the age is another factor that may affect the size of ovarian structures, verifying alterations in the quality of the pool and size of follicular structures, which can compromise the use of these structures for in vitro maturation. There are no reports correlating the morphometric characteristics of the follicles and ovarian apoptosis at different ages in cats. The aim of this study was to evaluate the histomorphometric parameters of follicular growth and the relationship with the occurrence of apoptosis in ovarian tissue of young, adult and senile queens. Materials, Methods & Results: Eighteen domestic queens, multiparous, of different breeds and age groups were used in this study and divided into three groups according to their ages: five months to one year young; (7.8 ± 1.0 months); one to six years adults (2.8 ± 0.5 years); and more than six years senile (8.0 ± 0.9 years). Vaginal cytology was performed in order to characterize the estrous phase associated with plasma concentrations of progesterone. The morphology and percentage of the vaginal epithelium cells were evaluated and queens were classified into estrous and non-estrus and plasma concentrations of progesterone were determined. Ovarian samples were collected after ovariohysterectomy to routine histological processin and all follicles were counted and categorized into two groups, non-atresic and atresic. The mean follicular and oocyte diameters were calculated between the measurement of the largest diameter and perpendicular diameter. The relationship between follicle and oocyte were determined using the measurements of diameter, area and perimeter. The apoptotic cells were detected and cells were considered positive when TUNEL reaction was detected. The morphometric index of 1039 follicles were evaluated. Primordial follicles in young animals showed larger diameter, follicular area and perimeter than the structures of adult queens, as well as the unilaminar primary follicles of the same group were larger compared with senile animals (P < 0.05). Comparing adult and younger queens, the first showed a significant decrease of oocyte diameter in primary and unilaminar primary follicles, as well as oocyte area and perimeter (P < 0.05). The values for follicular diameter, oocyte area and perimeter for multilaminar primary, secondary and pre-ovulatory structures did not present statistical differences between the groups (P > 0.05). For the pre-ovulatory follicles there was no positive correlation between the oocyte growths regarding the follicles (P > 0.05). Only in senile animals positive markers for apoptosis were identified in nuclei of primordial follicles. No significant differences concerning the number of follicles and Tunel positive cells were observed between groups (P > 0.05). Discussion: Considering the importance of this study for greater knowledge in the basic aspects for reproductive biotechnologies, we verified that secondary follicles showed the largest diameters and younger animals the largest values for diameter, area and perimeter, suggesting that this age group could be ideal for the use and manipulation of oocytes. The process of follicular atresia is characterized by the occurrence of apoptosis, or programmed cell death when the organism begins to efficiently eliminate dysfunctional cells. The study of follicular apoptosis in small animals, especially in cats, is very important for the development of reproduction biotechnologies. Phenomenon of apoptosis showed no relationship with age in queens, occurring in a physiological, continuous and proportionate manner considering the number of nondominant follicles involved in each estrous cycle.
INTRODUCTION
The domestic cat is an important and necessary experimental model for reproduction biotechnologies.Reliable methods for the in vitro maturation and in vitro fertilization have been developed through the use of oocytes obtained from domesticated cats [18].The study of ovarian folliculogenesis provides better understanding of the reproductive physiology and may be useful for improving the preservation of endangered wild cats [3].
Recent study using prepubertal and sexually mature queens evaluated the ovarian condition and follicle growth in vitro maturation and concluded that size and quality of follicles and oocytes can affect maturational ability [16].According to studies in humans [17] and bitches [5], age is another factor that may affect the size of ovarian structures, in other words, the morphometric characteristics of the follicles are related to the age of individuals, verifying alterations in the quality of the pool and size of follicular structures, which can compromise the use of these structures for in vitro maturation.
The process of follicular atresia in the ovarian tissue is characterized by the occurrence of apoptosis of granulosa or theca cells [4,14].To our knowledge, there are no reports correlating the morphometric characteristics of the follicles and ovarian apoptosis at different ages in cats.
The aim of this study was to evaluate the histomorphometric parameters of follicular growth and the relationship with the occurrence of apoptosis in ovarian tissue of young, adult and senile queens.
MATERIALS AND METHODS
Eighteen domestic queens, multiparous, of different breeds and age groups, with mean body weight of 3 kg were used in this study.None of the animals had any report of reproductive diseases and were considered healthy after physical and clinical examinations.The animals were divided into three groups according to their ages: five months to one year -young; (7.8 ± 1.0 months); one to six years -adults (2.8 ± 0.5 years); and more than six years -senile (8.0 ± 0.9 years).The age was estimated by analyzing the dental arch [6].
Vaginal cytology was performed in order to characterize the estrous phase associated with plasma concentrations of progesterone.
Vaginal smears were obtained using a cotton sterile swab previously moistened with saline solution.The smears were stained with a modified Wright-Giemsa stain (Diff-Quick ® ) 5 and analyzed by light microscopy (Olympus BX61) 6 .The morphology and percentage of the vaginal epithelium cells were evaluated [15] and queens were classified into estrous and non-estrus.
Blood samples (3 mL) were collected by jugular vein puncture.The blood was centrifuged immediately and plasma was stored at -18°C until hormone analysis.Plasma concentrations of progesterone were determined, in duplicate, by solid-phase I125 radioimmunoassay 7 .
In this study only animals classified as in a non-estrus period were enrolled, based on the results of progesterone levels and vaginal cytology evaluations [15].
After ovariohysterectomy the ovaries were fixed in paraformaldehyde solution at 5% (pH of 7.2 -7.4) for 24 h and samples submitted to routine histological processing.Five-micron serial sections were mounted onto plain glass slides and stained with hematoxylin and eosin for light microscope (Olympus BX61) 6 evaluation.All follicles were counted and categorized into two groups, non-atresic and atresic, following the morphological criteria established [2].
The non-atresic follicles were those with intact granulosa membrane and the presence of a few pyknotic nuclei (≤ 5% pyknotic nuclei) and atresic follicles showed attenuated granulosa membrane, rupture of granulosa cells and increased number of pyknotic cells (≥ 15% pyknotic nuclei) according to literature [2].
Only non-atresic follicles were considered and classified [7]: primordial, primary oocytes surrounded by a single layer of follicular cells; unilaminar primary, oocytes surrounded by a single layer of cuboidal fol-licular cells already with the formation of the zona pellucida; primary multilaminar or pre-antral, follicles characterized by the enlargement of the granulosa cells, after the formation of the zona pellucida and theca interna; secondary or antral follicles, enlargement of the structures' sizes, presence of follicular fluid and antrum, theca interna and externa, organization of granulosa cells begins; mature, pre-ovulatory or Graafian follicles, presence of the antrum, organized granulosa cells, theca interna and externa, corona radiata and oophorus cumulus.
For the morphometric study, histological sections were observed and photomicrographed by light microscopy (Olympus BX61) 6 using the Image J 1.45 software.For each follicular stage, ten follicles were examined (or what was available in two histological sections).The mean follicular and oocyte diameters were calculated between the measurement of the largest diameter (maximum diameter) and perpendicular diameter (minimum diameter) [Figure 1].
The relationship between follicle and oocyte were determined using the measurements of diameter, area and perimeter.For each follicular stage three linear equations were established, representing the patterns of follicle and oocyte growth, where x = follicular measurement and y = oocyte measurement.
The samples were 10% buffered formalinfixed and processed for further paraffin embedding.The apoptotic cells were detected by a commercial kit for in situ detection of apoptosis (FragEL TM DNA Fragmentation Detection Kit Colorimetric -TdTEnzyme) 8 , Sections of 5 μm mounted on sig-nalized glass microscope slides were deparaffinized and washed in TBS (Tris-buffered saline).The sections were treated with Proteinase K for 8 min, and washed again in TBS.After washing, endogenous peroxidase was inactivated by using 9% hydrogen peroxide and then washed in TBS and incubated in equilibration buffer for 20 min at room temperature.Subsequently, the sections were incubated with the TdT enzyme (terminal deoxynucleotidyltranferase) at 37°C for one-half h in a moist chamber.The reaction was interrupted and the sections incubated, with conjugated antibody, in the moist chamber for 1 h.After washing with TBS, the reaction was detected using diaminobenzidine (DAB) for 30 min.The sections were counterstained with methylgreen during 10 min.Images were scanned by Image-ProPlus 7.0 software, and obtained through camera (Q Color 5 Olympus) 6 , attached to an Olympus BX43 microscope 6 .Cells were considered positive when TUNEL reaction was detected.
Data was submitted to analysis of variance for group comparison, with the means being compared by Tukey test.Data was tested for normality and homogeneity of variances and in non-normal distribution the Kruskal-Wallis test was performed for group comparison, followed by Dunn multiple comparison test.The correlation coefficient (r) of determination (r2) and the regression equation were calculated for the selected variables [20].The values were considered significantly different when P < 0.05.Statistical analysis was performed using the program "GraphPad version 3.10".
Primordial follicles in young animals showed larger diameter, follicular area and perimeter than the structures of adult queens, as well as the unilaminar primary follicles of the same group were larger compared with senile animals (P < 0.05).Comparing adult and younger queens, the first showed a significant decrease of oocyte diameter in primary and unilaminar primary follicles, as well as oocyte area and perimeter (P < 0.05).The values for follicular diameter, oocyte area and perimeter for multilaminar primary, secondary and pre-ovulatory structures did not present statistical differences between the groups (P > 0.05) [Tables 1, 2 & 3].
For the pre-ovulatory follicles there was no positive correlation between the oocyte growths regard-ing the follicles (P > 0.05).For primordial follicles, unilaminar and multilaminar primary and secondary, a correlation was observed between the measurements of oocytes as to the follicles (Table 4).
A total of 1975 follicles were histologically prepared and evaluated.The apoptosis signs of primordial follicles (1664), unilaminar primary (120), multilaminar primary (134), secondary (41) and pre-ovulatory follicles (16) were studied using Tunel analysis.Positive apoptosis was identified in oocytes and pre-granulosa cells of primordial follicles and in other follicle stages positive apoptosis was observed in both granulosa and theca cells.Only in senile animals positive markers for apoptosis were identified in nuclei of primordial follicles (Figure 2).
No significant differences concerning the number of follicles and Tunel positive cells were observed between groups (P > 0.05).Table 5 shows the number of follicles and Tunel positive cells at each follicular stage of 18 queens according to their age.
DISCUSSION
Considering the importance of this study for greater knowledge in the basic aspects for reproductive biotechnologies, we verified that secondary follicles showed the largest diameters and younger animals the largest values for diameter, area and perimeter, suggesting that this age group could be ideal for the use and manipulation of oocytes.These characteristics were also observed in female dogs by literature [5], who evaluated the effects of age and oocyte size upon subsequent oocyte maturation (IVM).They verified that oocytes smaller than 100 μm in diameter had less nuclear material and meiotic competence and younger bitches showed optimized oocyte maturation.
Age had an important effect on oocyte morphometry in queens, given that remaining follicles and their respective oocytes decrease in size with the process of ageing.This fact is justified through the morphometric parameters of unilaminar primary follicles, which are smaller in older cats compared with the younger group (diameter -G1: 70.30 μm; G3: 62.25 μm) (area -G1: 3810.3 μm 2 ; G3: 3086.7 μm2) (Perimeter-G1: 222.4 μm; G3: 201.05 μm).This pattern was also observed in women [17], who found a reduction in mean follicular diameter of primordial and primary follicles in women older than 36 years (39.4 μm) in comparison to younger groups (20 to 27 years: 41.9 μm; 27 to 36 years: 41.5μm).
As described by literature [11], the oocyte and follicle growth in cats showed a biphasic pattern.During the first stage of follicular development, a significant correlation between follicle and oocyte growth was verified, resulting in a linear equation y = 0.3048x + 25.018, r2 = 0.7239 (x = follicular diameter y = oocyte diameter).After antral formation and development of secondary and pre-ovulatory follicles, the oocyte growth pattern significantly decreased compared to follicular growth, resulting in a new linear correlation: y = 0.0072x + 98.001, r2 = 0.0015.These results are superior to those described in cats [11].
In adult animals, the mean follicular diameter was lower in primordial follicles, unilaminar and multilaminar primary follicles and higher in secondary or antral follicles (41.51 μm, 64.43 μm, 132.53 μm and 400.51 μm, respectively), when compared to the results of literature [11] (44.3 μm, 86.2 μm, 155.6 μm and 223.8 μm, respectively).In contrast, the results in this study were superior when comparing to other works, where queens showed a decrease in the mean diameter of primordial follicles, unilaminar primary and secondary follicles of 28.3 μm,41 μm and 74.6 μm, respectively [3], and 38.8 μm, 63.9 μm and 98.7 μm, respectively [1].Different values were also observed in the mean follicular diameter of other species such as adult sheep, in which primordial follicles showed a mean of 39.4 μm [9] and bitches with values for primordial, unilaminar and multilaminar primary follicles of 44.3 μm, 50.7 μm and 148.9 μm, respectively [1].
The process of follicular atresia is characterized by the occurrence of apoptosis, or programmed cell death when the organism begins to efficiently eliminate dysfunctional cells [12].The study of follicular apoptosis in small animals, especially in cats, is very important for the development of reproduction biotechnologies.
The TUNEL technique is widely used to detect DNA fragmentation as an indicative factor of apoptosis.Cells that had suffered apoptosis were detected in human embryos using the chromatin condensation state and DNA fragmentation, which was subsequently confirmed by TUNEL [8].To our knowledge, this is the first study evaluating the frequency of apoptosis with the TUNEL assay in ovarian tissue in queens according to their age.The TUNEL method was effective labeling oocytes of primordial follicles and pre-granulosa cells in older cats and multilaminar primary and secondary follicles or antral follicles in granulosa cells of young animals.
In our study, there was no significant difference of apoptosis in the different age groups, indicating that none of the older, adult or younger queens showed an increase or decrease of apoptosis.These results in felines are contrary to those verified by literature [19], who demonstrated that apoptosis rate of human oocytes was significantly higher in a group of older women (41 -50 years) compared to younger women (21 to 40 years).
CONCLUSION
Young queens presented larger values for unilaminar primary follicles than older animals.We suggest that the pool of remaining small follicles and their respective oocytes decrease with age and in senile phase these follicles are of lower quality, when compared with the beginning of reproductive life.On the other hand, the phenomenon of apoptosis showed no relationship with age in animals, occurring in a physiological, continuous and proportionate manner considering the number of non-dominant follicles involved in each estrous cycle.
The results of this study regarding histomorphometry, morphology and apoptosis of ovarian tissue may contribute as a model for the study of physiological parameters or set up of assisted reproductive technologies of endangered wild cats.
Figure 1 .
Figure 1.Maximum (mx) and minimum (mn) diameter of multilaminar primary follicle in domestic cat: (A) follicular diameter and (B) oocyte diameter.
Table 1 .
Mean values (x), standard error of the mean (SEM) and median (Md) of the diameter (μm) of follicle and oocyte for female cats according to age.
Primordial (P), unilaminar (UP), multilaminar primary (MP), secondary (SE) and pre-ovulatory (OP); a, b: means followed by different small letters in line differ significantly by Tukey test (P < 0.05); A, B: means followed by different uppercase letters in line differ significantly by Dunn's test (P < 0.05).
Table 2 .
Mean values (x), standard error of the mean (SEM) and median (Md) of the area (μm2) of follicle and oocyte for female cats according to age.
Primordial (P), unilaminar (UP), multilaminar primary (MP), secondary (SE) and pre-ovulatory (OP); a, b: means followed by different small letters in line differ significantly by Tukey test (P < 0.05); A, B: means followed by different uppercase letters in line differ significantly by Dunn's test (P < 0.05).
Table 3 .
Mean values (x), standard error of the mean (SEM) and median (Md) of the perimeter (μm) of follicle and oocyte for female cats according to age.
Table 4 .
Linear equations (x = follicular measurement and y = oocyte measurement) and r2 values for follicular and oocyte growth of female cats according to age.
Table 5 .
Mean values (x), standard error of the mean (SEM) and median (Md) of follicles and apoptosis positive cells of primordial (P), unilaminar (UP) and multilaminar primary (MP), secondary (SE) and pre-ovulatory (OP) follicles of female cats in different age groups. | 4,069.8 | 2018-03-19T00:00:00.000 | [
"Biology"
] |
Identification of indels in next-generation sequencing data
Background The discovery and mapping of genomic variants is an essential step in most analysis done using sequencing reads. There are a number of mature software packages and associated pipelines that can identify single nucleotide polymorphisms (SNPs) with a high degree of concordance. However, the same cannot be said for tools that are used to identify the other types of variants. Indels represent the second most frequent class of variants in the human genome, after single nucleotide polymorphisms. The reliable detection of indels is still a challenging problem, especially for variants that are longer than a few bases. Results We have developed a set of algorithms and heuristics collectively called indelMINER to identify indels from whole genome resequencing datasets using paired-end reads. indelMINER uses a split-read approach to identify the precise breakpoints for indels of size less than a user specified threshold, and supplements that with a paired-end approach to identify larger variants that are frequently missed with the split-read approach. We use simulated and real datasets to show that an implementation of the algorithm performs favorably when compared to several existing tools. Conclusions indelMINER can be used effectively to identify indels in whole-genome resequencing projects. The output is provided in the VCF format along with additional information about the variant, including information about its presence or absence in another sample. The source code and documentation for indelMINER can be freely downloaded from www.bx.psu.edu/miller_lab/indelMINER.tar.gz. Electronic supplementary material The online version of this article (doi:10.1186/s12859-015-0483-6) contains supplementary material, which is available to authorized users.
Background
Genetic differences between individuals are encoded as local changes consisting of substitutions and small indels that alter a few base pairs, and large-scale changes that consist of larger indels, rearrangements and copy number variations. Whole genome sequencing using NGS technologies offers a unique opportunity to study these variations and enable a better understanding of genome function and diversity. There are a number of mature software packages and associated pipelines that can identify single nucleotide polymorphisms (SNPs) with a high degree of concordance [1]. However, the same cannot be said for tools that are used to identify the other sources of variation.
Indels are the most common structural variant that contribute to pathogenesis of disease [2], gene expression and functionality. Current approaches to identify indels include de-novo assembly of unaligned reads [3], read splitting [4,5], depth of coverage analysis [6] and analysis of insert size inconsistencies. Each of these approaches has their own strengths and weaknesses. For example, even though de-novo assembly offers the best opportunity to accurately call these variants, assembly with short reads is a challenging problem that requires significant computational resources. Similarly split-read approaches perform with a high degree of accuracy for short and medium sized indels, but the false-negative rate increases significantly with increase in size of the variations. Paired-end read and depth of coverage approaches frequently miss small indels, and are unable to predict the breakpoints accurately. We believe a hybrid strategy that integrates the information using more than one of the above approaches is required to identify these indels with a high degree of sensitivity and specificity.
Here we present indelMINER, a method that uses a combination of split-read and paired-end approaches to identify the breakpoints of insertions and deletions. The identified indels can be annotated with additional information such as the depth of coverage across the predicted breakpoints, and the list can be subsequently filtered to generate a high quality subset of variants. In addition to identification of indels, indelMINER can also be used to investigate the absence or presence of support for a set of indels in another sample. This is valuable in investigation of normal/tumor pairs as well in cases where several individuals of a family are sequenced to identify de-novo changes in the proband, and a novel feature of indelMINER. We present the results of using indelMINER on simulated data as well as real data from the individual NA18507 and a cancer genome dataset. We compare the performance and results of indelMINER to previously published results from several other similar tools.
Simulated dataset
In order to calculate the sensitivity and specificity of our method, and compare it to that of a few other popular tools, we implanted 3,723 known homozygous deletions, and 3,777 known homozygous insertions [7] into chromosome 22 of the human genome. 100 bp long paired-end (average insert distance 500 bps, s.d. 30 bps) Illumina reads were simulated from this modified sequence using pIRS [8], such that each nucleotide on the reference was covered 20 times on an average. The reads were mapped to the human reference chromosome 22 using BWA [9] version 0.5.9, with the default parameters. The resulting BAM file was sorted based on the chromosomal coordinates, and the reads were realigned around putative indels using IndelRealigner tool from the GATK suite [10].
We ran SAMtools [11], PINDEL [4], PRISM [5] and indelMINER on this dataset and the results are summarized in Table 1. The indels identified by the tools were compared to the true set, and in case of deletions a call was marked as validated if there was a reciprocal overlap with at least half of the actual deletion. The details of the arguments and parameters used for this experiment are detailed in the Additional file 1. SAMtools exhibits the lowest false-positive rate for this dataset (2.65%), but its false-negative rate is significantly higher when compared to the other software. Out of the remaining tools, indelMINER exhibits the lowest false-positive rate (3.57% compared to 4.06% for PRISM and 4.53% for PINDEL) as well a false-negative rate that is significantly lower than PINDEL (10.54% for indelMINER, 15.59% for PINDEL), and comparable to that of PRISM (10.46%).
Real dataset
We used about 28-fold data corresponding to the Yoruban HapMap individual NA18507 (Accession: SRX016231) to evaluate indelMINER on real data. The same sample has been characterized in multiple studies [4,5,12,13] and sequenced using multiple platforms [14,15], making it an ideal test case to compare the results of indelMINER. We downloaded the fastq reads for the HapMap individual from the Short read archive (Accession: SRX016231). These 101 bp reads were generated using the standard Illumina paired-end library protocol, with an average insert length of about 500 bps. We aligned these reads to the hg19 reference sequence using BWA version 0.5.9 with the default parameters except -q 15, which was used to trim the low quality segment of the read down to 35 bps at the 3' end. The reads around putative indels were realigned using GATK IndelRealigner, followed by use of MarkDuplicates (http://picard.sourceforge.net) to flag the potential PCR duplicates. The resulting BAM file was used to identify indels using SAMtools, PINDEL, PRISM and indelMINER (See Additional file 1).
For the NA18507 genome, indelMINER detected 643,636 indels (347,590 deletions and 296,046 insertions). Additional file 1: Figure S1 shows the length distribution of the identified indels and Additional file 1: Figure S2 shows their distribution across the human chromosomes, which correlates well to the amount of DNA present in the chromosomes. 313 of the identified indels overlap with the protein coding exons corresponding to the set of RefSeq [16] genes. 44.81% of these coding indels are of lengths that are a multiple of 3. This is in close concordance with previous studies [17,18] that have reported that in-frame indels should constitute about 50%-60% of all coding indels. 412,001 (64.01%) of these indels identified using indelMINER were also found in dbSNP version 137 and 454,120 (70.55%) of them were found in the Database of Genomic Variants (DGV). 220,434 (34.25%) of the variants were also identified in the Phase 1 release 3 of the 1000 genomes project in African samples.
We also compared the variants identified using indelMINER to those identified using SAMtools, PINDEL and PRISM. Figure 1 shows a comparison of the variants called by the various tools using the same read alignments. Two calls were marked as an overlap if they had a reciprocal overlap greater than 50% of the breakpoint range. All of the included software agreed on 315,159 of the indels, whereas about 658,363 of the indels were supported by at least two of the software we looked at as part of this study.
Cancer normal/tumor pair
All cancers arise as a result of accumulation of mutations that confer growth advantage. The advent of next-generation sequencing provides a powerful and cost-effective tool to characterize these genome-wide changes. The primary tumor tissue and adjacent or distal normal tissue are frequently sequenced and analyzed to identify germline and rare somatic mutations. The first step in such an analysis is to identify the mutations that are unique to the cancer. Issues such as normal DNA contamination of tumor DNA complicate the analysis by reducing the tumor variant allele frequency. Large granular lymphocyte (LGL) leukemia is characterized by a clonal expansion of either CD3 + cytotoxic T or CD3 − NK cells, and is frequently associated with autoimmune diseases such as rheumatoid arthritis [20,21]. A patient was consented under Institutional Review Board protocols initiated at the Pennsylvania State University and continuing at the University of Virginia in accordance with the Declaration of Helsinki. The patient consented to inclusion in an LGL Leukemia patient registry which permits the publication of de-identified patient characteristics and an additional addendum consenting to next generation sequencing and the public deposition of data derived therefrom. We sequenced the peripheral blood and matched saliva from a patient diagnosed with LGL, to a coverage of 29-fold and 17-fold respectively (Additional file 1: Figure S3). We used indelMINER to (a) identify indels in the blood sample and (b) investigate and tag those indels based on their presence or absence in the matched saliva sample. indelMINER identified 575,426 indels in the blood sample, out of which 572,188 of them were also observed in the saliva. Indelocator (https://www. broadinstitute.org/cancer/cga/indelocator) has been used in earlier studies [22] to identify indels in normal/tumor pairs. We used Indelocator on the same dataset, and it identified 478,534 indels in the blood sample, 438,331 of which were also observed in the matching normal sample. We found that 392,512 (82.02%) of the indels found by Indelocator were also found by indelMINER, whereas the remaining indels were observed by only one of the two software tools. We randomly selected 10 indels that were identified by indelMINER but not identified by Indelocator for validation (Additional file 2). We were not able to design a reliable pair of primers for 5 of the indels due to their location in low-complexity regions or repeat regions in the human genome. 4 of the remaining 5 indels were validated using Sanger sequencing, including a large deletion spanning 350 bases (Figure 2).
Discussion and conclusions
Recent studies have reported on the concordance of singlenucleotide variants identified using different software tools [1] as well using different sequencing platforms [23]. The fraction of polymorphic sites where all platforms/tools agree varies between 70-90% for the SNP calls. Often the overlap of predicted indels between different methods is much lower, indicating that none of the methods offer a comprehensive satisfactory solution.
indelMINER uses a combination of approaches to identify indels of arbitrary size from paired-end short reads. It can predict the exact breakpoint for small and medium size indels, and the approximate breakpoints for the larger deletions. The performance of the algorithm degrades in regions where a single short-read covers multiple indels, as well as in regions where the mapping quality of the sequences is low. A de novo assembly approach has been shown to be more suitable in a large fraction of such regions. The current version of indelMINER can only handle indels; however the same algorithm can be extended to handle other types of structural variants, in a manner similar to PINDEL and PRISM. We do not use sequences where both reads from the same fragment align with a mapping quality of zero, i.e., cases where neither of the mates can be aligned unambiguously in finding the indels. If one of the reads can be aligned unambiguously, then indelMINER can use that information to split and align the second read. As explained earlier, we do use such sequences that align ambiguously in the mode where we are just looking to tag the presence or absence of a variant. When tagging the presence or absence of indels in sample B indelMINER uses all the alignments including the Figure 1 Comparison of indels identified using SAMtools, PINDEL, PRISM and indelMINER drawn using VennDiagram [19]. secondary alignments to check against the indels found in sample A.
We used both simulated and real data to show that indelMINER has low false-positives and a low false-negative rate when compared to several other tools in the same category. indelMINER can also be used in study of normal/ tumor pairs, and in studies where multiple individuals from the same family are being sequenced. The PCR validations confirm the accuracy and sensitivity of indelMINER, and its ability to identify indels in high-throughput sequencing datasets.
Methods
Overview indelMINER relies on a combination of split-read and paired-end read approaches to identify indels from a BAM file for a sample (Figure 3). Even though it can be run on any coordinate sorted BAM file, we recommend running the GATK IndelRealigner [10] on it prior to running indelMINER. This local realignment serves to transform regions with misalignments due to indels into clean reads containing a consensus indel that can be then easily identified. The cleaned reads are analyzed in order of their alignments to the reference sequence, and segments of candidate reads are realigned within a specified diagonal band [24], identified using a fast k-mer comparison of the read and the reference sequence. These alignments are collected and used to identify candidate insertions and deletions. The identified variants are annotated with additional information pertaining to the region within the breakpoints, including the average depth of coverage, the RMS mapping quality of reads, and the count of reads with a mapping quality equal to zero. These can be used to filter the calls to obtain a more reliable set of differences between the target and the reference genome. Here we describe each of the steps in greater detail.
Definitions
First we define a few terms that will be used in the description of the workflow and algorithms used in indelMINER. R (a, b, o) is defined as a set of paired-end sequences that are the product of a single lane or barcode of a sequencing run. The expected outer distance for the pairs in this group is described by the interval [a, b] and the expected relative orientation is given by o, where o ∈ ['++','+-','-+','-'].
A read group
The first symbol represents the orientation of the mate that comes earlier on the chromosomal co-ordinates. For example, the expected relative orientation for Illumina paired-end reads is '+-'. 2. A paired-end sequence P (r1, r2, r, o, i) consists of two reads r1 and r2 that are sequenced from the same DNA fragment. The paired-end fragment belongs to the read group r, and o and i refer to the relative orientation and the outer distance of r1 and r2 when both of them are aligned to a reference sequence. The first symbol in o defines the orientation of r1, and the second symbol defines the orientation of r2. 3. maxsrdelsize and maxpedelsize are user specified thresholds that refer to the maximum size of the deletion that we want to identify using split read and paired-end read approaches respectively.
Identification of candidate reads
The reads in the BAM file are analyzed in order of their alignment to the reference sequence. A read r1 (r2) of a pair P (r1, r2, r, o, i) is selected for split read alignment if any of the following conditions is satisfied a) P (r1, r2, r, o, i) is properly paired (o ∈ ['+-'] for Illumina PE reads, and a ≤ i ≤ b where r = R (a, b, o)), and r1 (r2) aligns to the reference with one or more indels, or has a unaligned/soft-clipped segment in it.
In other words, these are the reads that align to the reference genome with the expected orientation and outer distance, but one of the mates is either aligned partially, or aligned to the reference genome with one or more gaps (Figure 3, Identification of candidate reads (a)). b) The mate r2 (r1) is aligned but r1 (r2) is unaligned (Figure 3, Identification of candidate reads (b)).
We also collect the pairs P (r1, r2, r, o, i) where r1 and r2 align to the reference with the expected orientation, but the insert length constraints are not satisfied i.e. (i < a) or (i > b), where r = R (a, b, o) for a separate pairedend read analysis (Figure 3, Identification of candidate reads (c)).
Split read alignment of reads
As discussed above, a read r1 that is selected for splitread analysis has a mate read r2 that aligns to the reference sequence at a position denoted by mpos (Figure 3, Identification of candidate reads). The read r1 is now aligned to the reference within the interval [mposb, mpos + b], where b refers to the maximum expected outer distance for their read group (Figure 3, Identification of diagonal, Split read alignment). If one end of the read aligns to position pos in the above interval, then we attempt to align the read from the other end within the interval [pos, pos + maxsrdelsize] or [posmaxsrdelsize, pos], depending on the relative orientation of r1 and r2 ( Figure 3, Identification of indel). If that fails then we check to see if the unaligned segment of r1 is a candidate insertion. The above alignments proceed in two steps. First, we use a k-mer comparison of the read sequence to the candidate reference segment to find the best diagonal band i.e. the diagonal band where the read and the reference share the most number of unique k-mers. The alignments are then performed using a strategy that requires only O(NW) computation time and O(N) space, where N is the length of the shorter of the two subsequences and W is the width of the band [24]. Each split read where both read ends can be aligned in a way that they support an indel, are saved for further analysis.
Identification of indels
In this step, we collect all the candidate variants V (e1, e2), where e1 and e2 refer to either the two split halves as a result of the realignment in the previous step, or refer to the two reads from the same fragment that did not satisfy the outer distance constraints. We create a graph G and represent every candidate variant supported by a split or paired-end read, as a vertex. Two vertices are joined by an edge if they support the same variant in the target genome. If the two vertices represent split-reads, then the only condition for an edge between them is that they support the same breakpoints. If the two vertices represent mates that do not satisfy distance constraints, then an edge can be drawn between them if the resulting breakpoints from the two variants do not violate the outer distance constraint for reads represented by V1 and V2.
Each clique in the graph should now represent a variant in the target genome. However due to errors in sequencing, ambiguous alignments, ploidy, incompleteness and inaccuracies in the reference genome, a significant fraction of these subgraphs are not fully connected. So instead of restricting the definition of a variant to a clique, we consider each connected component in the graph G to represent a variant in the target genome, and the vertices in the connected component to represent the evidence that (See figure on previous page.) Figure 3 Overview of the indelMINER algorithm. Panel titled "Identification of candidate reads" shows three of the cases when a read is identified for realignment or paired-end analysis. (a) shows a case when mates align with the expected orientation but one of the mates is only partially aligned, (b) shows a case when one mate from a fragment aligns to a location mpos on the reference, while the other mate does not align, and (c) shows the case when both mates align with the expected relative orientation, but the outer distance constraint is violated. Panel titled "Identification of diagonal" shows the various alignments of the unaligned mate using k-mer comparisons, and the subsequent selection of one of the diagonals based on alignment score and distance from mpos. Panel "Split read alignment" shows the extension of the chosen diagonal, and the panel "Identification of indel" shows the alignment and extension of the remaining sequence from the unaligned mate, to a region around mpos selected based on a user-specified threshold. | 4,841.2 | 2015-02-13T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Korteweg-de Vries Equation in Bounded Domains
where μ, ν are positive constants. This equation, in the case μ = 0, was derived independently by Sivashinsky [1] and Kuramoto [2] with the purpose to model amplitude and phase expansion of pattern formations in different physical situations, for example, in the theory of a flame propagation in turbulent flows of gaseous combustible mixtures, see Sivashinsky [1], and in the theory of turbulence of wave fronts in reaction-diffusion systems, Kuramoto [2]. The generalized KdV-KS equation (1.1) arises in modeling of long waves in a viscous fluid flowing down on an inclined plane. When ν = 0, we have the KdV equation studied by various authors [6-12]. From the mathematical point of a view, the history of the KdV equation is much longer than the one of the KS equation. Well-posedness of the Cauchy problem for the KdV equation in various classes of solutions was studied in [6-9]. Solvability of mixed problems for the KdV equation and for the KdV equation with dissipation in bounded domains studied Bubnov [11], Hublov [12], see also
Introduction
The goal of this paper is to prove the existence, uniqueness and the energy decay of global regular solutions of the KdV equation in a bounded domain approximating it by the Kuramoto-Sivashinsky equations.
From the mathematical point of a view, the history of the KdV equation is much longer than the one of the KS equation.Well-posedness of the Cauchy problem for the KdV equation in various classes of solutions was studied in [6][7][8][9].Solvability of mixed problems for the KdV equation and for the KdV equation with dissipation in bounded domains studied Bubnov [11], Hublov [12], see also [19].In [10], Bui An Ton proved well-posedness of the mixed problem for the KdV equation in (0, ∞) × (0, T ) approximating the KdV equation by the KS type equations.Mixed problems for some classes of third order equations studied Kozhanov [13] and Larkin [18].The Cauchy problem for (1.1) was considered by Biagioni et al [6].They proved the existence of a unique strong global solution and studied asymptotic behaviour of solutions as ν tends to zero.This gave a solution to the Cauchy problem for the KdV equation as a limit of a sequence of solutions to the Cauchy problem for the KdV-KS equations.The Cauchy problem for the KS equation considered Tadmor [3] and Guo [5].In [5], Guo studied also solvability of the mixed problem for the KS equation in bounded domains in onedimensional and multi-dimensional cases.Cousin and Larkin [4] proved global well-posedness of the mixed problem for the KS equation in classes of regular solutions in bounded domains with moving boundaries.The exponential decay of L 2 − norms of solutions as t → ∞ was proved.
In the present paper we study asymptotics of solutions to a mixed problem for (1.1) when ν tends to zero in order to prove therewith that solutions to a mixed problem for the KdV equation may be obtained as singular limits of solutions to a corresponding mixed problem for the KS equation.The passage to the limit as ν tends to zero is singular because we loose one boundary condition in x = 0.
We consider in the rectangle Q the mixed problem for (1.1) which is different from the one considered in [4,5,10].In Section 2, we state our main results.In Section 3, exploiting the Faedo-Galerkin method with a special basis, we prove solvability of the mixed problem for (1.1) when ν > 0. In Section 4, we prove the existence and uniqueness of a strong solution to the mixed problem for the KdV equation letting ν tend to zero.It must be noted that the Fourier transform, commonly used to solve the Cauchy problem, see [6][7][8][9], is not suitable in the case of the mixed problem.Instead, we use the Faedo-Galerkin method to solve the mixed problem for (1.1) and weighted estimates to pass to the limit as ν tends to zero.In Section 5, we show that if u 0 L 2 (0,1) is sufficiently small, then u(t) L 2 (0,1) decreases exponentially in time and no dissipativity on the boundaries of the domain is needed for this.
Notations and results
We use standard notations, see Lions-Magenes [16], some special cases will be given below.We denote Our result on solvability of (2.1)-(2.3) is the following.
Then there exists a unique solution to (2.1)-( 2.3) from the class, When ν tends to zero, we obtain the following result.
Proof: It is easy to see that if u, v ∈ H 4 (0, 1) and satisfy boundary conditions of Lemma 1, then This means that the operator corresponding to the problem above is selfadjoint and positive.Hence, assertions of Lemma 1 follow from the well-known facts, see Coddington and Levinson [15], Mikhailov [14].
We construct approximate solutions to (2.1)-(2.3) in the form, where w j (x) are defined in Lemma 1 and g N j (t) are to be found as solutions to the Cauchy problem for the system of N ordinary differential equations, g N j (0) = (u 0 , w j ), j = 1, ..., N.
2) is a normal nonlinear ODE system, hence, there exist on some interval 0, T N ) functions g N 1 (t), ..., g N N (t).To extend them to any T < ∞ and to pass to the limit as N → ∞ , we prove the following estimates: where C 1 does not depend on N, t ∈ (0, T ), ν > 0.
where C 2 , C 3 do not depend on N, t ∈ (0, T ).Estimates (3.4), (3.5), (3.6) imply that u N (x, t) can be extended to all T ∈ (0, ∞) and that approximations (u N ) converge as N → ∞.Passing to the limit in (3.2), we prove the existence part of Theorem 1. Uniqueness can be proved by the standard methods, see [ 4 ].Thus Theorem 1 is proved.
Solvability of the KdV equation
Theorem 1 guarantees well-posedness of the problem (2.1)-(2.3)for all ν > 0. Our aim now is to pass to the limit as ν tends to zero.For this purpose we need a priori estimates of solutions to (2.1)-(2.3)independent of ν > 0. First we observe that estimate (3.4) does not depend on ν, but (3.5), (3.6) do depend.
Due to Theorem 1, for all ν > 0 we have the integral identity, which is true for any v ∈ L 2 (0, 1).It can be shown that u ν satisfy uniformly in ν > 0 the following inclusions:
Proof of Theorem 2
Proof: Letting ν → 0, we have a sequence of functions u ν satisfying (4.1).The last inclusions imply that there exists a subsequence of u ν , which we denote also by u ν , and a function U such that
Proof: Due to Theorem 1, for all ν ∈ (0, 1/2) the following identity is valid where v is an arbitrary function from L 2 (0, T ; L 2 (0, 1)), in particularly, we can take v an arbitrary function from W. Then, taking into account boundary conditions (2.3), we can rewrite the last identity in the form, Passing to the limit as ν → 0, we obtain for a.e.t ∈ (0, T ) and for all v ∈ W. The boundary conditions U (0, t) = U (1, t) = 0 obviously are fulfilled and the boundary condition U x (1, t) = 0 is fulfilled in a weak sense.It is clear that functions U and v have conjugate boundary conditions. 2 Taking into account properties of U, we can write where It means that U is a weak solution to the following boundary value problem, ) Now we must prove that a weak solution is regular.To prove this fact, we use the following Lemma 2 A weak solution to (4.2)-(4.4) is uniquely defined.
On the other hand, it is easy to verify that the function belongs to H 3 (0, 1), U 0 (0) = 0 for any F ∈ L 2 (0, 1), and satisfies the equation, Given F (x), the constants K 1 , K 2 can be found to satisfy the boundary conditions, By Lemma 2, U − U 0 = 0, hence, U = U 0 a.e. in (0, 1), It implies that U ∈ H 3 (0, 1).Returning to (4.2), we rewrite it as This proves the existence part of Theorem 2.
Stability
We have the following result.
This implies the assertion of Theorem 5. | 1,985.2 | 2009-06-28T00:00:00.000 | [
"Mathematics"
] |
A lifetime-enhancing cooperative data gathering and relaying algorithm for cluster-based wireless sensor networks
Despite unique energy-saving dispositions of cluster-based routing protocols, clustered wireless sensor networks with static sinks typically have problems of unbalanced energy consumptions, as the cluster head nodes around the sink are typically loaded with traffic from upper levels of clusters. This results in reduced lifetimes of the nodes and deterioration of other crucial performances. Meanwhile, it has been inferred from current literature that dedicated relay cooperation in cluster-based wireless sensor networks guarantees longer lifetime of the nodes and more improved performance. Therefore, to attain further enhanced performance among the current schemes, a lifetime-enhancing cooperative data gathering and relaying algorithm for cluster-based wireless sensor networks is proposed in this article. The proposed lifetime-enhancing cooperative data gathering and relaying algorithm shares the nodes into clusters using a hybrid K-means clustering algorithm that combines K-means clustering and Huffman coding algorithms. It makes full use of dedicated relay cooperative multi-hop communication with network coding mechanisms to achieve reduced data propagation cost from the various cluster sections to the central base station. The relay node selection is framed as a NP-hard problem, with regard to communication distances and residual energy metrics. Furthermore, to resolve the problem, a gradient descent algorithm is proposed. Simulation results endorse the proposed scheme to outperform related schemes in terms of latency, lifetime, and energy consumption and delivery rates.
Introduction
A wireless sensor network (WSN) is a collection of sensor nodes distributed within a particular area to observe some physical conditions and gather the observation at a base station (BS). 1 The nodes are typically battery-operated, and they have inherent limitations in computation and communication capacities. In most monitoring applications, it is often pricey and unattainable to interchange the node's batteries since the networks are formed in harsh regions with hundreds to thousands of nodes. 2 As a result, the sensor nodes are expected to operate cooperatively for extended periods without any battery back-ups. 3 Conventional WSN applications produce sizable quantities of data. Transporting such large amounts of data from the nodes to the BS will deplete the deficient energy reserves of the nodes timelier and reduce their lifetime.
Moreover, since nodes within a close range may sense identical events, it is incompetent to transport the sensed readings directly from the nodes to the BS. 4 Therefore, it is very significant to consolidate the sensed observations from the sensor nodes into valuable information at the intermediate nodes to guarantee considerably reduced overheads of energy consumption and data redundancies. Clustering technology provides an effective method to reduce the communication energy consumption of the sensor nodes and enhance their lifetime. [5][6][7] Clustering helps to simplify the network structure and evade direct transmission among the nodes and BS. It facilitates the self-organization of nodes into clusters by specific rules, such that each cluster head (CH) can fuse the data from their members and filter them to eradicate redundancies. This aggregation strategy reduces the traffic loads and communication energy consumption of the network.
The low-energy adaptive clustering hierarchy (LEACH) protocol is one of the well-known classical routing schemes based on clustering. 8 LEACH protocol runs in two phases: setup phase and steady phase. During the setup phase, CHs are designated and made known to other nodes. In the steady phase, the nodes report their data to the heads in fixed slots. Although LEACH has some drawbacks such as poor CH node allocation, various improvements have been made on the LEACH protocol over the recent years. 2,9 Typically, cluster-based sensor networks with static sink are faced with overheads of uneven power consumption and low success rates. This is because CH nodes around the sink are usually loaded with traffic, as they serve as relay nodes to transfer the data packets from the upper levels to the BS and thereby deplete their energy faster. Once a CH node dies at a fade zone surface, the other crucial performance metrics of the network such as throughput drop hastily. A possible solution to ensure success rate and longer lifetime of the nodes is by assigning the relay nodes discretely from various clusters. In this way, the CHs can have relay cooperation to transmit their accumulated packets to the destination and achieve longer lifetimes.
The scalable efficient clustering hierarchical (SEACH) routing is one among the classical cluster-based routing that is based on dedicated-relay cooperation. 10 Though the data deliveries model of SEACH protocol is time-driven, which makes it more appropriate for WSN applications with periodic communication requirements. In the literature, SEACH protocol is revealed to encourage more energy efficient and scalable communication well above the other conventional time-driven clusterbased routing protocol. In general, relay cooperation in cluster-based WSNs facilitates improved total channel capacities and guarantees data deliveries from the cluster sections even when there is a loss of connectivity or failure of a CH node. It also allows more utilization of WSN broadcast nature and efficient cooperative communication with network coding (NC). 11 Mainly, NC communications in WSN offer many benefits ranging from improved throughputs with reduced latency to enhanced network lifetime. It enables linear combinations of many packets from various nodes at a shared channel. 2 At the receiver nodes, the coded packet can be recovered upon computations of a few logical or mathematical formulae.
However, one of the significant challenges of cooperative relaying techniques in WSN is the selection of the appropriate relay nodes. 12,13 In WSN, relay communication typically increases the traffic activates of the relay nodes. This causes the relay or cooperator nodes to deplete their energy quicker than the other nodes. 14 Once a relay node fails, a coverage hole appears in that area and other nodes may have no links to reach their destination. When the relay nodes are isolated or far apart, more energy is expended from long-distance communications. Moreover, NC operations in WSN necessitate good connectivity and extra energy resources at the coding links to successfully receive and process the packet flows from other shared mediums. 15 Therefore, when the coverage and residual energy of the cooperator nodes or coding nodes are poor, the coding gain drops adversely.
Motivated by these aforementioned problems, this article proposes a lifetime-enhancing cooperative data gathering and relaying algorithm (LCDGRA) that combines full advantages of clustering technology with multi-hop cooperative relay and NC communication. The proposed LCDGRA operates in three phases and mainly focuses on resolving the routing problem of the cluster-based WSN through enhancing the power consumption and network lifetime, packet latency, and data receiving rate metrics.
Our new contributions in this article are the following: 1. We propose a new cluster-based event-driven scheme called LCDGRA. The proposed scheme can be applied in various event-driven monitoring applications to strengthen the routing performances in terms of network lifetime, energy efficiency, data delivery rate, and latency. 2. We propose a centralized hybrid-clustering scheme that combines K-means clustering and Huffman entropy coding design. The hybridclustering scheme is meant to secure the coverage and residual energy conditions of the CH nodes as well as minimize the energy consumption of the nodes at the setup phase. 3. We frame the relay node assignment as an NPhard problem in terms of residual energy, and placements of the relay nodes. Also, to resolve the NP-hard problem, we propose a gradient descent heuristic-based algorithm. This scheme can be applied in various cluster-based networks to enhance the relay node selection and achieve higher transmission success rates.
Subsequent sections of this article are as follows: section ''Related works'' reviews some of the significantly related works. The network communication and energy models are discussed in section ''Proposed system model.'' Section ''Design flow of LCDGRA'' adequately describes the design phases of the proposed system. Section ''Simulation results'' presents and discusses the comparative performance evaluation results, while section ''Conclusion'' gives the closing comments.
Related works
In recent years, significant focus has been given to routing protocols in WSN due to the differences in network architectures and routing stipulations in various sensing applications. 16 In WSN, routing is needed to establish reliable communication paths between nodes and their BSs, and it significantly influences the power consumption of the nodes. Recently, many energy-efficient cluster-based routing schemes have been applied in various WSNs. 17 The LEACH-centralized (LEACH-C) protocol is an improved version of LEACH. 9 This scheme is meant to improve the CH allocation in LEACH by using a centralized control algorithm to distribute the CH nodes throughout the system. In LEACH-C, the BS computes the average residual energy of the nodes across the network and only allows the nodes with residual battery powers equivalent to the computed average residual energy to work as CH in each round. The LEACH with fixed clusters (LEACH-F) is another centralized variant of LEACH. 18 In LEACH-F, the setup phase is not required at all rounds, only the CH position rotates among the nodes. The data fusion oriented clustered routing protocol based on LEACH (DF-LEACH) is another variant of LEACH protocol. 19 DF-LEACH aims to facilitate both global energy efficiency and reduced energy dissipation at the CHs. In this scheme, data fusion is performed in a hop-by-hop mode among the CHs, before transmission to the BS.
The threshold sensitive energy-efficient sensor network (TEEN) protocol was proposed for WSNs as an event-driven cluster-based protocol. 20 TEEN employs two limits: a soft and a hard threshold. The hard threshold is aimed to decrease total data transport. Therefore, this limit enables the nodes to communicate their sensed data only when their observations correspond to the attribute of interest events. The soft threshold is intended to decrease total data transport by enabling the nodes to disregard data deliveries when variations in their observations are minor, or there are no variations. The adaptive periodic TEEN (APTEEN) protocol is a hybrid alternative of TEEN that merges the benefits of reactive routing and proactive routing systems. 21 In APTEEN, the nodes can react to changes in their monitoring environment and as well perform periodic data deliveries.
Roy and Das 22 introduced the cluster-based eventdriven routing protocol (CERP). In CERP, the nodes form clusters utilizing an event-based clustering system, whereby the clustering takes place only when there are event occurrences in the field. In the clustering phase, every node estimates a competing value concerning the transceiving and aggregation energy. Eventually, each node having the maximum competing value in the respective clusters is elected as CH and introduced to the members. The hybrid energy-efficient distributed clustering protocol (HEED) was proposed to support various time-driven WSNs by Younis and Filmy. 23 In HEED, initial probabilities of sensor nodes to serve as CH depend on their residual energy and proximities to other neighbors.
The K-means and Dijkstra's algorithm-based routing (KDUCR) protocol is an energy-aware routing scheme based on K-means clustering algorithm. 24 The main aim of K-means clustering in WSN is to select random points in the sensing area and allocate the nodes to their nearest points to establish K-sections of cluster. The algorithm computes the centroid for each group repeatedly until the centers converge. This algorithm minimizes the intricacies associated with developing clusters at various levels in WSN. KDUCR protocol uses Dijkstra's shortest path algorithm to determine the shortest routes with sufficient residualenergy to dispatch the aggregated data from each CH node in the sink.
The secured scalable energy-efficient clustering hierarchical (S-SEACH) protocol is an improved version of SEACH. 25 S-SEACH uses similar relay node assignment scheme as SEACH and mainly aims to achieve both improved energy efficiency and protection against intrusion. Recently, one time-driven cluster-based scheme with relay cooperation was Was proposed for WSNs by Wu et al. 26 It is intended to satisfy the basic needs of network connectivity in large-scale WSN farmland services. Ahmed and Samreen, 27 proposed the cluster chain-based relay nodes assignment protocol (CCBRNA). It merges the advantages of relay cooperative communication with clustering and chain-based routing mechanisms.
The mode of linear NC was first recommended for WSN by Ahlswede et al. 11 The authors showed that a node could combine many packets of other nodes by logical or mathematical formulae. The coding mode is of two types, namely the intra-section NC and the intersection NC. 28 The intra-section NC is intended to improve data delivery rates in lossy networks, while the intersection NC is intended to improve the throughputs of lossless sensor networks. Meanwhile, both coding methods are described generally as binary (XOR) and random linear coding (RLNC). In the literature, there are specific recent works carried out on NC in WSN. 29 Yin et al. 28 proposed a multi-hop routing scheme that combines both NC and compressive sensing (CS) methods. It is proposed to facilitate energy-saving and reliable data reporting multi-hop WSNs. Sun et al. 29 introduced the NC-WSN protocol to promote data delivery reliability, effective load balancing, and improved response time in various WSNs. Migabo et al. 30 proposed the cooperative and adaptive NC for gradient-based routing (CoAdNC-GBR) protocol to support efficient NC and data communication in WSNs. CoAdNC-GBR estimates the total amount of nodes in the neighborhoods periodically to implement NC processes and uses the sink nodes' address to deliver the coded packets. However, CoAdNC-GBR is more suitable for query-based applications since it considers an on-demand gradient-based routing system, in which queries from the data operator actuates the data communication phases of the network.
Hence, considering the works above, a new LCDGRA for cluster-based WSNs is proposed.
Network model
In this study, we consider the sensor network model depicted in Figure 1. G = (V, E) signifies the directed graph of the sensor network. V signifies the vertexes, which include sets of nodes scattered randomly in the sensing area and a central BS placed at the end of the area. E denotes the set of edges or links. Based on the functions of the different nodes in the sensing area, each node belongs to one of the following types: normal node (NN), a relay node (RN), and CH.
This study aims to develop an improved event-based clustering routing scheme, characterized by low latency, low-energy consumption, extended longevity, and high data receiving rates. Therefore, considering the sensor network model above, a new event-driven cluster-based routing scheme called LCDGRA is presented in this work. The proposed system is designed acknowledging existing WSN event-driven cluster-based routing design, including the assumptions as follows: 1. The BS and sensor nodes are static after placements. 2. There is only one BS in the network that is at the far end of the network. 3. An external source powers the BS. 4. The sensor network is uniform, and the initial energy of all nodes is the same.
Energy consumption model Figure 2 shows the considered radio model in this work. 2 Both free space d 2 and multi-path d 4 channel models are considered, including path loss P(i, j) loss . The path loss is based on the communication distance d and received signal strength (RSSI) of two nodes, i and j, which can be defined as follows where 2 mp and 2 fs are the receiving and transmitting amplifier's attributes in free space and multipath communication. The energy required in transmitting k length of packets over a distance d is estimated using where E elec is the energy diffusion per k bit packet of the transmitter or receiver, which is usually influenced by digital coding, filtering, and modulation factors. The energy needed to receive the k bits quantity packets by the receiver is estimated using equation (4) The energy for transceiving and processing k quantity of packets per round is estimated as follows where i = 1, 2, . . . , N , N is the total nodes involved in the communication round, and E DAN is the energy required for aggregation and random linear coding of the packets per hop from the source region to the destination.
Design flow of LCDGRA
This section describes the design methods of the proposed LCDGRA, which functions in three phases: 1. Network initialization and clustering 2. Relay nodes selection
Network initialization and clustering
In this phase, the network is initialized and the nodes are grouped into few clusters as described below.
Network initialization. Here the network is initialized. The central BS initiates this with initialization messages conveyed to all the nodes in the network. Then, all the nodes respond to this request with initialization responses. The response messages from the nodes contain their residual energy and current locations in the sensing region. Through the processes described above, the network is initialized.
Clustering based on hybrid K-means. In this point, the nodes are grouped into clusters. A hybrid K-means clustering system is proposed in this work to group the nodes into K number of clusters effectively. The hybrid approach merges the advantages of K-means clustering algorithm and Huffman coding algorithm. It is meant to optimize the sensor nodes' transmission distances and energy usage during the clustering phase. Hence, the clustering and CH decisions are carried out by the central BS as described in the stages below.
Stage 1: cluster formation. In this stage, the nodes are divided into K units of clusters. Initially, the optimal total K-points of clusters is estimated as follows 31 where D(N, S) is the Euclidian distances of the nodes (N) to the BS, d o is the communication threshold as specified in equation (2), and A is the area of the network.
Once the total amount of K-points is computed, the nodes are assigned to their closest cluster-centroids. The distance between the nodes and the cluster centroids is expressed as follows where i = 1, 2, . . . , N , N is the total nodes, and X i and X j are the X coordinates of the nodes and cluster centroids, respectively, while Y i and Y j are their Y coordinates. Finally, new centroids are computed for every cluster until the centroid points become fixed. Stage 2: appointment of CHs. In this stage, the CHs are elected. Initially, a competing value is calculated for each node in all the clusters as follows where i = 1, 2, . . . , N , N denotes the total members of that cluster, d node is the distance from the node to the cluster members, E resi is the residual of the node, E rx is the energy needed for the reception of k-bit packets of the cluster members, E tx is the energy needed to send the packets to the nearest relay nodes, and E DAL is the required energy for in-network aggregation and random linear coding of the packets.
Once the contending value of every node is estimated, the value for each node is multiplied with a random value in the range of 0-1, to determine their respective probabilities. The probabilities obtained for every node is later summed-up to one and organized in descending form. After that, a code is formed for each of the nodes by the Huffman coding algorithm to evaluate their weights. Finally, the node that has the smallest weight in the different clusters is chosen as the CH node and declared to the cluster members.
The reader can get more information about the Huffman coding algorithm in the work of Mehfuz et al. 32 In each round, different CHs are chosen in all clusters to facilitate load distribution until every node depletes within the sensing area.
Relay node selection
In this stage, the relay node selection takes place. Relay nodes are appointed in the various cluster regions to cooperatively help the CHs to deliver their aggregated data to the BS. Similar to the CH election, the election of relay nodes is carried out by the central BS. However, it is a familiar phenomenon in WSN that nodes utilize higher energy in data deliveries as opposed to receiving and processing. Hence, the central BS is bound to opt for relay nodes that have higher residual energy and shortest communication distances with adequate coverage. This study examines this condition as an NP-hard problem and proposes an effective gradient descent algorithm to solve this problem.
Gradient descent algorithm is an incremental first order iterative method intended to provide optimal weights that effectively reduce a cost function. The algorithm finds the local minimum weight by taking steps reciprocal to the negative gradient of the cost function, as it searches for the best weight. Typically, the weight update operation is given by W N = W E + (a 3 Df =DW ) where f is the cost function, W N and W E are the new and existing sets of weight, W is the vector coefficient of the weights, and a is the step length or learning rate. The step length needs to be selected carefully as a large value of step length, will overshoot the iteration. As well, when the step length is too small, the iteration to reach the local minimal will increase. The most preferred values are in the range of 0.01-0.3. Therefore, the step length is taken as 0.003, which is reasonably not too large or too small. More details about gradient descent optimization can be gotten from the works of Dong and Zhou. 33 In the proposed system, the following are defined as the constrained functions for decent gradient optimization: Constraint f a . The first constraint function f a is defined as the ratio of average residual energy of the relay nodes to that of the normal nodes. Minimizing this constraint implies that high residual energy nodes are to become relay nodes. The average residual energy of the nodes can be determined using where i = 1, 2, . . . , N , N is the number of nodes, and E ri is the residual energy of each node under consideration. Furthermore, the average residual energy of the normal nodes is denoted with E NN , while that of the relay nodes is represented as E RN .
Constraint f b .
The second constraint function f b is defined as the ratio of the average distances of the normal nodes to the central BS, to that of the relay nodes to the BS. Minimizing this constraint implies that wellplaced nodes with minimal communication distances to the central BS and good coverage, are to become relay nodes. The average distance between the nodes and the central BS can be determined using where i = 1, 2, . . . , N , N is the total number of nodes, and X Ni and X BS are the X coordinates of the nodes and the central BS, while Y Ni and Y BS represent their Y coordinates, respectively. Likewise, the average distance from the normal nodes to the BS is represented by d NN , while d RN is the average distance from the relay nodes to BS. Hence, based on the above-constrained functions, as per f a and f b the cost function for optimization is formulated as follows where f (a, b) is the cost function under minimization, and b and g are control limits 0 ł b ł 1 and 0 ł g ł 1, which are fixed usually in advance. Through minimizing the cost function expressed in equation (11), it is expected that the nodes with minimal path losses between the CHs, higher residual energy, shortest transmission distances, and adequate coverage are to become relay nodes. The nodes' locations can be reached with the localization utilities available in work done by Luo and Chen. 34 Algorithm 1 gives a sufficient description of the proposed relay node selection scheme. Once the algorithm is computed for the various nodes, the BS learns the optimal cost for electing the relay. Once it selects the relay nodes based on the above preferences, it notifies the selected nodes with a declaration message, which indicates their current functions as relay nodes. Afterwards, the relay nodes declare their information to the network with short notification messages, which has their residual energy, cluster identity (ID), relay node ID, and a header that reveals the message as a mere notification. Once each CH or relay node gets a join notification, it examines the message for membership suitability by analyzing the communication distance and signal strength of the sender. After the relay node or CH has confirmed the membership suitability, they notify the relay node of interest with a join message. The join message contains the ID of the preferred relay node and the sender ID that indicates the duty of the node as either relay node or a CH node; this is how the routing information that is needed for the cooperatively relaying operations is updated across the network. Once a relay node is about to ON or OFF its radio unit during the operation, it declares such transition to other relay and CH nodes having it as their relay members. This proposed relay node selection scheme can be applied in WSN to ensure transmission reliability and improvement in relay node allocation.
Data aggregation and reporting
After the CH and relay nodes are elected, the nodes transit into idle states awaiting events. Besides, an event represents a variation in the sensed value that exceeds specific sensing thresholds. Followed by the event occurrence in the sensing field, the stages of data aggregation and reporting are performed. It is well known in WSN, that the amount of data communications has a significant influence on energy consumption of the network. Therefore, it is essential to reduce the number of deliveries to guarantee considerably less energy consumption. In the proposed scheme, to ensure minimal energy consumption, random linear coding is executed per hop from the source to the destination.
Once an event occurs, the cluster members of that particular region, report their sensed data to the CH node. Upon receiving the sensory data of the members, the CH node organizes the variations in the readings above specific threshold into N blocks of data packets P i = ½1, 2, . . . , P N based on their identities. After that, the packets are assigned coding vectors C i = ½1, 2, . . . , C N from Galois field (2 8 ). The packets are later coded together by blending them linearly as follows where i = 1, 2, . . . , N , P coded is the encoded packet,P i denotes the source packets, and C i denotes the coding vectors. After the coding process, the coded packet is sent from the source CH to the nearest hop relay nodes. Reconstruction of the coded packet P coded into the original packets is based on the obtained packets at the destination. Initially, Gaussian eradication is carried out. The header message is setup into an n 3 n matrix, and later into a reduced row-echelon (reff). Finally, the original packets are rebuilt by working out a few sequences of the underlying linear equations. Figure 3 shows the complete flowchart of the proposed LCDGRA.
Simulation results
In this section, the performance of the proposed LCDGRA scheme is assessed using MATLAB 2018b simulations. The analysis was conducted with 100 nodes scattered randomly over an (X = 100 m, Y = 100 m) area including one sink node placed at (X = 100 m, Y = 50 m) far end of the area. Other key parameters applied in the simulation analysis are presented in Table 1. In this work, an efficient hybrid K-means clustering scheme combining K-means algorithm with Huffman coding design has been applied to organize the nodes into K quantity of clusters. In addition, an efficient gradient descent heuristic-based algorithm has been applied to find the most appropriate relay nodes to serve the CHs report their aggregated data to the destination cooperatively. Figure 4 plots the convergence of the gradient decent cost function as per equation (11). It can be observed from Figure 4 that the cost function converges successively at about 31 iterations. Figure 5 compares the latency of the network when using each of the schemes. The average latency presents the time expended from when data are disseminated from the sender to the arrival time at the destination. It includes delays in data queuing, data propagation, and processing. It is visible from the simulation results presented in Figure 5 that the latency of the proposed LCDGRA routing system is 18% lesser than that of CERP. It is also noticeable that the latency of CERP is 16% lesser than TEEN, when examining the network latency. Figure 6 shows the data-receiving rate of the network. The packet-receiving rate represents the ratio of the sent quantity of data to the amount of data well received at the destination. It reveals the packet loss rates and link qualities of the sensor network. From the simulation results depicted in Figure 6, it is evident that the proposed scheme keeps a stable, even, and high receiving rate in the range of 1-0.98, which is 0.82-0.72 higher than that of CERP protocol. For TEEN protocol, it is observed that the receiving rate drops entirely at a later stage of network operation, and the receiving rate is merely about 0.99-0.98, which is considerably less than that of LCDGRA and CERP. Figure 7 analyzes the average energy consumption in the course of data aggregation. The average energy consumption in each round is estimated using equation (13), where N denotes the total nodes, E is the energy consumption per round, R is the overall round, and E resi is the nodes' residual energy upon completion of each Rth round. From the simulation result revealed in Figure 7, it is obvious that the energy consumption of the proposed LCDGRA is 21% less than that of CERP protocol, and 37% less than that of TEEN Figure 8 compares the lifetime of the network until half of the nodes deplete. The network lifetime is regarded as the period from the network initialization, to when half of the nodes deplete over the network. From Figure 8, it is clear that for the proposed LCDGRA, half of the nodes deplete around 3500, while in CERP and TEEN, half of the nodes depletes around 2600 and 2100, respectively.
The above simulation results confirm that the proposed LCDGRA truly merges the benefits of random linear coding and cooperative multi-hop communication with the best relay nodes in a cluster-based topology.
Furthermore, the proposed LCDGRA outperforms both the protocols, TEEN and CERP, on the following rationales: in TEEN, the nodes wait for their assigned time slots to report their data to CH nodes. Thus, in scenarios where sensor nodes have no data to report, they are still constrained to waste their limited energy resources. Besides, the CH nodes also keep their radio units turned on always, to collect the sensed data of their cluster members. Therefore, during network operations, the CHs spend their energy quicker, which cause them to fail faster and thereby reducing the data receiving rates. In CERP, some of the nodes are isolated such that they have no links to other nodes due to variations in the clusters created at every round initiation. Therefore, the nodes fail faster, which creates more packet loss.
These are in contrast to the proposed LCDGRA, where the nodes are clustered adopting an efficient hybrid K-means clustering method that considers the communication distance and energy. Moreover, dedicated cooperating relay nodes that have good placements and energy resources consistently assist the CH nodes to communicate their aggregated data. In addition to that, the payload packets are coded at each hop from the source to the destination with random linear NC. These results prove that the proposed scheme does not only decrease sensor network latency and energy consumption but improves the throughput and longevity as well. Hence, it is attestable that the proposed scheme guarantees to meet the essentials for more energy efficiency, coherent and timely event reporting in WSNs.
Conclusion
In this article, a new event-driven cluster-based routing algorithm called LCDGRA was presented. The proposed LCDGRA routing method is simple and consists of three main phases. In Phase 1, the nodes are grouped into K clusters and the CHs are allocated in every cluster by a hybrid K-means clustering that combines both K-means clustering and Huffman coding algorithms. In Phase 2, rather than dedicating the relay nodes task to the CH nodes, relay nodes are assigned from the non-CH nodes to perform the data delivery tasks. Therefore, the CHs have sets of cooperating relay nodes that are dedicated to obtain and communicate their aggregated data to the destination. The relay node election is expressed as an NP-hard problem in studies of residual energies and communication distances of the nodes. Also, an efficient gradient descent heuristicbased scheme is proposed to solve the NP-hard challenge. In the final phase, the aggregated packets from the event region are coded with random linear coding and relayed in multi-hops to the central BS cooperatively.
The performance of the proposed LCDGRA has been assessed with other classical event-driven clusterbased protocol, including TEEN and CERP, through simulation. The simulation results confirm that the proposed LCDGRA significantly outperforms both TEEN and CERP routing protocols in terms of reduced energy consumption with prolonged lifetime and increased data delivery rates with reduced latency.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 7,777 | 2020-02-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Lutin: A Language for Specifying and Executing Reactive Scenarios
This paper presents the language Lutin and its operational semantics. This language specifically targets the domain of reactive systems, where an execution is a (virtually) infinite sequence of input/output reactions. More precisely, it is dedicated to the description and the execution of constrained random scenarios. Its first use is for test sequence specification and generation. It can also be useful for early simulation of huge systems, where Lutin programs can be used to describe and simulate modules that are not yet fully developed. Basic statements are input/output relations expressing constraints on a single reaction. Those constraints are then combined to describe non deterministic sequences of reactions. The language constructs are inspired by regular expressions and process algebra (sequence, choice, loop, concurrency). Moreover, the set of statements can be enriched with user-defined operators. A notion of stochastic directives is also provided in order to finely influence the selection of a particular class of scenarios. Copyright
INTRODUCTION
The targeted domain is the one of reactive systems, where an execution is a (virtually) infinite sequence of input/output reactions.Examples of such systems are control/command in industrial process, embedded computing systems in transportation.
Testing reactive software raises specific problems.First of all, a single execution may require thousands of atomic reactions, and thus as many input vector values.It is almost impossible to write input test sequences by hand; they must be automatically generated according to some concise description.More specifically, the relevance of input values may depend on the behavior of the program itself; the program influences the environment which in turn influences the program.As a matter of fact, the environment behaves itself as a reactive system, whose environment is the program under test.This feedback aspect makes offline test generation impossible; testing a reactive system requires to run it in a simulated environment.
All these remarks have led to the idea of defining a language for describing random reactive systems.Since testing is the main goal, the programming style should be close to the intuitive notion of test scenarios, which means that the language is mainly control-flow oriented.
The language can also be useful for early prototyping and simulation, where constrained random programs can implement missing modules.
Our proposal: Lutin
For programming random systems, one solution is to use a classical (deterministic) language together with a random procedure.In some sense, nondeterminism is achieved by relaxing deterministic behaviors.We have adopted an opposite solution, where nondeterminism is achieved by constraining chaotic behaviors; in other terms, the proposed language is mainly relational not functional.
In the language Lutin, nonpredictable atomic reactions are expressed as input/output relations.Those atomic reactions are combined using statements like sequence, loop, choice or parallel composition.Since simulation (execution) is the goal, the language also provides stochastic constructs to express that some scenarios are more interesting/realistic than others.
Since the first version [1], the language has evolved with the aim of being a user-friendly, powerful programming language.The basic statements (inspired by regular expressions) have been augmented with more sophisticated control structures (parallel composition, exceptions) and a functional abstraction has been introduced in order to provide modularity and reusability.
Related works
This work is related to synchronous programming languages [2,3].Some constructs of the language (traps and parallel composition) are directly inspired by the imperative synchronous language Esterel [4], while the relational part (constraints) is inspired by declarative languages like Lustre [5] and Signal [6].
Related works are abundant in the domain of models for nondeterministic (or stochastic) concurrent systems: Input/Output automata [7], and their stochastic extension [8] (stochastic extension of process algebra [9,10]).There are also relations with concurrent constraint programming [11], in particular, with works that adopt a synchronous approach of time and concurrency [12,13].However, the goals are rather different; our goal is to maintain an infinite interaction between constraints generators, while concurrent constraint programming aims at obtaining the solution of a complex problem in a (hopefully) finite number of interactions.
Moreover, a general characteristic of these models is that they are defined to perform analysis of stochastic dynamic systems (e.g., model checking, probabilistic analysis).On the contrary, Lutin is specifically designed for simulation rather than general analysis.On one hand, the language allows to concisely describe, and then execute a large class of scenarios.On the other hand, it is in general impossible to decide if a particular behavior can be generated and even less with which probability.
Plan
The article starts with an informal presentation of the language.Then, the operational semantics is formally defined in terms of constraints generator.Some important aspects, in particular constraints solving, are parameters of this formal semantics; they can be adapted to favor the efficiency or the expressive power.These aspects are presented in the implementation section.Finally, we conclude by giving some possible extensions of this work.
Reactive, synchronous systems
The language is devoted to the description of nondeterministic reactive systems.Those systems have a cyclic behavior; they react to input values by producing output values and updating their internal state.We adopt the synchronous approach, which here simply means that the execution is viewed as a sequence of pairs "input values/output values." Such a system is declared with its input and output variables; they are called the support variables of the system.
Example 1.We illustrate the language with a simple "tracker" program that receives a boolean input (c) and a real input (t) and produces a real output (x).The high-level specification of the tracker is that the output x should get closer to the input t when the command c is true or should tend to zero otherwise.The header of the program is system tracker (c: bool; t: real) returns (x: real) = statement. ( The core of the program consists of a statement describing the program behavior.The definition of statement is developed later.
During the execution, inputs are provided by the system environment; they are called uncontrollable variables.The program reacts by producing outputs; they are called controllable variables .
Variables, reactions, and traces
The core of the system is a statement describing a sequence of atomic reactions.
In Lutin, a reaction is not deterministic; it does not define uniquely the output values, but states some constraints on these values.For instance, the constraint ((x > 0.0) and (x < 10.0)) states that the current output should be some value comprised between 0 and 10.
Constraints may involve inputs, for instance, ((x > t − 2.0) and (x < t)).In this case, during the execution, the actual value of t is substituted, and the resulting constraint is solved.
In order to express temporal constraints, previous values can be used; pre id denotes the value of the variable id at the previous reaction.For instance, (x > pre x) states that x must increase in the current reaction.Like inputs, pre variables are uncontrollable; during the execution, their values are inherited from the past and cannot be changedthis is the nonbacktracking principle.
Performing a reaction consists in producing, if it exists, a particular solution of the constraint.Such a solution may not exist.
Example 2. Consider the constraint (c and (x > 0.0) and (x < pre x + 10.0)), where c (input) and pre x (past value) are uncontrollable.
During the execution, it may appear that c is false and/or that pre x is less than −10.0.In those cases, the constraint is unsatisfiable; we say that the constraint deadlocks.
Local variables may be useful auxiliaries for expressing complex constraints.They can be declared within a program: local ident : type in statement. ( A local variable behaves as a hidden output; it is controllable and must be produced as long as the execution remains in its scope.
Composing reactions
A constraint (Boolean expression) represents an atomic reaction; it defines relations between the current values of the variables.Scenarios are built by combining such atomic reactions with temporal statements.We introduce the type trace for typing expressions made of temporal statements.A single constraint obviously denotes a trace of length 1; in other terms, expressions of type bool are implicitly cast to type trace when combined with temporal operators.
The basic trace statements are inspired by regular expression, and have following signatures: Using regular expressions makes the notion of sequence quite different from the one of Esterel, which is certainly the reference in control-flow oriented synchronous language [4].In Esterel, the sequence (semicolon) is instantaneous, while the Lutin construct fby "takes" one instant of time, just like in classical regular expressions.
Example 3.With those operators, we can propose a first version of our example.In this version, the output tends to 0 or taccording to a first-order filter.The nondeterminism resides in the initial value, and also in the fact that the system is subject to failure and may miss the c command.
((−100.0 < x) and (x < 100.0)) fby-initial constraint loop{ (c and (x = 0.9 * (pre x) + 0.1 * t))-x gets closer to t | ((x = 0.9 * (pre x))-x gets closer to 0 Initially, the value of x is (randomly) chosen between −100 and +100, then forever, it may tend to t or to 0. Note that, inside the loop, the first constraint (x tends to t) is not satisfiable unlessc is true, while the second is always satisfiable.If c is false, the first constraint deadlocks.In this case, the second branch (x gets closer to 0) is necessarily taken.If c is true, both branches are feasible: one is randomly selected and the corresponding constraint is solved.
This illustrates an important principle of the language, the reactivity, principle, which states that a program may only deadlock, if all its possible behaviors deadlock.
Traces, termination, and deadlocks
Because of nondeterminism, a behavior has in general several possible first reactions (constraints).According to the reactivity principle, it deadlocks only if all those constraints are not satisfiable.If at least one reaction is satisfiable, it must "do something;" we say that it is startable.
Termination, startability, and deadlocks are important concepts of the language; here is a more precise definition of the basic statements according to these concepts.
(i) A constraint c, if it is satisfiable, generates a particular solution and terminates, otherwise it deadlocks.
(ii) st1 fby st2 executes st1, and if and when it terminates, it executes st2.If st1 deadlocks, the whole statement deadlocks.(iii) Loop st, if st is startable, behaves as st fby loop st, otherwise it terminates.Indeed, once started, st fby loop st may deadlock if the first st, and so on.Intuitively, the meaning is "loop as long as starting a step is possible."(iv) the execution of st1 (not only at the first step).In case of deadlock, the control passes to st2.
Well-founded loops
Let us denote by ε the identity element for fby (i.e., the unique behavior such that bfby ε = ε fby b = b).Although this "empty" behavior is not provided by the language, it is helpful for illustrating a problem raised by nested loops.As a matter of fact, the simplest way to define the loop is to state that "loop c" is equivalent to "c fby loop c | > ε", that is, try in priority to perform one iteration and if it fails, stop.According to this definition, nested loops may generate infinite and instantaneous loops, as shown in the following example.
Example 4.
loop{loop c}. ( Performing an iteration of the outer loop consists in executing the inner loop{loop c}.If c is not currently satisfiable, loop c terminates immediately and thus, the iteration is actually "empty"-it generates no reaction.However, since it is not a deadlock, this strange behavior is considered by the outer loop as a normal iteration.As a consequence, another iteration is performed, which is also empty, and so on.The outer loop keeps the control forever but does nothing. One solution is to reject such programs.Statically checking whether a program will infinitely loop or not is undecidable, it may depend on arbitrarily complex conditions.Some over-approximation is necessary, which will (hopefully) reject all the incorrect programs, but also lots of correct ones.For instance, a program as simple as "loop {{loop a} fby {loop b}}" will certainly be rejected as potentially incorrect.
We think that such a solution is too much restrictive and tedious for the user and we prefer to slightly modify the semantics of the loop.The solution retained is to introduce the well-founded loop principle; a loop statement may stop or continue, but if it continues it must do something.In other terms, empty iterations are dynamically forbidden.
The simplest way to explain this principle is to introduce an auxiliary operator st \ ε .If st terminates immediately, st \ ε deadlocks, otherwise it behaves as st.The correct definition ofloop st follows: (i) if st \ ε is startable, it behaves as st \ ε fby loop st, (ii) otherwise loop st terminates.
Influencing non-determinism
When executing a nondeterministic statement, the problem of which choice should be preferred arises.The solution retained is that, if k out of the n choices are startable, each of them is chosen with a probability 1/k.
In order to influence this choice, the language provides the concept of relative weights: Weights are basically integer constants and their interpretation is straightforward.A branch with a weight 2 has twice the chance to be tried than a branch with weight 1.More generally, a weight can depend on the environment and on the past; it is given as an integer expression depending on uncontrollable variables.In this case, weight expressions are evaluated at runtime before performing the choice.
Example 5.In a first version (Example 3), our example system may ignore the command c with a probability 1/2.This case can be made less probable by using weights (when omitted, a weight is implicitly 1): loop{ (c and (x = 0.9 * (pre x) + 0.1 * t)) weight 9 | ((x = 0.9 * (pre x)) } (7) In this new version, a true occurrence ofc is missed with the probability 1/10.Note that, weights are not only directives.Even with a big weight, a nonstartable branch has a null probability to be chosen, which is the case in the example whencis false.
Random loops
We want to define some loop structure, where the number of iterations is not fully determined by deadlocks.Such a construct can be based on weighted choices, since a loop is nothing but a binary choice between stopping and continuing.However, it seems more natural to define it in terms of expected number of iterations.Two loop "profiles" are provided as follows.
(i) loop[min, max]: the number of iterations should be between the constants min and max.(ii) loop ∼ av : sd : the average number of iteration should be av, with a standard deviation sd.
Note that random loops, just like other nondeterministic choices, follow the reactivity principle; depending on deadlocks, looping may sometimes be required or impossible.As a consequence, during an execution, the actual number of iterations may significantly differ from the "expected" one (see Sections 4 and 5.3).Moreover, just like the basic loop, they follow the wellfounded loop principle, which means that, even if the core contains nested loops, it is impossible to perform "empty" iterations.
Parallel composition
The parallel composition of Lutin is synchronous; each branch produces, at the same time, its local constraints.The global reaction must satisfy the conjunction of all those local constraints.This approach is similar to the one of temporal concurrent constraint programming [12].
A parallel composition may deadlock for the following two reasons.
(i) Obviously, if one or more branches deadlock, the whole statement aborts.(ii) It may also appear that each individual statement has one or more possible behaviours, but that none of the conjunctions are satisfiable, in which case the whole statement aborts.
If no deadlock occurs, the concurrent execution terminates, if and when all the branches have terminated (just like in the Esterel Language).
One can perform a parallel composition of several statements as follows: The concrete syntax suggests a noncommutative operator; this choice is explained in the next section.
Parallel composition versus stochastic directives
It is impossible to define a parallel composition which is fair according to the stochastic directives (weights), as illustrated in the following example.
The higher priority can be given to (i) X ∧ B, but it would not respect the stochastic directive of the second branch; (ii) A ∧ Y , but it would not respect the stochastic directive of the first branch; In order to deal with this issue, the stochastic directives are not treated in parallel, but in sequence, from left to right.
(i) The first branch "plays" first, according to its local stochastic directives.(ii) The next ones make their choice according to what has been chosen for the previous ones.
In the example, the priority is then given to X ∧ B.
Pascal Raymond et al.
5
The concrete syntax (& >) has been chosen to reflect the fact that the operation is not commutative.The treatment is parallel for the constraints (conjunction), but sequential for stochastic directives (weights).
Exceptions
User-defined exceptions are mainly means for by-passing the normal control flow.They are inspired by exceptions in classical languages (Ocaml, Java, Ada) and also by the trap signals of Esterel.
Exceptions can be globally declared outside a system (exception ident) or locally within a statement, in which case the standard binding rules hold exception ident in st. ( An existing exception ident can be raised with the statement: and caught with the statement: If the exception is raised in st1, the control immediately passes to st2.The do part may be omitted, in which case the control passes in sequence.
Modularity
An important point is that the notion of system is not a sufficient modular abstraction.In some sense, systems are similar to main programs in classical languages.They are entry point for the execution but are not suitable for defining "pieces" of behaviors.
Data combinators
A good modular abstraction would be one that allows to enrich the set of combinators.Allowing the definition of data combinators is achieved by providing a functional-like level in the language.For instance, one can program the useful "within an interval" constraint; let within (x, min, max : real) : bool = (x >= min) and (x <= max).
Once defined, this combinator can be instantiated, for instance, within (a, 0.8, 0.9) (14) or Note that, such a combinator is definitively not a function in the sense of computer science-it actually computes nothing.It is rather a well-typed macro defining how to build a Boolean expression with three real expressions.
Reference arguments
Some combinators specifically require support variables as argument (input, output, local).This is the case for the operator pre, and as a consequence, for any combinator using a pre .This situation is very similar to the distinction between "by reference" and "by value" parameters in imperative languages.Therefore, we solve the problem in a similar manner by adding the flag ref to the type of such parameters.(
Local combinators
A macro can be declared within a statement, in which case the usual binding rules hold; in particular, a combinator may have no parameter at all; Example 9. We can now write more elaborated scenarios for the system of Example 3.For the very first reaction (line 2), the output is randomly chosen between −100 and +100, then the system enters its standard behavior (lines 3 to 14).A local variablea is declared, which will be used to store the current gain (line 3).An intermediate behavior (lines 4 to 6) is declared, which defines how the gain evolves; it is randomly chosen between 0.8 and 0.9, then it remains constant during 30 to 40 steps, and so on.Note that, this combinator has no parameter since it directly refers to the variable a. Lines 7 to 14 define the actual behavior; the userdefined combinator as long as runs in parallel the behavior gen gain (line 8) with the normal behavior (9 to 11).In the normal behavior, the system works almost properly for about 1000 reactions; if c is true, x tends to t 9 times out of 10 (line 10), otherwise it tends to 0 (line 11).As soon as the normal behavior terminates, the whole parallel composition terminates (definition of as long as).Then, the system breaks down andx quickly tends to 0 (line 14).
Figure 2 shows the timing diagram, a particular execution of this program.Input values are provided by the environment (i.e., us) according to the following specification, the input t remains constant (150) and the command c toggles each about 100 steps.
SYNTAX SUMMARY
Figure 3 summarizes the concrete syntax of Lutin.The detailed syntax for expression is omitted.They are made of classical algebraic expressions with numerical and logical operators, plus the special operator pre.The supported type identifiers are currently bool, int, and real.
We do not present the details of the type checking, which is classical and straightforward.The only original check concerns the arguments of the loop profiles and of the weight directive, that must be uncontrollable expressions (not depending on output or local variables).
Abstract syntax
We consider here a type checked Lutin program.For the sake of simplicity, the semantics is given on the flat language.User-defined macros are inlined, and local variables are made global through some correct renaming of identifiers.As a consequence, an abstract system is simply a collection of variables (inputs, outputs, and locals) and a single abstract statement.
We use the following abstract syntax for statements, where the intuitive meaning of each construct is given between parenthesis:
|& n i=1 t i (parallel). ( This abstract syntax slightly differs from the concrete one on the following points. (i) The empty behavior (ε) and the empty behavior filter (t \ ε) are internal constructs that will ease the definition of the semantics.(ii) Random loops are normalized by making explicit their weight functions: (a) the stop function ω s takes the number of already performed iterations and returns the relative weight of the "stop" choice; (b) the continue function ω c takes the number of already performed iterations and returns the relative weight of the "continue" choice.
These functions are completely determined by the loop profile in the concrete program (interval or average, together with the corresponding static arguments).See Section 5.3 for a precise definition of these weight functions.(iii) The number of already performed iterations (k) is syntactically attached to the loop; this is convenient to define the semantics in terms of rewriting (in the initial program, this number is obviously set to 0).Definition 1. T denotes the set of trace expressions (as defined above) and C denotes the set of constraints.
The execution environment
The execution takes place within an environment which stores the variable values (inputs and memories).Constraint resolution, weight evaluation, and random selection are also performed by the environment.We keep this environment abstract.As a matter of fact, resolution capabilities and (pseudo)random generation may vary from one implementation to another and they are not part of the reference semantics.
The semantics is given in term of constraints generator.In order to generate constraints, the environment should provide the two following procedures.
Satisfiability
the predicate e |= c is true if and only if the constraint c is satisfiable in the environment e.
Priority sort
Executing choices first requires to evaluate the weights in the environment.This is possible (and straightforward) because weights may dynamically depends on uncontrollable variables (memories, inputs), but not on controllable variables (outputs, locals).Some weights may be evaluated to 0, in which case the corresponding choice is forbidden.Then a random selection is made, according to the actual weights, to determine a total order between the choices.
For instance, consider the following list of pairs (trace/ weight), where x and y are uncontrollable variables, (t 1 /x + y), (t 2 /1), (t 3 /y), (t 4 /2). (20) In an environment, where x = 3 and y = 0, weights are evaluated to The choice t 3 is erased and the remaining choices are randomly sorted according to their weights.The resulting (total) order may be All these treatments are "hidden" within the function Sort e which takes a list of pairs (choice/weights) and returns a totally ordered list of choices.
An execution step is performed by the function
Step(e, t) taking an environment e and a trace expression t.It returns an action which is either (i) a transition c → n, which means that t produces a satisfiable constraint c and rewrite itself in the (next) trace n, (ii) a termination x →, where x is a termination flag which is either ε (normal termination), δ (deadlock) or some user-defined exception.Definition 2. A denotes the set of actions and X denotes the set of termination flags.
The step function
Step(e, t) is defined via a recursive function S e (t, g, s), where the parameters g and s are continuation functions returning actions.
(i) g : C ×T → A is the goto function defining how a local transition should be treated according to the calling context.(ii) s : X → A is the stop function defining how a local termination should be treated according to the calling context.At the top-level, S e is called with the trivial continuations, Step (e, t) = S e (t, g, s)
Basic traces
The empty behavior raises the termination flag in the current context.A raise statement terminates with the corresponding flag.At last, a constraint generates a goto or raises a deadlock depending on its satisfiability;
Sequence
The rule is straightforward;
Priority choice
We only give the definition of the binary choice, since the operator is right-associative.This rule formalizes the reactivity principle.All possibilities in t must have failed before t is taken into account, where r = S e (t, g, s). (25)
Empty filter and priority loop
The empty filter intercepts the termination of t and replaces by a deadlock, S e (t \ ε, g, s) = S e (t, g, s ), s where s The semantics of the loop results from the equivalence
Catch
This internal operator ([n z → t]) covers the cases of try (z = δ) and catch (z is a user-defined exception) (28)
Parallel composition
We only give the definition of the binary case, since the operator is right-associative; where where
Weighted choice
The evaluation of the weights and the (random) total ordering of the branches are both performed by the function Sort e (cf., Section 4.2).
If Sort e t i /w i i=1
Random loop
We recall that this construct is labelled by two weight functions (ω c for continue, ω s for stop) and by the current number of already performed iterations (i).The weight functions are evaluated for i and the statement is then equivalent to a binary weighted choice, Note that, the semantics follows the well-founded loop principle.
Solving a constraint
The main role of the environment is to store the values of uncontrollable variables; it is a pair of stores "past values, input values."For such an environment e = (ρ, ι) and a satisfiable constraint c, we suppose given a procedure able to produce a particular solution of c : Solve ρ,ι (c) = γ (where γ is a store of controllable variables).We keep this Solve function abstract, since it may vary from one implementation to another (see Section 5).
At the end, we have (i) either k = n, which means that the execution has run to completion, (ii) or (ρ k+1 , ι k+1 ) : t k+1 x → which means that it has been aborted.
IMPLEMENTATION
A prototype has been developed in Ocaml.The constraint generator strictly implements the operational semantics presented in the previous section.The tool can do the following.
(i) Interpret/simulate Lutin programs in a file-to-file (or pipe-to-pipe) manner.This tool serves for simulation/prototyping; several Lutin simulation sessions can be combined with other reactive process in order to animate a complex system.(ii) Compile Lutin programs into the internal format of the testing tool Lurette.This format, called Lucky, is based on flat, explicit automata [14].In this case, Lutin serves as a high-level language for designing test scenarios.
Notes on constraint solvers
The core semantics only defines how constraints are generated, but not how they are solved.This choice is motivated by the fact that there is no "ideal" solver.
A required characteristic of such a solver is that it must provide a constructive, complete decision procedure; methods that can fail and/or that are not able to exhibit a particular solution are clearly not suitable.Basically, a constraint solver should provide the following.
(i) A syntactic analyzer for checking if the constraints are supported by the solver (e.g., linear arithmetics); this is necessary because the language syntax allows to write arbitrary constraints.(ii) A decision procedure for the class of constraints accepted by the checker.(iii) A precise definition of the election procedure which selects a particular solution (e.g., in terms of fairness).
Even with those restrictions, there is no obvious best solver as follows.(i) It may be efficient, but limited in terms of capabilities.(ii) It may be powerful, but likely to be very costly in terms of time and memory.
The idea is that the user may choose between several solvers (or several options of a same solver) the one which best fits his needs.The solver that is currently used is presented in the next section.
The Boolean/linear constraint solver
Actually, we use the solver [15] that have been developed for the testing tool Lurette [16,17].This solver is quite powerful, since it covers Boolean algebra and linear arithmetics.Concretely, constraints are solved by generating a normalized representation mixing binary decision diagrams and convex polyhedra.This constraint solver is sketched below and fully described in [15] First of all, each atomic numeric constraint (e.g., x + y > 1) is replaced by a fresh Boolean variable.Then, the resulting constraint is translated into a BDD. Figure 4 shows a graphical representation of a BDD; then (resp., else) branches are represented at the left-hand-side (resp., righthand-side) of the tree.This BDD contains 3 paths to the true leaf: ade, abce, and abd.When we say that the monomial (conjunction of literals) abce is a solution of the formula.It means that variables a and e should be false; variables b and c should be true; and variable d can be either true or false.The monomial abce, therefore, represents two solutions, whereas ade and abd represents 4 solutions each, since 2 variables are left unconstrained.
In Figure 4 and in the following, for the sake of simplicity, we draw trees instead of DAGs.The key reason why BDDs work well in practice is that in their implementations, common subtrees are shared.For example, only one node "true" would be necessary in that graph.Anyway, the algorithms work on DAGs the same way as they work on trees.
Random choice of Boolean values
The first step consists in selecting a Boolean solution.Once the constraint has been translated into a BDD, we have a (hopefully compact) representation of the set of solutions.
We first need to randomly choose a path into the BDD that leads to a true leaf.But if we naively performed a fair toss at each branch of the BDD during this traversal, we would be very unfair.Indeed, consider the BDD of Figure 4; the monomial ade has 50% of chances to be tried, whereas abce and abd have 25% each.One can easily imagine situation, where the situation is even worse.This is the reason why counting the solutions before drawing them is necessary.
Once each branch of the BDD is decorated with its solution number performing a fair choice among Boolean solutions is straightforward.
Random choice of numeric values
From the BDD point of view, numeric constraints are just Boolean variables.Therefore, we have to know if the obtained set of atomic numeric constraints is satisfiable.For that purpose, we use a convex polyhedron library [18].
However, a solution from the logical variables point of view may lead to an empty set of solutions for numeric variables .In order to chose a Boolean monomial that is valid with respect to numerics, a (inefficient) method would be to select at random a path in the BDD until that selection corresponds to a satisfiable problem for the numeric constraints.The actual algorithm is more sophisticated [15], but the resulting solution is the same.
When there are solutions to the set of numeric constraints, the convex polyhedron library returns a set of generators (the vertices of the polyhedron representing the set of solutions).Using those generators, it is quite easy to choose point inside (or more interestingly, at edges or at vertices) the polyhedron.
Using polyhedra is very powerful, but also very costly.However the solver benefits from several years of experimentation and optimizations (partitioning, switch from polyhedra to intervals, whenever it is possible).
Notes on predefined loop profiles
In the operational semantics, loops with iteration profile are translated into binary weighted choices.Those weights are dynamic; they depend on the number of (already) performed iterations k.
Interval loops
For the "interval" profile, those weights functions are formally defined and thus, they could take place in the reference semantics of the language.For a given pair of integers (min, max) such that 0 ≤ min ≤ max and a number k of already performed iterations, we have the following: (i) if k < min, then ω s (k) = 0 and ω c (k) = 1 (loop is mandatory); (ii) if k ≥ max, then ω s (k) = 1 and ω c (k) = 0 (stop is mandatory); (iii) if min ≤ k < max, then ω s (k) = 1 and ω c (k) = 1 + max −k.
Average loops
There is no obvious solution for implementing the "average" profile in terms of weightss.A more or less sophisticated (and accurate) solution should be retained, depending on the expected precision.
In the actual implementation, for an average value av and a standard variation sv, we use a relatively simple approximation as follows.
(i) First of all, the underlying discrete repartition law is approximated by a continuous (Gaussian) law.As a consequence, the result will not be accurate if av is too close to 0 and/or if st is too big comparing to av.
Concretely, we must have 10 < 4 * sv < av.(ii) It is well known that no algebraic definition for the Gaussian repartition function exists.This function is then classically approximated by using an interpolation table (512 samples with a fixed precision of 4 digits).
CONCLUSION
We propose a language for describing constrained-random reactive systems.Its first purpose is to describe test scenarios, but it may also be useful for prototyping and simulation.
We have developed a compiler/interpreter which strictly implements the operational semantics presented here.Thanks to this tool, the language is integrated into the framework of the Lurette tool, where it is used to describe test scenarios.Further works concerns the integration of the language within a more general prototyping framework.
Other works concern the evolution of the language.We plan to introduce a notion of signal (i.e., event) which is useful for describing values that are not always available (this is related to the notion of clocks in synchronous languages).We also plan to allow the definition of (mutually) tail-recursive traces.Concretely, that means that a new programming style would be allowed, based on explicit concurrent, hierarchic automsata.
Example 7 .Example 8 .
The following combinator defines the generic first-order filter constraint.The parameter y must be a real support variable (real ref) since its previous value is required.The other parameters can be any expressions of type real.Let fof (y : real; gain, x : real) : bool = (y = gain * (pre y)+ (1.0 − gain) * x).(16)Trace combinatorsUser-defined temporal combinators are simply macros of type trace.The following combinator is a binary parallel composition, where the termination is enforced when the second argument terminates.Let, as long as, (X, Y : trace) : trace = exception Stop in catch Stop in{ X & > {Y fby raise Stop} }. | 8,317.4 | 2008-01-24T00:00:00.000 | [
"Computer Science"
] |
Performance analysis of all optical-based quantum internet circuits
A Quantum dot implanted within a double-sided optical microcavity is considered and investigated as a critical component for all optical-based quantum internets. Due to the duality as the photonic quantum transistor and gate, the QD cavity system repsents a sturdy base for the future photonic quantum network. With the help of the analytical investigation and review presented in this paper, a quantum dot cavity unit can be developed for implementing deterministic quantum gates and transistors. The maximum fidelity observed for quantum diode, router, and storage is 95.24, 62.06, and 90.42%without considering a noisy environment, respectively, and 62.12, 60.36, and 43.66% under a noisy environment, respectively. Fidelity is also calculated with varying coupling conditions (strong and weak coupling), and optimized cavity parameters are calculated.
Introduction
Quantum machines have an implausible computational perspective that can address many practical applications such as integer factorization, quantum cryptography, modeling of complex quantum structure, quantum teleportation, and database search (Grover's algorithm) [1]. The Quantum gate is the fundamental element for quantum computers. SingleQubit unitary gates combined withCNOT gate is the universal set of quantum algorithm realization [2].
Photons are advanced to other Qubit options (atoms, electrons, etc.), as they offer enormous benefits (decoherence and fastest speed). Well-developed optical components are on hand for processing photonic data. Photonic devices are compatible with well-developed semiconductor technologies [3]. In the literature, it has been discussed that single Qubit unitary operations can be implemented efficient way using linear optical elements. In 1998 Cerf et al. demonstrated the design of a CNOT gate using optical components such as phase shifters and beam splitters. In 2001 Grover's algorithm was implemented using this scheme. A downside of this technique is the exponential increase of the photonic components with the size of the system [4]. The main difficulty in designing a multi-Qubit quantum circuit is the noninteracting nature of photons.
Earlier it was believed that without using nonlinearities, it was not possible to design two Qubit quantum circuits. However, the Kerr-based two Qubit quantum circuits are unfeasible in a single-photon state. In literature, many approaches (2001( Knill et al., 2002 Scott Glancy et al.) for photon interaction have been introduced, but these are nondeterministic approaches. Knill et al. designed a CZ gate using this scheme with a 0.25 success probability [5].
In 2009 Hu et al. proposed a spin-dependent beam splitter using a QD cavity arrangement. The photon interaction problem had been addressed in a deterministic way which was the main challenge for a two-photon quantum gate design. It opens the door to deterministic photonic quantum circuit design employing linear optical elements [6]. 2009 onwards QD cavity system has been investigated for multi-Qubit photonic quantum gate implementation. Cristian Bonato et al. [7] proposed a QD cavity model for providing an interface between photon and the spin of excess electron of QD inside the microcavity under weak coupling conditions.
In 2013 Hong-Fu Wang et al. [8] proposed a circuit for the teleportation of a controlled-NOT gate (spin Qubit based) using a QD cavity unit. In 2013 Hong-Fu Wang et al. demonstrated a photonic CNOT gate using a QD cavity unit. This was the critical research for fully photonics-based quantum data processing systems [8]. In 2013Wei et al. investigated the prospect of realizing scalable photonic quantum computing using a QD cavity unit. In 2013 Wei et al. [9] presented electron-spin Qubit-based compact quantum circuits for deterministic quantum computing employing a QD cavity unit. In 2013 Hong-Fu Wang et al. [10] proposed a scheme for teleportation of CNOT gate using QD cavity for spin Qubits. In 2014 Wei et al. [11] demonstrated universal quantum gates based on spin Qubits using QD cavity units. Hu et al. [12] presented the saturation effects around the cavity resonance. Tao Li et al. [13] proposed Heralded quantum repeater protocol for a quantum network based on a QD cavity unit. JinoHeo et al. [14] proposed a scheme based on a QD cavity unit for simultaneously transferring and teleporting an unknown state (electron spin) between two users. This scheme is teleporting spin qubits by taking the help of photonic qubits.
Amor Gueddana et al. [15] proposed a QD cavity unitbased model to realize a photonic CNOT gate using quantum cloner. For this scheme, electron spin measurements and ancillary Qubits are not required. This is not heralded scheme, so practically; fidelity will increase. Min-Sung Kang et al. [16] proposed a deterministic Fredkin gate using a QD cavity unit, which can perform a controlled swap operation between three Qubits. The performance of the designed gate was calculated under noisy conditions and concluded that the designed gate can be implemented experimentally with high efficiency.
For implementing quantum internet, universal quantum gates, memory, and quantum switching (control the flow of data)are vital components. Universal quantum gates have been explored by researchers with high fidelity and efficiency, but still, quantum switching elements were not investigated. Hu [17] explained that an efficient photonic transistor can be designed using a spin cavity unit. It was a milestone in the roadmap of integrated photonic quantum internet. A quantum dot cavity unit has been demonstrated for the physical realization of these quantum components with high feasibility and efficiency. It needs more research to develop a QD cavity unit-based large-scale integrated photonic quantum network.
The work reported in this paper will help researchers to explore the field of all optical-based quantum communication and computing. The article is organized as follows. In Sect. 2 Quantum dot cavity system and interaction mechanism between photon and quantum dot spin are discussed. In Sect. 3 QD cavity unit is demonstrated as a photonic quantum switching component (diode and router) and quantum memory. The performance of quantum circuits is analytically investigated, and optimized cavity parameters are calculated. Finally, the paper is concluded.
QD cavity systems
In the process of gate functioning cavity works as cold and hot depending on control and target photonic Qubits. Left circularly polarized (LCP) and right circularly polarized (RCP) are used to define input photon as Qubit. The hot or cold cavity depends on the polarization of the input photon, the propagation direction of a photon inside the QD cavity unit, and the spin of the QD electron. The hot cavity or cold cavity is part of the process of designing a quantum gate using a QD cavity unit. It's not possible to design a quantum gate using only one cavity case (hot or cold cavity). For designing a quantum gate, the QD cavity should work in both hot and cold cavity mode depending on the input photonic Qubits. The quantum dot cavity unit is depicted in Fig. 1 [8].
The optical dipole transition (Energy level diagram) of a QD spin and photon interaction is depicted in Fig. 2 .
When s z = +1 R ↑ and L ↓ photon interacts with QD which is in up spin state (|↑> ), the cavity works as a hot cavity. In hot cavity condition, Qubit couples with cavity and both direction of propagation and polarization of photon change. When s z = −1 R ↓ and L ↑ photon, interact with QD which is in the down spin state (|↓> ), the cavity works as the cold cavity. In the cold cavity condition, Qubit does not couple with cavity and only 180 • phaseshift is added to Qubit. For hot cavity regime ( g ≠ 0 ), interaction between the QD spin and photon within a practical cavity arrangement can be articulated as Eqs. (1-4) [8]: Sideband leakage ( S( ) ), Transmittance ( T( ) ), noise ( N( ))and reflectance ( R( ) ) factors for hot cavity ( g ≠ 0 ) condition are expressed by Eqs. (9)(10)(11)(12) [8]. Hot cavity case can be understood from Eq. (1) when R photon propagating in up direction inside QD cavity ( R ↑ ) interacts with QD excess electron spin in upstate ( ↑ ). Ideally in hot cavity case R( ) = 1 andT( ) = 0 . After interaction under ideal condition R photon will become L photon and propagation direction will be downwards inside QD cavity. But in practical condition R( ) ≠ 1 andT( ) ≠ 0 . So some portion of photon will transmit through cavity without interaction ( |R ↑ , ↑> ) and some portion will reflect with change in polarisation and direction of propagation ( |L ↓ , ↑> ). Similarly, interaction dynamics for the cold cavity regime ( g = 0 ) are described by Eqs. (5-8) [8]: Cold cavity case can be understood from Eq. (5), when L photon propagating downside in QD cavity ( L ↓ ) interacts with QD excess electron spin in down state ( ↓ ). Ideally T 0 ( ) = 1 and R 0 ( ) = 0 . Under ideal condition L photon will transmit through cavity without change in polarization and direction of propagation. But in practical condition T 0 ( ) ≠ 1 and R 0 ( ) ≠ 0 , so some portion of input photon will reflect ( |R ↑ , ↓> ) and some portion will transmit through cavity ( |L ↓ , ↓>).
Where c , X − and are the frequencies of cavity, transition and the incoming photon respectively.k s , k and g are the side leakage rate of the cavity, field decay rate and the coupling strength respectively. γ/2 is QD dipole decay rate. Similarly, sideband leakage ( S 0 ( ) ), transmittance T 0 ( ) , noise N 0 ( ) and reflectance R 0 ( ) factors for cold cavity ( g = 0 ) conditions can be expressed [8]. If we consider a noisy environment, then transmission and reflection factorsr 0 ,t 0 ,r , and t are given by Eqs. (13)(14)(15)(16) Without noisy environment reflection and transmission factors, r 0 ,t 0 ,r , and t are the same asR 0 ( ) , T 0 ( ) , R( ) , and T( ) . Quantum dot cavity arrangement can be explored to implement photonic quantum switches, routers, and DRAM.
Transmission and reflection coefficient are plotted with detuning and different coupling conditions as shown in Figs. 3 and 4. Dotted and solid lines are corresponding to cold and hot cavity conditions and colors (red, blue and black) are for different coupling conditions. The amplitude and phase of transmission and reflection coefficient are depending on frequency of input photonic Qubit. At ( − c ) = 0 , it can be observed that for hot cavity, reflection coefficient is maximum and for cold cavity, reflection coefficient is minimum. The requirement for hot cavity conditions is that the photon should interact with QD excess electron and reflect. Similarly at zero detuning, transmission coefficient for cold cavity is maximum and for hot cavity transmission coefficient is minimum. The requirement of cold cavity is that the photon should pass through QD cavity without any interaction.Transmission and reflection coefficients are also depending on coupling strength (black, blue and red lines are corresponding to different coupling conditions). so for correct functioning of QD cavity unit-based quantum circuits, coupling strength and detuning should be chosen in such a way that under cold cavity transmission coefficient should be maximum and under hot cavity condition, reflection coefficient should be maximum.
Similarly, noise and side band leakage factors are plotted in Figs. 5 and 6 with detuning and different coupling conditions. Noise is zero under cold cavity condition, so noise factor plots are only for hot cavity condition. At zero detuning side band leakage is minimum for hot cavity and maximum for cold cavity conditions. Noise and sideband leakage should be as less as possible for better functioning of quantum circuit. So detuning and coupling should be chosen in such a way that for both under hot and cold cavity cases these factors (noise and side band leakage) should be minimum. It is noted from Figs. 3, 4, 5 and 6 that these coefficients are strongly interrelated withcoupling and detuning. Thus the performance of the QD cavity-based quantum circuits depends on cavity parameters (noise, coupling strength, sideband leakage rate, and cavity mode decay rate) and input Qubits. To measure the performance of the quantum circuit, the average of fidelity (AOF) can be expressed as Eq. (17) [18]. The average of fidelity is a parameter that defines the closeness of two quantum states. The fidelity is used to decide how noisy a quantum circuit is.
The output of the ideal quantum circuit is |ψ s > , and ψ t is the final state of the quantum circuit under practical conditions. The final state |ψ t > is found analytically using the interaction mechanism of a photon and a quantum dot inside a double-sided optical cavity.
QD cavity systems as quantum Diode
The diode is the basic component for quantum switching. According to Pauli's exclusion principle (spin selection rule), the transmission or reflection of a photon depends on the spin of QD excess electron in the quantum dot cavity unit. A photonic diode can be implemented by using a QD cavity unit, as shown in Fig. 7. According to the spin selection rule of a QD cavity unit, the right circularly photon will transmit through the cavity if QD spin is up and reflect from the cavity if QD spin is down. Similarly, it can be understood for the left circularly polarized photon. Figure 7is the case when the diode is on(the device will pass photon. So by controlling the spin of QD excess electron, the diode can be set ON or OFF for incoming photons [17]. If (cosθ 1 |R > + sinθ 1 |L >) is applied to the quantum diode circuit as shown in Fig. 7. The final output state is found using Eqs. (5)(6)(7)(8) and expressed as The Fidelity of the quantum diode has been calculated using Eqs. (17 and 18) and plotted with varying cavity parameters, as shown in Fig. 8, with and without considering noisy conditions. Fidelity is strongly dependent on the sideband leakage rate. With increasing sideband, leakage factor fidelity will decrease. Fidelity also depends on noise. The maximum fidelity achieved is62.12 at g∕k = 4 and k s ∕k = 0.1 with a noisy environment and 95.24 at g∕k = 4 (18) | t >= t 0 cos 1 |R > +t 0 sin 1 |L > and k s ∕k = 0.1 without a noisy environment, respectively. Fidelity is not much dependent on coupling conditions (strong and weak coupling regime; instead, it depends on sideband leakage.
QD cavity systems as a quantum router
Quantum routers control the flow of information (photonic Qubits) on a quantum network. Quantum routers can be implemented using the QD cavity unit, as shown in Fig. 9a. When |ψ >= δ|R > +γ|L > signal is applied at the input port of the QD cavity unit, the R photon will transmit through the cavity, and the L photon will reflect from the cavity. L photon is passed through switches and applied to c-PBS (circularly polarizing beam splitter), and for synchronizing, L photon first passes through a delay line (D) and then applied to c-PBS. C-PBS is designed in such a way that it passes the L photon and reflects the R photon. So the output of the system is provided at port J. Similarly system designed in Fig. 9b can be explained, and the output is provided at port K. So, the input signal can be routed at port J or port K by controlling the spin of the quantum dot.
If (cosθ 1 |R > + sinθ 1 |L >) is applied to the quantum router circuit as shown in Fig. 9a. The final output state is found at port J using Eq. (5-8) and expressed as The Fidelity of the quantum router has been calculated using Eqs. (17 and 19) and plotted with varying cavity 6 Noise N(ω) v/s detuning ( − c )∕k Fig. 7 Photonic Diode implemented using QD cavity (c-PBS is circularly polarizing beam splitter, it passes R photon and reflects L photon) parameters, as shown in Fig. 10, with and without considering noisy conditions. It is observed from Fig. 10 that fidelity is strongly correlated with coupling and leakage rates. The computed maximum fidelity of the photonic quantum router is 60.36 at g∕k = 0.3 and k s ∕k = 0.1 withnoisy conditions and 62.06 at g∕k = 4 and k s ∕k = 0.1 without noisy conditions, respectively. Maximum fidelity is achieved in strong coupling regime without noisy conditions and in a weak coupling condition under noisy environment.
QD cavity systems as quantum memory
Quantum memories are used to store quantum information (photonic Qubit). Quantum memories can be realized using a QDcavity unit, as shown in Fig. 11. Initially, if the L photon is applied to the QD cavity unit, and the spin of the QD excess electron is set to down (↓)spin. So, according to paulli's exclusion, the photon doesn't couple with the cavity (cold cavity), and it is transmitted in the second cavity (downside mirror of the second cavity is fully reflective). Then spin of QD excess electron is changed to up (↑) spin. Now cavity acts as a hot cavity and reflects the photon. Photon resonates and stored in the second cavity until QD electron spin is changed. If QD spin is changed to upstate QD cavity unit acts as cold and transmits the photon, which can be read. Similarly, it can be explained for R photon also.
If (cosθ1|R > + sinθ1|L >) is applied to the quantum memory circuit, as shown in Fig. 11. R photon is reflected and L photon is transmitted from c-PBS. R photon will interact with first QD cavity unit and L photon interacts with second QD cavity unit. To achieve the storage process, quantum dot spin is changed so the cavity will act as a hot cavity, and photons will be reflected back in the lower cavity. If we consider one-time reflection or hot cavity condition. So if storage time increases, more reflection coefficients occur in Photonic Router realized using QD cavity [17] the final state. Now for the read operation again, quantum dot spin is changed, so the quantum dot cavity unit acts as a cold cavity, and the photon is available for reading. The final output state is found using Eq. (5-8) and expressed as It is noted from Fig. 12 that the maximum fidelity achieved for the memory is 90.42 at g∕k = 4 and k s ∕k = 0.1 and 43.66 at g∕k = 0.5 and k s ∕k = 0.1 with and without noisy conditions, respectively. Maximum fidelity is achieved in a strong coupling regime in both cases. Maximum fidelity is greatly affected by sideband leakage and noise. Read process fidelity is also depending on storage time. If information is read after significant time, fidelity of the reading process will decrease.
Conclusion
This paper analytically investigates the photonic quantum diode, router, and memory designed using QD cavity system and linear optics. The performance parameter (fidelity) is analytically calculated. It has been noted that fidelity strongly depends on the coupling regime (strong and (20) | t >= t 0 t 0 rcos 1 |R > +t 0 t 0 rsin 1 |L > Fig. 10 Fidelity versus g and k S of a quantum router a without noisy environment b with the noisy environment Fig. 11 Photonic Memory implemented using QD cavity (R right circular polarized photon, the top cavity is first cavity and bottom is the second cavity) [17] Fig. 12 Fidelity versus g and k S of a quantum memory read a without noisy environment b with the noisy environment weak). The performance of quantum switching circuits is greatly affected by quantum noise and sideband leakage. The optimum performance parameter of QD cavity-based photonic quantum circuits can be found using the analytical investigation presented in this paper. The Quantum cloner model can be used to further improve the performance of quantum circuits. Physically scalable multi-Qubit quantum circuit design, Quantum Simulators, quantum algorithms to advance state-of-the-art quantum computers, and quantum error correction codes are some areas that can be explored to implement quantum internet.
Funding None.
Data availability Enquiries about data availability should be directed to the authors. | 4,744.6 | 2022-08-09T00:00:00.000 | [
"Physics",
"Engineering",
"Computer Science"
] |
Enthalpies of Formation of L 12 Intermetallics Derived from Heats of Reordering
A new method is proposed for estimating the enthalpies of formation of L12 (fcc-ordered) intermetallics from the heat release measured during ordering of their disordered polymorphs. The method is applied to Cu3Au, Ni3Al, and Ni3Si. The resulting estimates of enthalpies of formation are close to values obtained by high temperature dissolution calorimetry. They also appear to be more precise than estimates based on Miedema’s correlations provided that care is taken to account properly for the magnetic and lattice stability contributions to the formation enthalpies in the ordered and disordered states. [S0031-9007(97)03482-0]
The stability of various phases depends on their thermodynamic potentials such as the Gibbs free energy G which depends on concentration and external variables such as temperature.At constant pressure, the molar free energy of a chemically ordered or disordered alloy structure is determined by the enthalpy DH formation and the entropy DS formation change that accompanies its formation from the pure constituents.While the formation entropy DS formation can often be approximated by the well-known configurational entropy associated with the combinatorics of arranging atoms on the lattice sites of the chosen structure with a given state of chemical order [1], smaller contributions such as the vibrational entropy and the magnetic entropy also depend on the state of chemical order.For example, it has been found [2] that for Ni 3 Al, the vibrational entropy difference between the disordered-fcc and ordered-fcc ͑L1 2 ͒ states is of the order of a third of the configurational entropy.
In the present work, we are concerned with the determination of the enthalpy of formation DH formation of ordered intermetallics, which is difficult to obtain and is usually measured by high temperature dissolution calorimetry.However, for most intermetallics such data are not available.In the absence of such data, a quick estimate of DH formation can be obtained using the correlations of Miedema [3].We will show that for certain ordered intermetallics, DH for mation can be estimated with good accuracy, from the easily measurable enthalpy difference DH ordering between the disordered and ordered states.
The thermodynamic modeling of ordered intermetallic phases is usually performed using a sublattice model and has a long history [4,5].It has been more recently described by the calorimetry and phase diagrams (CAL-PHAD) method [6,7].For a binary ordered alloy of the type ͑A y 0 B 12y 0 ͒ p͑A y 00 B 12y 00 ͒ q with the first sublattice preferentially occupied by A atoms and the second preferentially occupied by B atoms, the enthalpy per mole of the phase is usually written as where the first term refers to the enthalpy of formation of the stoichiometric (perfectly ordered) state.For site fractions y 0 A and y 00 A of A atoms and y 0 B and y 00 B of B atoms on the two sublattices, where ± H ApAq and ± H BpBq represent the enthalpies of the constituent elements A and B in the same crystal structure and ± H ApBq and ± H BpAq the enthalpies of the stoichiometric compounds A p B q and B p A q , and we assumed that y 0 A 1 y 0 B y 00 A 1 y 00 B 1 (no vacancies).The second term, H ex , expresses enthalpy changes due to deviations from stoichiometry and is written as where L i,j:i and L i:i,j are interaction parameters between atoms on a sublattice for a given site-occupancy of the other.However, in fully ordered stoichiometric compositions with no antisite defects (A and B atoms on their respective sublattices only), y 0 A y 00 B 1 and y 0 B y 00 A 0, and so is H ex of Eq. ( 2), as expected.Furthermore, H ref becomes just the enthalpy of formation ± H ApBq of the stoichiometric compound, which is usually measured by dissolution calorimetry.
In what follows we will show that in the particular case of stoichiometric binary phases in which all the nearest neighbors of one of the two sublattices (say the q sublattice) are on the other sublattice, the enthalpy of formation of the intermetallic phase ± H ApBq can be obtained from the "enthalpy of ordering" of its disordered state.This procedure, which to our knowledge has not been previously used, is of practical importance because it allows ordering enthalpies to be obtained easily from differential scanning calorimetric analysis instead of more difficult methods such as dissolution calorimetry.Furthermore, the method allows the determination of the enthalpies of formation of permanently ordered intermetallics such as Ni 3 Al from the reordering enthalpies of their metastable disordered polymorphs obtained by simple methods such as ball milling.
Consider a phase A p B q , of the type Cu 3 Au-L1 2 (ordered fcc) structure in which the B atoms occupy the sublattice of the cube-edge atoms ͑ y 00 ͒ of the fcc unit cell and the A atoms are on the sublattice of cube-face sites ͑ y 0 ͒.In this structure, which can be referred to as A p B q with q 1 4 and p 3 4 , the B atoms of the y 00 sites have all their Z 12 nearest neighbors (nn) on the p sublattice corresponding to the absence of any BB nearest neighbors (nn) while atoms on the y 0 sites have only bZ ͑q͞p͒Z 4 on the q sublattice (AB nn) and ͑1 2 b͒Z 8 on their own p sublattice (AA nn).If we approximate the enthalpy of such a structure in terms of the contributions to the internal energy, of the various nearest-neighbor pairs E AA , E AB , and E BB (and neglecting next nn effects), the enthalpy per mole (N A atoms) takes the form: where the factor ͑ 1 2 ͒ serves to avoid counting the A atoms on p sites twice.In order to get the enthalpy of formation DH for mation , we must subtract the enthalpies of the pure constituents pH A 1 qH B in the same crystal structure: and Consider now the enthalpy H͑solid-sol.͒ of a disordered solid-solution of composition identical to that of the intermetallic A p B q .In this case the two sublattices disappear as the site occupancy will be random for both A and B atoms.Assuming a similar nn approximation for estimating the enthalpy with AA, BB, and AB bond energies the same as in the ordered state, bond counting as in Eq. ( 4) yields and the formation enthalpy DH formation ͑solid-sol.͒ is obtained after deduction of the pure constituent enthalpies of Eq. ( 5): which is commonly known as the regular solution expression and where we have used p 1 q 1.The ordering enthalpy is then given by Using Eqs. ( 6) and ( 10), the intermetallic's enthalpy of formation can be simply written as thus allowing its derivation from the measurement of the ordering enthalpy.It must be emphasized that the simple form of relation (10) is not only due to the assumption of pair-wise interactions and equality of "bond energies" E AA , E BB , and E AB in the ordered and disordered structures, but also due to the absence of B-B nearest neighbors in the stoichiometric compound.(We will see later that if A, B, and A p B q do not all have the same fundamental crystal structure or are magnetic, additional terms must be considered).
As a first example of application of Eq. ( 10), we consider the Cu 3 Au-L1 2 structure.This intermetallic has been extensively studied because it is fairly simple (the constituent elements and the intermetallic are all fcc) and undergoes an order !disorder transformation near 500 K, and is thus available both in the intermetallic and the solid-solution (disordered) states corresponding to Eqs. ( 6) and (8).
In the case of isostructural Ni 3 Al, the intermetallic decomposes by a peritectic reaction from its ordered state and the disordered state of Eq. ( 8) can be obtained only by nonequilibrium processing (such as ball milling [8,9], vapor deposition [10], and to some degree by rapidsolidification [11]).In this case a magnetic contribution to the formation enthalpy must be included (because the Ni constituent is ferromagnetic while the intermetallic and its disordered state are paramagnetic down to 60 K or below, depending on purity) The magnetic enthalpy change upon alloying, DH mag , is calculated using experimentally measured magnetic moments B i 0 (in m B Bohr magnetons per atom) and Curie temperatures T i c of each phase following a method proposed by Hillert and Inden [12,13] which fits well the data for pure elements or alloys with a single magnetic component following the simple dilution law: and , where B i 0 and T i c are, respectively, the magnetization (in m B per atom) and the Curie temperature of phase i, R is the gas constant, P 0.28 and D 2.34 for fcc lattices.H mag is negligible for T ͞T c . 1.
As another example, we consider the L1 2 state of the intermetallic Ni 3 Si which also can be disordered by heavy deformation (milling) as reported by Shou and Bakker [14].While for Cu 3 Au and Ni 3 Al, all the constituent elements have fcc structures, as in the case of Ni 3 Si, Si goes from the diamond cubic structure to fcc upon alloying.Usually when elements A with crystal structure a and B with crystal structure b mix to form an alloy with crystal lattice g at a given temperature, the enthalpy of formation of the alloy includes a contribution given by where H LS A ͑g2a͒ and H LS B ͑g2b͒ are the so-called lattice stability terms associated with enthalpy differences between the intermetallic lattice g and the equilibrium room-temperature lattices a and b of the pure elements.They are experimentally available or calculated with good precision and are given in internationally compiled data bases such as [15].Such a contribution must be included for the Si constituent in the formation enthalpy Ni 3 Si.Thus, globally, Eq. ( 10) becomes DH for mation ͑A p B q ͒ DH ordering ͞q 1 DH mag 1 DH LS . ( Ni 3 Al and Ni 3 Si were disordered by heavy deformation (ball milling).Disordering was followed by the gradual disappearance of superstructure Bragg peaks from the x-ray diffraction spectra and by low temperature susceptibility measurements.The heat release that accompanies reordering DH ordering was then measured by differential scanning calorimetry [14,16].Many others also reported such experiments on Ni 3 Al [8,9].However, Ni 3 Al is mechanically very hard and gets contaminated by fragments from the milling device.We therefore use the results of Zhou and Bakker [14] (as given in Table I) who disordered their intermetallics in a device made of tungstoncarbide, a material harder than Ni 3 Al.Table I also gives the lattice stability, and magnetic contributions to the enthalpy of formation of the intermetallics.Since both the disordered and ordered states of Ni 3 Al and Ni 3 Si are paramagnetic at room temperature and above, they have equal magnetic enthalpy and there is no magnetic contribution to the enthalpy of ordering.However, DH mag of the intermetallics as well as the lattice stability contributions must be included in their formation enthalpies as in Eq. ( 14).The lattice stability terms have been calculated using standard CALPHAD equations [15] at 500 K and the magnetic terms using Eqs.(11) and (12) with T Ni c 633 K and B Ni 0 ͑500 K͒ ഠ 0.4m B .T 500 K was selected because the transformations back to the ordered state during annealing were found to occur near this temperature, depending on the composition and heating rate (the lattice stabilities are not strongly temperature sensitive).Using the values of Table I together with Eq. ( 14), we derive DH for mation and compare it in Table II to estimations with the Miedema model and DH formation values obtained from other experimental results (high temperature dissolution calorimetry).It can be seen that agreement between enthalpies of formation derived from the heats of reordering measured by differential scanning calorimetry are in good agreement with independently obtained experimental data.Our new procedure seems to give somewhat better results than Miedema's semiempirical method.Depending on the grain size, the measured heat release may include a contribution from grain growth occurring simultaneously with reordering where disordering has been accompanied by extreme grain refinement.This would lead to an overestimation of the formation enthalpy using Eq. ( 14).For example, in an intermetallic disordered by milling with nanocrystalline grain size, the grains were found to grow from a diameter of about 13 nm in the as-milled disordered state to 18 nm during ordering near 500 K (see [18]) corresponding to a grainboundary specific surface reduction of 10 7 cm 2 ͞mole of atoms with an expected heat release of less than 1 kJ͞mole to be compared to the ordering enthalpies of ball-milled Ni 3 Al and Ni 3 Si in Table I.On the other hand, Okamoto et al., using extended electron energy-loss finestructure spectra (EXELFS), found that disordered Ni 3 Al films developed some short-range chemical ordering below temperatures at which atoms can reorder back to the equilibrium L1 2 phase with long-range order (superstructure) [19].They estimated that up to 20% of the ordering enthalpy could be released by short-range ordering.While such a contribution is usually convoluted with broad DSC (differential scanning calorimetry) exotherms measured for disordered Ni 3 Al during reordering, in certain cases some short-range ordering may occur prior to calorimetric measurements, thus leading to an underestimation of the total ordering enthalpy and the calculated formation enthalpy.These considerations led us to the maximum error margins given with our values of DH formation in Table II.
In conclusion, we have shown using a simple relation [Eq.( 14)] based on nearest-neighbor pairwise interactions, that the heat of reordering of disordered stoichiometric L1 2 compounds can be used to estimate the enthalpies of formation with good precision.This simple relation between the enthalpies of formation and of reordering is obtained for the A 3 B-type L1 2 structures because they do not contain any B-B nearest neighbors.In our development, all of the measured heat release during reordering has been attributed to changes in the number of A-A, A-B, and B-B nearest-neighbors pairs.While a fraction of the order of 10% or 20% of the binding energy may be expected to be due to next nearest-neighbor interactions and other contributions in a superlattice with long-range chemical order [20], the good agreement obtained using Eq. ( 14) is indica-tive of the dominant role of nearest-neighbor interactions in ordered fcc intermetallics.More generally, the approach is likely to be limited to cubic superstructures because the use of the assumption of equality of pairwise interaction energies in the disordered and ordered states requires little or no change in nn distances upon disordering.The method can be used with sputtered or ball-milled samples together with differential thermal analysis (DSC or DTA), thus avoiding high temperatures.
The authors are grateful to I. Ansara, Director of Research at the CNRS, for precious advice.This work was funded by the European Union HRM network coordinated by R. W. Cahn.
TABLE I .
Lattice stability, magnetic and experimental ordering enthalpies of L1 2 -type Cu 3 Au, Ni 3 Al, and Ni 3 Si (in kJ͞mole of atoms).
TABLE II .
Enthalpies of formation of L1 2 -type Cu 3 Au, Ni 3 Al, and Ni 3 Si derived from their heats of reordering using Eq.(14) as compared to values obtained by dissolution calorimetry and by Miedema's method (all in kJ͞mole of atoms). | 3,635.8 | 1997-06-30T00:00:00.000 | [
"Materials Science"
] |
Exploiting the Natural Diversity of RhlA Acyltransferases for the Synthesis of the Rhamnolipid Precursor 3-(3-Hydroxyalkanoyloxy)Alkanoic Acid
The RhlA specificity explains the observed differences in 3-(3-hydroxyalkanoyloxy)alkanoic acid (HAA) congeners. Whole-cell catalysts can now be designed for the synthesis of different congener mixtures of HAAs and rhamnolipids, thereby contributing to the envisaged synthesis of designer HAAs.
S urfactants are amphiphilic molecules that reduce surface and interfacial tensions, which allows them to accumulate at interfaces and form emulsions. These properties are of industrial interest and are exploited in multiple applications in such different fields as pharmaceuticals, agriculture, food, detergents, and cosmetics (1-3). Biosurfactants are surfactants of biological origin and are a promising alternative to synthetic surfactants, as they are nontoxic, biodegradable, and produced from renewable feedstocks. Their application window is extensive, as they might be effective in environments with extreme pH, temperature, or salinity (4)(5)(6).
The biosurfactant 3-(3-hydroxyalkanoyloxy)alkanoic acid (HAA) is the hydrophobic moiety of rhamnolipids and most often consists of two hydroxy fatty acids linked by an ester bond (4, 7-10) (Fig. 1). Indeed, HAAs are not reported as typical products of microorganisms but, rather, were reported in trace amounts during rhamnolipid formation (11).
The carbon chain lengths of HAAs determine their physical properties, such as their abilities to foam and emulsify, and their critical micelle concentration (CMC). Their chain lengths are strongly hinted to be determined by RhlA, an acyltransferase containing an ␣-/-hydrolase domain that catalyzes the esterification of two activated hydroxy fatty acids to HAA (32). In in situ experiments, it has been shown that acyl-carrier protein (ACP)-activated hydroxy fatty acids are the preferred substrate for RhlA (8), while it has been shown in vivo in P. aeruginosa that CoA-activated hydroxyl fatty acids are incorporated preferably into the HAA molecule (33). Within the Gammaproteobacteria, Pseudomonas, Acinetobacter, Enterobacter (17,18), and Pantoea (34) species produce mono-or diglycolipids. Their chain lengths vary, while the most common HAAs have 10 carbon atoms in both hydroxy fatty acids and are thus denoted C 10 -C 10 . In contrast, representatives of the Betaproteobacteria, namely, Burkholderia species, predominantly produce HAAs with chain lengths of 14 carbon atoms (Fig. 2). A few species do not follow this general categorization. Pseudomonas chlororaphis, e.g., produces rhamnolipids with one fatty acid chain of 10 carbon atoms and one of 12, resulting in the designation Rha-C 10 -C 12 when these chains are fully saturated and Rha-C 10 -C 12:1 when FIG 1 Molecular structure of a rhamnolipid molecule. The chain lengths of the hydroxy fatty acids vary, resulting in different congeners. The main congener produced by P. aeruginosa contains 10 carbon atoms in both hydroxy fatty acid derivatives. Without the two rhamnose units, the molecule is a 3-(3hydroxyalkanoyloxy)alkanoic acid (HAA). The synthesis of an HAA molecule is catalyzed by RhlA, which fuses two hydroxy fatty acids. RhlB links an activated dTDP-rhamnose to an HAA, resulting in a mono-rhamnolipid, which is the substrate that is transformed by RhlC, the second rhamnosyltransferase, into a di-rhamnolipid. the C 12 chain is unsaturated in one position (15,35,36). In contrast, Burkholderia kururiensis KP23 produces Gammaproteobacteria-like rhamnolipids containing mainly C 10 -C 10 residues (24).
Rhamnolipid production has not been extensively explored in species of the phyla Firmicutes, Deinococcus-Thermus, Actinobacteria, and Ascomycota. Most promising are the results presented for Thermus species belonging to the phylum Deinococcus-Thermus. Pantazaki et al. (29) produced HAAs and rhamnolipids with chain lengths of 8 to 14 carbon atoms with Thermus thermophilus HB8. Rezanka et al. (30) reported the production of rhamnolipids by Thermus sp. strain CCM 2842, mainly containing the C 16 -C 16 HAA congener, which has not been previously reported. Both groups used selective mass spectrometric methods.
A number of papers in the scientific literature report the synthesis of novel rhamnolipids with novel hosts, which we could not confirm, revealing the need for standardization and guidelines for determination of rhamnolipid and HAA structures. In contrast to rhamnolipids, only a few methods also cover HAAs. Again, HPLC-MS/MS is the method of choice to cover both rhamnolipids and HAAs (37,38). The most comprehensive HPLC-MS/MS method focusing on HAA was presented by Lépine et al. (39). Therefore, our approach was to apply known and potential rhlA genes, express them recombinantly in Escherichia coli, and subject the resulting HAAs to a tailored HPLC-MS/MS analysis for confirmation.
The focus of our study was to explore the diversity of RhlAs and their potential to produce "designer HAAs." The results are discussed in a phylogenetic context.
RESULTS
The natural diversity of RhlA, the acyltransferase of the rhamnolipid synthesis pathway, was investigated and exploited for the synthesis of the lipophilic intermediate HAA. We cloned eight rhlA homologs drawn from the full phylogenetic range of Proteobacteria into the Escherichia coli expression vector pET28a. Alternative RhlAs allowed the synthesis of different HAA congeners.
Phylogeny of RhlA. It has been shown that HAA synthesis in E. coli relies only on a recombinantly synthesized RhlA from P. aeruginosa (8,32). Further, the experimental evidence strongly supports that RhlA selectively determines the -hydroxy fatty acid chain lengths in HAAs (20). As a first step toward tailor-made HAAs, the natural genetic diversity of RhlA was investigated. Representative RhlA protein sequences for all phyla that were detectable by homology searches in GenBank and KEGG were collected. First, the RhlA of P. aeruginosa was used as a template. As the RhlAs from, for example, Pantoea species have limited homology with the protein from P. aeruginosa, homology searches with these sequences were also performed. All identified RhlA proteins are from the classes Betaproteobacteria and Gammaproteobacteria (Fig. 3). Strains from other phyla that are reported to produce rhamnolipids have not been sequenced, and the genes encoding their rhamnolipid synthesis pathways are not known, with two exceptions; an RhlA (GenBank accession number KP202092) was found in the Actinobacteria strain Dietzia maris As-13-3 (28), and the genome sequence of the Deinococcus-Thermus strain T. thermophilus HB8 (29) is known. However, in the latter genome, no rhlA homolog was found.
In general, the identified RhlAs can be divided into three main branches of a currently sparse phylogenetic tree (Fig. 3). In the first branch, the representatives of the genus Pseudomonas form a monophyletic lineage. In the P. aeruginosa strains, represented by strain PA01, two operons containing structural genes for rhamnolipid synthesis are known. In the first of these operons, rhlA and rhlB, the relevant genes for mono-rhamnolipid synthesis, are clustered with a regulator and inducer for quorum sensing, while rhlC, which enables the strain to produce di-rhamnolipids, is located in a different operon and is clustered with a putative transporter (40). Surprisingly, an analysis of the genetic environment of rhlA homologs detected using BLAST in the Pseudomonas fluorescens group showed two possible locations. Besides the colocalization with rhlB, an rhlA homolog is found in synteny with a putative transporter. In P. fluorescens strain A506, rhlA genes are present in both loci, while in P. fluorescens strain Operons associated with rhamnolipid formation are drawn next to the organism names, and genes are labeled with their gene locus or protein accession number. Organisms chosen for HAA production in this study are highlighted in green, while elsewhere-confirmed RhlAs are marked in bold. Others were chosen based on homology searches. S. plymuthica is marked in gray, as we could not confirm an RhlA activity. Double slashes depict independent genomic locations. In Dietzia maris, the synteny of rhlABC is not published. The strains P. fluorescens LMG05825 and P. chlororaphis NRRL-B-30761 are not genome sequenced; therefore, the putative homologous genes are indicated by question marks. The genes for rhamnolipid formation in P. aeruginosa are typically organized in two operons; rhlB (red) is located downstream of rhlA (green) and encodes rhamnosyltransferase I, which is necessary for mono-rhamnolipid formation. The genes rhlA and rhlB are colocalized with the regulator and inducer genes (rhlRI, white) that are involved in regulation via quorum sensing. In a second operon, rhlC (orange), the gene coding for rhamnosyltransferase II, is clustered with a putative transporter (light blue) gene. In the strains P. fluorescens LMG05835 and P. chlororaphis, only the genes shown are sequenced. In the P. ananatis LMG20103 operon containing rhlAB homologs, three genes are present that code for a methyl-accepting chemotaxis citrate transducer (tcp), a putative inner membrane protein (ygbK), and a 2-keto-3-deoxygluconate permease (kgdT1). In the Burkholderia species, the structural genes for di-rhamnolipid formation are organized in a single operon that further includes the genes nodT and hylD, which are potentially involved in the drug resistance systems of the cell. The tree was constructed using the neighbor-joining method in MEGA7 with default settings. Branch lengths shorter than 0.02 are omitted. SBW25, only the latter location and no rhlB homolog can be found. Most P. fluorescens strains do not carry the genes for rhamnolipid synthesis (rhlA in synteny with rhlB, data not shown).
In the second branch, all representatives of the Burkholderia genus and the only Actinobacteria species, D. maris As-13-3 (28), are present. However, the RhlA of D. maris As-13-3 is reported to share 96% sequence identity with a Burkholderia cenocepacia protein, indicating that horizontal gene transfer is a probable explanation for its occurrence. In general, in Burkholderia, rhlAB are located on chromosome II and are in synteny with the putative transporter gene and rhlC. Furthermore, nodT and hylD, coding for enzymes related to efflux and secretion processes, are colocated. In B. cenocepacia and Burkholderia ambifaria, an open reading frame encoding a methyl transferase is placed between rhlA and rhlB. A second operon for rhamnolipid formation exists in Burkholderia pseudomallei and Burkholderia thailandensis on chromosome I (not shown).
The third branch includes homologous proteins from representatives of the orders Enterobacterales and Oceanospirillales, the latter with the only representative being Halomonas. In general, in this branch, the homology of the RhlA proteins is more divergent than in the Pseudomonas and Burkholderia branches. An rhlAB-like operon is found only in Pantoea strains (34) and Lonsdalea britanica, while rhlA homologs are found in Serratia and Dickeya strains but not in synteny with an rhlB homolog. No experimental evidence for HAA or rhamnolipid formation exists for the organisms in this branch, with the exception of Pantoea ananatis BRT175 (P. ananatis) producing the glucolipid ananatoside A, the hydrophobic part of which is an HAA molecule (34,41). In P. ananatis LMG20103, the three genes tcp, ygbK, and kgdT1, which code for a putative methyl-accepting chemotaxis citrate transducer, an effector protein, and a 2-keto-3-deoxygluconate permease, respectively, are encoded in one common operon with the rhlAB homologs.
Determining the synteny of sequences identified by BLAST analyses using an RhlA query requires detailed analysis to distinguish RhlA from the transacylase PhaG, an enzyme that links de novo fatty acid and polyhydroxyalkanoate (PHA) biosynthesis (42)(43)(44) by catalyzing the reesterification from acyl carrier protein (ACP) to CoA. In P. aeruginosa, the protein sequences of RhlA and PhaG have a 44% sequence identity (44), which is similar to the 44 to 48% identity between Burkholderia RhlAs and RhlAs of P. aeruginosa. Fig. 3 shows that rhlA in Pseudomonas, Burkholderia, and Pantoea is part of a glycolipid synthesis operon. In contrast, phaG is located upstream of a tRNA gene, and furthermore, homologs of four of the six upstream genes of phaG in Pseudomonas putida can also be found upstream of phaG in P. aeruginosa (Fig. 4). We used this difference in the synteny of rhlA and phaG as a criterion for the identification of rhamnolipid genes in the reported rhamnolipid producer Pseudomonas desmolyticum NCIM-2112. We were especially interested in this strain, as it was reported to produce rhamnolipids with chain lengths of six to eight carbon atoms (45), a congener range not The gene synteny of the phaG homolog in P. desmolyticum is the same as in P. putida. Homolog genes coding for a uracil-DNA glycosylase (UDG), a 3-hydroxyisobutyryl-CoA hydrolase (3-HIB-CoAH), a protein of unknown function (u.f.), and a ribosomal small subunit pseudouridine A (RsuA) located upstream of phaG can also be found in the upstream region of phaG in P. aeruginosa. A tRNA homolog is placed downstream. This difference in synteny can be used as a criterion to distinguish rhlA from phaG in Pseudomonas species.
confirmed yet for an isolated RhlA. Full genome sequencing allowed a BLAST search for RhlA; however, only the transacylase-encoding phaG was identified. A gene encoding RhlB was not found in the genome of P. desmolyticum (data not shown). To improve the authoritative value of rhamnolipid literature, genetic evidence could be, besides highquality analytics, a means to reduce or ideally avoid miscommunication of rhamnolipidproducing strains (46).
Considering gene synteny, the rhlA homologs identified by BLAST analysis of the Serratia, Dickeya, and Halomonas species are not well supported. Experimental evidence should confirm or disprove the RhlA activity.
HAA synthesis with recombinant E. coli. E. coli strains BL21(DE3) and C43(DE3), each equipped with the rhlA gene from P. aeruginosa (pPA2), were grown in LB medium. Defined glucose pulses were given to provide an additional carbon source. When applying E. coli C43(DE3) as the host, glucose addition caused a steep increase in HAA titers 2 h after induction, which subsequently stagnated as glucose was depleted (6 h) (Fig. 5). The high HAA formation and growth rates were restored after the second glucose pulse at 20 h. While the growth rate slowed down 2 h later, the HAA production rate remained high, pointing to the fact that resources were efficiently allocated to the HAA synthesis pathway and diverted from supplying the growth machinery. With this strategy, an HAA titer of 1.7 g/liter 30 h after induction was reached, which is the highest concentration reported so far using recombinant microorganisms for HAA synthesis. Using E. coli BL21(DE3) as the host, the glucose supplementation had no enhancing effect on HAA formation at any time but was used for biomass formation. In this host, only 0.4 g/liter was achieved 20 h after induction.
The main HAA congeners synthesized by E. coli C43(DE3) pPA2 (Table 1) were the same as those produced by a recombinant P. putida KT2440 using the P. aeruginosa rhlA (11,20) or the wild-type P. aeruginosa strain (12,(47)(48)(49). The HAA spectrum observed is, however, broader in E. coli, expanding to C 14 -containing congeners. The results support previous data showing that the RhlA enzyme is mainly responsible for HAA congener selectivity, while the host organisms play only a minor role.
Diversification of the HAA spectrum by exploiting natural genetic variance. In order to increase HAA congener diversity, seven additional rhlAs of species representing the identified evolutionary space were used.
The first obvious choice from the Betaproteobacteria was RhlA of Burkholderia plantarii PG1 (formerly Burkholderia glumae PG1), which synthesizes mainly C 14 rhamnolipids (20,21). We also chose RhlA of B. ambifaria, which was of particular interest, as the protein shares 91% identity with RhlA of D. maris, which was reported to produce the C 10 -C 10 congener (28). Our purpose was to verify this nontypical main congener for the Burkholderia genus with pAMB.
In contrast to the 16S rRNA phylogeny, in which Enterobacterales, Oceanospirillales, and Pseudomonas are classes of Gammaproteobacteria, the RhlA sequences of the Enterobacterales and the Oceanospirillales-representative Halomonas form a common third branch (Fig. 3). We thus selected RhlA from P. ananatis LMG20103 as the first representative from the Enterobacterales branch. This strain is fully genome sequenced, and the genes for glycolipid synthesis are present (34, 50). Rooney et al. (17) and Hošková et al. (18) reported that other Enterobacterales synthesize rhamnolipids with mainly C 10 -C 10 HAAs (Fig. 2). The N terminus of RhlA in P. ananatis is longer than those of other RhlA proteins, which might be due to automated annotation. For this reason, two versions of rhlA were cloned, one representing a normal-sized rhlA and the long rhlA version. Both rhlAs led to HAA production (data for the long version not shown), suggesting that the normal-sized RhlA is the native protein. Additionally, sequencing indicated a frameshift in the published sequence that led to 13 incorrectly annotated amino acids. A comparison with RhlA from, e.g., Pantoea stewartii A206 confirms this finding, and a corrected sequence was submitted to GenBank (accession number MF671909). As mentioned above, the gene synteny in Dickeya dadantii Ech586, Halomonas sp. R57-5, and Serratia plymuthica PRI-2C does not show colocalization with genes related to glycolipid synthesis, but the rhlA homologs are isolated in the genome.
To experimentally confirm the activity, we further investigated HAA formation using rhlA genes of these strains. Finally, we included the rhlA from P. fluorescens LMG 05825 (P. chlororaphis ATCC 17813), which is reported to be the same strain as P. chlororaphis NRRL B-30761 (35), a strain producing mainly C 10 -C 12 and C 10 -C 12:1 congeners (15). Solaiman et al. (36) found an operon containing rhlAB and the regulator gene rhlR (Fig. 3) in this strain. While we could confirm the previous results, the rhlA from strain LMG05825 carried two nucleotide changes resulting in one amino acid difference in RhlA.
E. coli strains were equipped with one of the eight rhlA genes and cultivated as described above. Glucose was fed 2 and 22 h after IPTG (isopropyl--D-thiogalactopyranoside) induction. Seven of the eight recombinant strains produced HAAs (Table 1); E. coli C43(DE3) pPLY was the exception. Again, while the main HAA congeners were highly similar to reported congener compositions of wild-type strains, the congener spectrum might be a bit wider, which however, could also be a result of the sensitive method used for identification in this study. By the combination of efficient chromatographic separation and structure informative tandem mass spectrometric detection, the resulting HPLC-MS/MS method enables selective and sensitive detection of HAAs. A limit of detection in the range of 0.1 mg/liter is achieved, and thus, HAA with a relative share of Ͻ0.1% can be detected ( Table 1).
As expected from mono-rhamnolipids produced by the wild-type strain P. chlororaphis NRRL-B-30761 (15), our congener determination with plasmid pFLU revealed a different main congener spectrum than with pPA2 from P. aeruginosa. Accordingly, we detected C 10 -C 12 and C 10 -C 12:1 to be among the main congeners, but additionally, we identified C 10 -C 14 and C 10 -C 14:1 to be present in even slightly larger fractions. The C 10 -C 14 congener was also detected by Gunther et al. (15), though in a smaller fraction. In contrast to pPA2, where the longest detected chain contained 14 carbon atoms, with pFLU, congeners containing C 16 , C 16:1 , or even C 18:1 chains were present.
The two RhlA proteins from the Betaproteobacteria branch (pBUG and pAMB) showed 14 carbon atoms in both chains. For pBUG, this was expected due to the phylogenetic classification with other Burkholderia strains, for which C 14 -C 14 rhamnolipid production has been reported in wild-type (19,(21)(22)(23) and recombinant strains (20). With pBUG, 16% of the HAAs incorporated at least one C 16 or C 16:1 fatty acid. In the phylogenetic tree shown in Fig. 3, pAMB is arranged with RhlA from Dietzia maris (28), shown to produce C 10 -C 10 -containing rhamnolipids. Therefore, the result with mainly C 14 chain lengths was unexpected. Besides the main fraction of C 14 chain lengths, we found with this plasmid the most significant fraction of unusual congeners containing chain lengths with odd numbers, namely, 12% containing at least one chain with 13 carbon atoms and in traces C 15 or C 15:1 . To further confirm the presence of these odd-numbered hydroxy fatty acids, we conducted LC-MS/MS measurements applying high-resolution MS. Besides high resolution, the instrument used also delivers high mass accuracy (Ͻ5 ppm relative mass deviation compared to the theoretical value). Hence, elemental compositions can be deduced not only from the intact HAA molecule but also for the fragments in MS/MS mode. Exemplar data are presented in Fig. 6A, where the high-resolution MS/MS mass spectrum of an HAA molecule contain- (39). Therefore, the detection of these two fragments also demonstrates that two congeners are contained, i.e., HAA C 13:0 -C 14:1 and HAA C 14:1 -C 13:0 . Furthermore, the presence of odd-chain hydroxy fatty acids was confirmed using complementary gas chromatography-mass spectrometry (GC-MS) analysis. HAA samples were hydrolyzed and derivatized to yield the corresponding methyl ester. Additional trimethylsilylation of the hydroxy group facilitated the assignment of chain length as well as position of the hydroxy group in the mass spectrum obtained by electron ionization (Fig. 6B).
Most surprising and divergent were the results we obtained with the plasmids from the Enterobacterales and Halomonas species. Though the RhlA homologs form their own branch (Fig. 3), the HAAs detected with the single plasmids do not show common characteristics. Again, for pPLY, HPLC-MS/MS did not confirm HAAs in the culture supernatant but did confirm other fatty acids. Notably, these free fatty acids experienced similar retention on the used HPLC column as HAAs. Using unspecific detection, such as charged aerosol detection, or by evaporative light scattering detection, false annotation cannot be ruled out. With plasmid pDAD, a comparable spectrum to pFLU was observed. Strikingly, the main fraction contained C 10 -C 14 or C 10 -C 14:1 (26 and 47%, respectively), indicating a high specificity for these congeners. With pANA, the main congeners contained C 10 -C 10 (31%), C 10 -C 12 or C 10 :C 12:1 (27%), and C 10 -C 14 or C 10 -C 14:1 (21%), which is comparable to the congeners found with pPA2 and pFLU. Plasmid pHAL, in contrast, showed congeners like the Burkholderia strains with saturated and monounsaturated C 14 chains.
The congeners that were produced covered the entire HAA spectrum known in wild-type Proteobacteria species. The congener C 8 -C 8 produced by P. aeruginosa 57RP (49, 52) was found in some, but not all, experiments with pPA2 and hence is not listed in Table 1.
DISCUSSION
The esterification of (hydroxy-) fatty acids as it is catalyzed by RhlA is a rare enzyme activity. A similar activity can be found in the black yeast fungus Aureobasidium pullulans. In this strain, liamocin, a glycolipid consisting of mannitol linked with three or four 3,5-dihydroxydecanoic ester groups, is produced (53). Our survey for RhlA in microorganisms showed its presence mainly in the Betaproteobacteria and Gammaproteobacteria phyla, with little evidence in other phyla. We exploited the natural diversity of RhlA, allowing the synthesis of distinct HAA congener mixtures using E. coli as a host. The confirmed substrate specificity of RhlA opens the door for the production of tailor-made HAAs. Fig. 2 shows that rhamnolipid producers are not restricted to representatives of the Betaproteobacteria and Gammaproteobacteria phyla. However, we and others (46,54) have experienced difficulties in reproducing and confirming previous studies showing rhamnolipid synthesis by bacteria of different phyla. In many cases, we did not detect rhamnolipid production and/or genetic evidence for rhamnolipid synthesis despite having cultivated and/or sequenced the reported rhamnolipid producers, respectively. Having had similar experiences, Irorere et al. (46) ascertained that unequivocal analytical techniques to determine rhamnolipid production were not used and concluded that particular reports might be erroneous. As mentioned above, Jadhav et al. (45), for example, reported that P. desmolyticum NCIM-2112 produced mono-rhamnolipids with fatty acid chain lengths of from six to eight carbon atoms. Our efforts to identify an rhlA homolog after genome sequencing failed. We cultivated the organism as described by the authors but detected no rhamnolipids using HPLC-MS/MS (data not shown). The question of whether P. desmolyticum encodes an enzyme with RhlA activity but of a different phylogeny remains, which is consistent with the observations of Kügler et al. (54) finding no evidence of reports of rhamnolipid production by Actinobacteria. Indeed, we found no RhlA with homology searches in non-Proteobacteria species, with the exception of the actinobacterium Dietzia maris AS-13-3. A detailed survey of the analytical methods applied for the identification of novel rhamnolipid-producing strains is necessary. In this regard, the reports of rhamnolipid production in Renibacterium salmoninarum 27BN (27), Tetragenococcus koreensis JS (25), and Aspergillus sp. strain MFS1 (31) do not fulfill the criteria for unequivocal rhamnolipid identification proposed by Irorere et al. (46).
The diversity of HAA congeners might be broadened by identifying RhlAs from Betaproteobacteria and Gammaproteobacteria. The evolutionary relationships between the known RhlAs (Fig. 3) are to a large extent consistent with the species phylogeny based on 16S rRNA gene sequences (Fig. 2). However, it is striking that the RhlA proteins within the genus Pseudomonas do not form a monophyletic lineage with the RhlAs from the Enterobacterales (Serratia, Dickeya, Lonsdalea, and Pantoea) as the 16S rRNA genes do. While pseudomonads and Enterobacterales are both Gammaproteobacteria, the RhlA proteins of the Enterobacterales are outgrouped, forming a separate branch.
Within the pseudomonads, species-specific HAA congeners could be produced. While pPA2 is the most prominent C 10 -C 10 producer, we detected with pFLU a C 12 or C 14 chain combined with a C 10 chain, which confirms the findings of Gunther et al. (15). However, Gunther found C 10 -C 12 or C 10 -C 12:1 as the main congener; in our study, C 10 -C 14 and C 10 -C 14:1 turned out to be even more prominent.
The Burkholderia species B. plantarii, B. thailandensis, and B. mallei synthesize mainly C 14 -C 14 rhamnolipids (19,(21)(22)(23)55). This was confirmed in this study. Using the B. plantarii RhlA, the average carbon chain was determined to have 14.0 carbon atoms. However, in terms of RhlA diversity, the phylogeny of the known RhlA proteins indicates that other Burkholderiaceae might produce HAAs with shorter fatty acids. B. kururiensis, belonging to the genus Paraburkholderia (24), is reported to mainly produce the C 10 -C 10 rhamnolipid congeners. Two explanations for this finding are possible. On the one hand, a C 10 -specific protein might have been transferred to B. kururiensis from, e.g., P. aeruginosa via horizontal gene transfer. On the other hand, the rhlA might have evolved from the original Burkholderia type rhlA to be more promiscuous toward shorter fatty acid chain lengths.
Most new congeners with odd chain lengths were produced when RhlA from B. ambifaria was applied, which we confirmed with GC analytics (Fig. 6B). It was shown that in contrast to acetyl-CoA, propionyl-CoA can be accepted by the enzyme FabH as a precursor to chain elongation, resulting in odd-chain-length fatty acids (56,57). FabH varies depending on its bacterial origin and accepts acetyl-CoA or propionyl-CoA in bacteria synthesizing straight-chain fatty acids, while in branched-chain fatty acid-producing bacteria, branched-chain acyl-CoAs serve as precursors for chain elongation (57). Our results showing straight C 13 chains when using pAMB in E. coli indicate that FabH of E. coli is of the straight-chain type delivering the substrate for the B. ambifaria RhlA.
Most interesting and representing the group with the most potential toward novel HAA congeners are the results we obtained with RhlA proteins from the Enterobacterales and Halomonas. Except for Pantoea and Lonsdalea, the genes coding for RhlAs in this group are not colocalized with other genes related to glycolipid formation and thus are difficult to distinguish from phaG. In contrast to the Pseudomonas and Burkholderia branches shown in Fig. 3, RhlA homologs from five genera are combined in the third branch. With only four RhlAs tested, we found a diversity within this group ranging from no HAAs with pPLY over similar congeners like in the pseudomonads (pDAD and pANA) to a Burkholderia-like spectrum with pHAL. So far, few results from Enterobacterales have been presented in the literature. Reports about rhamnolipid formation by the wild-type strains Enterobacter asburiae (17,18), Enterobacter hormaechei, P. stewartii (17) (Fig. 1), and P. ananatis BRT175, a strain producing a glycolipid with a sugar moiety other than rhamnose (34,41), show similar fatty acid chain lengths to those we detected with pANA and pDAD. Though the HAA spectrum from, e.g., pDAD and pFLU or pHAL and pBUG are similar, the RhlAs are only distantly related and not arranged in the same phylogenetic lineage. The diversity of HAAs within the Enterobacterales indicated by long branch lengths hints at the existence of more proteins with RhlA activity in this and other orders of the Gammaproteobacteria, such as the Oceanospirillales. With confirmed RhlA activity from numerous species, the sparse tree depicted in Fig. 3 might develop toward distinct branches related to genera. A tendency can already be seen with our data obtained with pHAL, pDAD, pPLY, and pANA. Eliminating the unconfirmed S. plymuthica strain, to date, three strains from Lonsdalea and Pantoea form their own lineage. In these species, a colocalization of the rhlA homologs with an rhlB homolog are found.
Our results indicate that the rhlA genes are conserved within microbial genera. Because RhlA mainly determines substrate specificity in the rhamnolipid synthesis pathway, the main fatty acid congener of HAAs and rhamnolipids can be inferred from knowledge of the species of the producing organism. Insights into the correlation between microbial and RhlA phylogeny and RhlA specificity may be fostered by additional genomic and production data from rhamnolipid producers, ideally increasing the number of HAA-producing species and genera for which genetic evidence for rhlA genes exists.
HAAs synthesized by non-Proteobacteria. Toribio et al. (58) argued in 2010, when hundreds of genomes were already available in databases, that the rare occurrence of rhlA homologs outside of the Betaproteobacteria and Gammaproteobacteria species suggested that horizontal gene transfer occurs only in rare circumstances. This conclusion agrees with the results for D. maris presented here (28). Although 360,000 genomes are currently available in the Genomes On Line Database (GOLD), and many more are available in others, the early observation by Toribio et al. (58) is still valid. Although no evidence is presented here, a massive gene loss in most other phyla and genera cannot be excluded. With BLAST searches, RhlAs cannot be detected in, e.g., Thermus. Despite having a common ancestor, it is possible that the phylogenetic distance increased during evolution. This hypothesis is supported by the fact that the Betaproteobacteria RhlA proteins from P. aeruginosa and P. ananatis, which show a similar HAA spectrum, share a mere 35% identity or, to name another example, proteins from B. plantarii PG1 and Halomonas spp. show only 50% identical positions (data not shown). Alternatively, rhamnolipids might be synthesized by proteins that do not share an evolutionary origin with RhlA. Some evidence exists for alternative genes, especially for strains of the genus Thermus, which is encouraging. Pantazaki et al. (29) reported the production of HAAs and rhamnolipids with chain lengths of 8 to 14 carbon atoms using the fully sequenced strain T. thermophilus HB8. No homologs of rhlA and rhlB were found in the genome using conventional BLAST approaches. The main congener detected in Thermus sp. CCM 2842 was Rha-C 16 -C 16 , and fatty acid chain lengths of up to 24 occur in small fractions (30), indicating that an RhlA with different substrate specificity exists; again, no genetic evidence is available. The numerous reports of HAA and rhamnolipid synthesis by species not belonging to Betaproteobacteria and Gammaproteobacteria remain something of a mystery, with explanations as divergent as erroneous analytics, horizontal gene transfer, massive gene loss, and parallel evolution. The challenge to identifying the genetic origin of rhamnolipid synthesis in phyla such as Firmicutes, Actinobacteria, and Deinococcus-Thermus thus remains.
detection was carried out with electrospray ionization in negative ionization mode. Structural information was provided by performing additional MS/MS experiments on two different mass spectrometers as follows. Samples of E. coli C43(DE3) pPA2 and C43(DE3) pBUG were analyzed on a Micromass Quattro micro triple quadrupole mass spectrometer (product ion scans) as detailed previously (37). MS/MS characterization of extracts from E. coli C43(DE3) pANA, C43(DE3) pFLU, and BL21(DE3) pPA2 was carried out on a linear ion trap mass spectrometer (LTQ XL; Thermo Fisher Scientific, Inc., San Jose, CA, USA) under the conditions described by Behrens et al. (38).
Additional confirmatory experiments were conducted using high-resolution MS. The analytes were identified by their accurate masses detected on a QExactive hybrid quadrupole Orbitrap (Thermo Fisher Scientific, Waltham, MA, USA) mass spectrometer. The instrument was operated in negative electrospray ionization mode with the following parameters: spray voltage, 3.0 kV; sheath gas, 40 arbitrary units (AU); auxiliary gas, 10 AU; sweep gas, 1 AU; resolution, 140,000ϫ (full width at half maximum [FWHM] at m/z 200); and mass range, m/z 200 to 1,000.
The intact HAAs were detected as deprotonated molecules ([M-H] -); e.g., a peak at m/z 301 was observed for C 8 -C 8 HAA. MS/MS product ion spectra were dominated by the cleavage of the ester bond between the two -hydroxy fatty acids as described by Lépine et al. (39). The product ion spectrum of the parent ion at m/z 301 showed a major fragment at m/z 159, which corresponds to a C 8 fatty acid moiety, thus confirming the assignment of the parent as C 8 -C 8 HAA. Fragments with m/z 131 and 187 were also present. These ions indicate the presence of C 6 and C 10 fatty acid moieties, therefore confirming by LC-MS/MS that not only C 8 -C 8 HAA but also C 6 -C 10 and C 10 -C 6 HAAs were present.
Confirmation of hydroxy fatty acids by GC-MS. The HAAs were analyzed using gas chromatography-mass spectrometry (GC-MS). Therefore, an aliquot of each sample was dried under a gentle stream of nitrogen and hydrolyzed with 0.5 M NaOH in MeOH-H 2 O solution (9:1 [vol/vol], 2 ml, 70°C, 1 h). Afterward, the solution was acidified to pH 3 with 1 M HCl, and the fatty acids were extracted with chloroform (3 ϫ 3 ml). After removal of the solvent, fatty acid methyl esters (FAMEs) were prepared by adding 100 l of BF 3 -MeOH (14%, wt/vol) and heating (75°C, 1 h). Then, 2 ml of H 2 O was added, and the FAMEs were extracted with chloroform (3 ϫ 2 ml). The solvent was evaporated using a gentle stream of nitrogen. The residue was redissolved in 25 l of pyridine and 50 l of the silylating agent (BSTFA:TMCS [99:1, vol/vol]) and then heated (70°C, 1 h). Finally, the silylating agent was removed under a gentle stream of nitrogen, and the residue was rediluted in 0.2 ml n-hexane and used for GC-MS analysis.
After derivatization, samples were analyzed using a GCMS-QP-2020 equipped with a Nexis GC-2030 gas chromatograph (both Shimadzu, Kyoto, Japan). A 30-m, 0.25-mm-inside-diameter (i.d.), 0.25-mfilm-thickness DB-5MS column (J&W Scientific, Folsom, CA, USA) was used for the separation. Samples (1 l) were injected using an AOC-20i Plus autosampler (Shimadzu, Kyoto, Japan) and a programmed temperature vaporization (PTV) inlet (250°C) in splitless mode. Helium (5.0) was used as a carrier gas with a flow rate of 1.22 ml/min. The column oven was programmed as follows: starting at 50°C, the temperature was increased at a rate of 10°C/min to 300°C, which was held for 10 min. Mass spectra were obtained by electron ionization (EI; 70 eV). The temperatures of the ion source and interface were set to 250°C. Data were recorded from m/z 50 to 500 with a rate of 10 scans/s. For comparison of retention times and fragmentation patterns, a bacterial acid methyl ester standard solution (BAME) (47080-U; Sigma-Aldrich, Steinheim, Germany) was used (10-fold diluted with methyl tert᎑butyl ether).
Computational methods. The evolutionary history was inferred using the neighbor-joining method (62). Evolutionary analyses were conducted in MEGA7 (63).
Accession numbers. The corrected sequence of the rhlA gene in P. ananatis LMG 20103 and the sequence containing PhaG in P. desmolyticum were deposited under the GenBank accession numbers MF671909 and MG099922, respectively. The codon-optimized rhlA homologs for the construction of pHAL and pPLY are accessible under MN369027 and MN369028. | 8,267 | 2020-01-10T00:00:00.000 | [
"Biology"
] |
Simultaneous laser excitation of backward volume and perpendicular standing spin waves in full-Heusler Co2FeAl0.5Si0.5 films
Spin-wave dynamics in full-Heusler Co2FeAl0.5Si0.5 films are studied using all-optical pump-probe magneto-optical polar Kerr spectroscopy. Backward volume magnetostatic spin-wave (BVMSW) mode is observed in films with thickness ranging from 20 to 100 nm besides perpendicular standing spin-wave (PSSW) mode, and found to be excited more efficiently than the PSSW mode. The field dependence of the effective Gilbert damping parameter appears especial extrinsic origin. The relationship between the lifetime and the group velocity of BVMSW mode is revealed. The frequency of BVMSW mode does not obviously depend on the film thickness, but the lifetime and the effective damping appear to do so. The simultaneous excitation of BVMSW and PSSW in Heusler alloy films as well as the characterization of their dynamic behaviors may be of interest for magnonic and spintronic applications.
Results
Magnetization dynamics and FFT spectrum. The samples studied here are Co 2 FeAl 0.5 Si 0.5 films with different thickness. Spin-wave dynamics are excited and measured using a TR-MOKE configuration with an outof-plane external field applied. Figure 1(a) shows the excitation geometry. The precession of magnetization M is launched by the torque exerted on it as the femtosecond pumping laser transiently changes the orientation of effective field from H eff to ′ H eff 18 . The details of sample preparation, measurement configuration as well as excitation mechanism, can be found in Methods section. Figure 1(b) shows the laser-induced magnetization dynamics of the 60 nm thick sample, under different DC external field (H) and a constant pump fluence of 12.5 mJ/cm 2 . Obvious oscillations occur in all transient traces, and show the spin wave behaviors. The large amplitude of the oscillations with respect to the demagnetization indicates the efficient excitation of spin wave. The increase of demagnetization and oscillation in amplitude with H is attributed to the larger out-of-plane magnetization component under higher perpendicular field. One may note that the oscillations do not simply show a damped harmonic form, implying that the pump pulses simultaneously excite more than one SWM. To identify the SWMs, the spectrum of spin waves for different H is obtained by extracting the oscillatory components from the magnetization dynamics and then carrying out the fast Fourier transform (FFT). The remained non-oscillatory component is an exponential decay function, and depicts the recovery of laser-induced ultrafast demagnetization.
The field-dependent FFT spectrum is plotted in Fig. 1(c). In every spectrum, two peaks occur that both shift to increasing frequency with H, and represent two SWMs excited. In order to simplify the description below, they are referred to as low-frequency (LF) and high-frequency (HF) modes, respectively. The strength of the LF mode is greatly stronger than that of the HF mode. The field-dependent frequency (peak position in the FFT spectrum) of the two modes is plotted in Fig. 2(a) by open and filled circles respectively, and shows the dispersion of the spin waves which can be used to identify the type or mode of spin waves.
Dispersion analysis.
According to the theory established by Kalinikos and Slavin 19 , the approximate dispersion-relation of dipolar or exchange SWMs under arbitrary effective internal magnetic field can be deduced. In our experiment, the demagnetization field and the external field applied nearly perpendicular to the film plane leads to a slant orientation of the equilibrium effective field. Thus for the volume magnetostatic spin-wave (VMSW) mode dominated by the dipole interaction, the dispersion equation (lowest order) for angular frequency can be explicitly written as where ω H = γH sin φ/sin θ, and ω M = 4πγM s . Here k and d are the wavenumber of spin wave and the film thickness, respectively. γ is the gyromagnetic ratio, and M s is the saturation magnetization. θ and φ denote the angles of the equilibrium magnetization and external field with respect to the normal of film plane, respectively, as shown in Fig. 1(a). As k tends to zero, the VMSW mode tends to the uniform or Kittel mode, and Eq. (1) becomes to For the PSSW mode dominated by the exchange interaction, the dispersion equation is written as , A ex is the exchange constant, and n denotes the order of PSSW mode. Similarly, Eq. (3) reduces to Eq. (2) as n = 0.
The equilibrium magnetization orientation θ is changed with different H, and meets the following equation of minimum free energy, Because the frequency of the HF mode does not approach to zero with decreasing H, it is impossible for HF mode to be Kittel mode or VMSW mode. Considering its frequency values in the reasonable frequency range of the PSSW mode 3,8 , its dispersion is tried to fit with Eq. (3) plus a constraint of Eq. (4) by a least square optimization. M s is fixed to the measured value of 782 emu/cm 3 in the fitting process. The best fitting can be obtained as n = 1, as plotted in Fig. 2 For the LF mode, the frequency values seem to approach to zero with decreasing H. We first try to fit its dispersion using Eqs (2) and (4) with M s as a fitting parameter. The best fitting is plotted in Fig. 2(a) by dash line, and gives M s = 741 ± 7 emu/cm 3 . It seems to fit the experimental results well. However, we also try the best fitting with Eq. (1) instead of Eq. (2), as shown in Fig. 2(b) by dash line. It also agrees very well with the experiment frequency, giving M s = 779 ± 8 emu/cm 3 and k = 2.30 ± 0.15 rad/μm. In comparison with the fitting by Eq. (2), this fitting gives out M s closer to the measured value of 782 emu/cm 3 , while the fitting value of k is in the reasonable range of dipolar-interaction-dominated magnetostatic spin waves. For further showing the reasonability of M s = 779 ± 8 emu/cm 3 , the fitting line of HF mode using M s = 741 emu/cm 3 is also plotted in Fig. 2(a) by green solid line, and shows worse agreement with the experimental frequency of HF mode, especially in the low field range (see the inset).
To demonstrate the effect of film thickness on the SWMs excited in the experiment, laser-induced magnetization dynamics of the samples with thickness of 20 and 100 nm are also studied. The HF mode is found to exist only in the 60 and 100 nm thick samples, while the LF mode exists in all samples. Frequency of the HF mode is significantly dependent on the film thickness but that of LF mode does not. Dispersion analysis as above is carried out. Figure 2(c and d) show the dispersion fitting of the 20 and 100 nm thick samples, respectively. For the HF mode excited in the 100 nm sample, Eqs (3 and 4) provide a good fit, giving the same n = 1 and A ex = 3.16 ± 0.11 μerg/cm. Thus, we ascertain the HF mode as the first-order PSSW mode. PSSW can be usually coherently excited in ferromagnetic films with thickness of at least few tens of nanometers 21 . The calculated PSSW frequency with 20 nm thickness and A ex = 2.83 μerg/cm is up to 60 GHz, much higher than the values generally reported, implying that PSSW in thinner film is difficult to be excited and measured due to higher frequency. For the LF mode excited in the two samples, their frequency dispersion data can be fit very well with Eqs (1 and 4), giving M s = 787 ± 13 and 784 ± 11 emu/cm 3 , k = 4.88 ± 0.41 and 2.12 ± 0.26 rad/μm, respectively for the samples with thickness of 20 and 100 nm. While the fittings with Eqs (2 and 4) appear to be good, but the values of M s = 757 ± 10 and 731 ± 11 emu/cm 3 given by those fitting show lager deviations from the measured value than M s given by the fittings with Eqs (1 and 4). Based on the above comparative fitting analysis of LF mode dispersion with Eqs (1 and 2), we tend to assign the LF mode to VMSW mode. Further evidence for VMSW mode will be provided below.
Lifetime and damping. The lifetime reveals the energy-dissipation rate of a spin wave and is an important parameter for magnonic applications. To achieve it, the oscillatory components in the magnetization dynamics are fitted by using the following damped harmonic sum function, where A i , τ αi , ν i and ϕ i are the amplitude, lifetime, frequency, and initial phase of the i-th SWM, respectively. Figure 3(a) shows the best fittings (solid lines) of the oscillatory components (open circles) extracted from the magnetization dynamics of the 60 nm sample shown in Fig. 1(b). The frequency of the two modes, ν 1 (H) and ν 2 (H) given by the best fitting is almost identical to that obtained by FFT spectrum. The lifetimes τ α1 for three samples are plotted in Fig. 3 Gilbert damping is also a vital parameter attracted much attention. For the VMSW mode, the relation between Gilbert damping factor α and lifetime τ α is determined as ref. 11: s kd 2 Figure 4 shows the effective Gilbert damping α of three samples obtained from τ α1 by Eq. (6). For a comparison, α of the 60 nm sample is also calculated using the following damping relation for Kittel mode (plotted by filled circles to distinguish) 18 : As shown in Fig. 4, the field dependence of α obtained by Eq. (6) (α VMSW ) and (7) (α Kittel ) is similar. α Kittel remarkably increases with H, showing apparently extrinsic feature. Magnetic inhomogeneity is a main contribution to the extrinsic damping for Kittel mode spin-wave 18,21 . One of its characteristic is the competition between H and the distributed anisotropy field, leading to a reduction of damping with increasing H. Another mechanism contributed to the extrinsic damping is the two magnon scattering, which is expected to play a more remarkable role in the in-plane geometry than the perpendicular one 22,23 . In our experiment, because the external field H is applied nearly normal to the film plane, the out-of-plane angle of the equilibrium magnetization increases with increasing H. Thus, the possible contribution of the two magnon scattering to the extrinsic damping should decrease with increasing H. However, here α Kittel obviously shows an increase with external field, implying that the extrinsic component of damping cannot be mainly from either the magnetic inhomogeneity or the two magnon scattering. That further supports that the LF mode should not be Kittel mode. In other words, LF mode should be VMSW mode. However, for VMSW mode, what is the main extrinsic origin of α VMSW ? We will explore it below.
Discussion
Assuming an intrinsic α 0 = 0.01 (typical value of damping for Cobalt-based full-Heusler alloys 20 ), field dependent τ α0 for three samples are numerically calculated by Eq. (6) based on the parameters obtained from the dispersion fittings, and plotted in Fig. 5(a). One can note that the three τ α0 (H) are very similar, while the slight difference comes from the slightly different ω for three samples. All of them decrease with increasing H, but the variation trend with H is different from the experimental one shown in Fig. 3(b-d). In the low field range, the falling slope of the calculated τ α0 (H) is smaller than the experimental τ α1 (H); while in higher field range the calculated one is obviously larger. Then, what results in the field dependence of τ α1 ? VMSW mode is a propagating mode. The energy propagation along the film plane may influence the measured decay process of spin wave. Since the probing area in our experiment is located in the excited (pumping) area which can be regarded as the source of spin wave, the propagation can accelerate the decay of spin precession in the probing area 24 . Group velocity, v g = ∂ω/∂k, is just a key parameter to describe energy propagation rate. A larger |v g | may lead to a smaller τ α . Based on the parameters obtained from the dispersion fittings, v g of three samples are calculated and plotted as a function of H in Fig. 5(b). All three v g have negative values when H < ~10.5 kOe, implying that within this field range the group velocity is pointing in the opposite direction with the wavevector 24,25 , and the spin wave should be so-called BVMSW. While H > ~10.5 kOe, all three v g have positive values, the spin wave should be so-called forward volume magnetostatic spin wave (FVMSW). Typical excitation structure for BVMSW is associated with an effective field parallel to the film plane, while for FVMSW it is done with an effective field perpendicular to the film plane. In our experiment, the out-of-plane angle of the equilibrium magnetization increases with increasing H. Thus, BVMSW and FVMSW can be excited possibly with different value of H. However, within the field range of 0-8 kOe applied in our experiment, the effective-field orientation angle θ is always larger than π/4 so that the in-plane component of effective field is dominant. Thus, BVMSW is excited preferably.
The inset in Fig. 5(b) shows the enlargement of v g within H range of 0-10 kOe. |v g | of three samples present non-monotonous dependence on H, and reach maximums at ~4.2 kOe. Taking account for cooperative influence of intrinsic α 0 and v g on τ α , the field dependence of the experimental τ α1 in Fig. 3(b-d) is more easy to be understood, and can be regarded as a superimposed influence of these two factors. The calculated τ α0 [ Fig. 5(a)] decrease with H, though the decreasing rates are slower than those of the experimental τ α1 (H). Further taking |v g |(H) into account, the decreasing rates would become faster in low field. While |v g | are approaching to zero again in higher field range, τ α1 (H) present slight increase. The relation between lifetime and group velocity discussed above should be another evidence for assigning the LF mode to BVMSW. Moreover, α VMSW in Fig. 4 all initially increase with H for three samples, reaching a maximum and then decreasing, approximately matching the field-dependence characteristic of v g . That further supports the above inference. The minimum of α VMSW is 0.0085, 0.0137, 0.0176 as H = 0.8 kOe for the 20, 60 and 100 nm samples, respectively. Accordingly, the intrinsic damping for each sample should be respectively smaller than these values.
In conclusion, fs-laser induced spin-wave dynamics in full-Heusler Co 2 FeAl 0.5 Si 0.5 films are studied by employing all-optical pump-probe polar MOKE spectroscopy with an out-of-plane external field applied. Two SWMs are excited. A higher frequency mode observed in the 60 and 100 nm samples is identified to be first-order PSSW mode. The second mode with lower frequency observed in all samples is excited more efficiently and identified to be BVMSW mode whose field dependence of frequency is similar to one of Kittel mode. The Gilbert damping of BVMSW mode shows especial extrinsic feature. The relationship between lifetime and group velocity is revealed. It is found that the frequency of BVMSW mode does not obviously depend on the film thickness but the lifetime and the effective damping appear to do so. BVMSW and PSSW can be efficiently excited in our out-of-plane experimental geometry, where large-angle magnetization precession is easy to be generated. In this case, the intrinsic nonlinear of Landau-Lifshitz equation may be helpful to understand the energy transfer from pump into certain SWMs via nonlinear interaction 3,26 .
Methods
The samples studied here are Co 2 FeAl 0.5 Si 0.5 films deposited on glass substrate by magnetron sputtering in a uniform DC field at room temperature with a base pressure better than 3.0 × 10 −6 Pa. The thickness of the samples is 20, 60 and 100 nm, respectively. The deposition rate is ~0.6 Å/s and the Ar pressure is ~0.72 Pa. All the films were annealed at 300 °C. The crystal structure of Co 2 FeAl 0.5 Si 0.5 has been studied by grazing incidence X-ray diffraction in ref. 27. Fully ordered L2 1 , partly ordered B2, and disordered A2 structures coexist in the films. The measurement using vibration sample magnetometry (VSM) shows the in-plane magnetized feature of the samples due to the demagnetizing field, and gives the saturation magnetization of 782 ± 6 emu/cm 3 .
A time-resolved magneto-optical polar Kerr configuration is adopted to measure the spin wave dynamics. Linearly polarized laser pulse train from a Ti:sapphire regenerative amplifier with a duration of 150 fs and a repetition rate of 1 kHz at the central wavelength of 800 nm is split into pump and probe with a pump-to-probe fluence ratio larger than 30. Both the pump and probe beams are almost incident normally on the sample surface. The pump beam is focused to a spot of ~150 μm in diameter, while the probe spot is located at the center of the pump spot and with diameter of approximately half that of the pump. The polar Kerr rotation of the reflected probe beam is detected by an optical balanced bridge and measured through a lock-in amplifier synchronized to an optical chopper which modulates the pump beam. The detailed description on this time-resolved Kerr setup can be found elsewhere 28 . A variable magnetic field generated by an electromagnet is applied nearly normal to the sample plane to generate larger precession angle under the laser excitation. All measurements are performed at room temperature.
The excitation geometry is shown in Fig. 1(a). The pump pulse causes the ultrafast demagnetization and transiently modulates the magnetic anisotropy, leading to the initial equilibrium effective field H eff deviated to a new direction along ′ H eff . Then, a torque is exerted on the magnetization M, and hence launches the precession around ′ H eff 18 . The length of M and the magnetic anisotropy recover quickly due to the spin-lattice relaxation and heat diffusion 16 , but M keeps on precession in a much longer time scale until its orientation returns to that of H eff again. | 4,293.8 | 2017-02-14T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Intrinsic three-body nuclear interaction from a constituent quark model
We study the short distance part of the intrinsic three-nucleon interaction in a constituent quark model with color-spin interaction. For that purpose we first calculate the transformation coefficient between the tribaryon configuration and their corresponding three baryon basis. Using a formula for the intrinsic three-body interaction in terms of a tribaryon configuration, we find that after subtracting the corresponding two-baryon contributions, the intrinsic three-body interaction vanishes in flavor SU(3) symmetric limit for all quantum numbers for the three nucleon states. We further find that the intrinsic three-body interaction also vanishes for flavor-spin type of quark interaction.
I. INTRODUCTION
The short distance part of the baryon-baryon interaction is intricately related to the properties of dense nuclear matter. Historically, the approaches describing the baryon-baryon interaction evolved with our understanding of strong interaction. They range from the early nuclear potential models, such as the Paris potential [1] or the Bonn potential [2], quark cluster model [3,4], modern field theoretical approach based on chiral lagrangians [5][6][7][8], to the recent direct calculation of nuclear force from lattice QCD (LQCD) [9,10]. In particular, it is worth noting that the recent lattice calculations are based on flavor SU(3) non-symmetric case with almost physical pion mass [11]. The study of three-body nuclear force also has a long history starting from the pion mediated interaction [12] to modern day chiral effective field theory [13]. However, there are only a few studies using quark based approaches [14][15][16][17], which would become more relevant at short distance and hence in very dense nuclear matter.
Recently, there is a renewed interest in the nuclear three-body forces as they are related to solving the so called "Hyperon puzzle in neutron stars". One way to explain the mass of the recently observed neutron stars [18,19] that are larger than previous expectations is to introduce repulsive three-body interactions including hyperons in dense nuclear matter. Such forces will delay the appearance of hyperons to higher densities preventing the equation of state from becoming too soft. However it should be noted that the needed three-body repulsion is an intrinsic force and not the higher density effects coming from the accumulation of two-body interactions. Therefore, the same caution should be taken when we calculate the pure three-body interaction from a first principle calculation; namely the two-body force effects have to be eliminated. The intrinsic three-nucleon interactions have been calculated in LQCD [15,16], which find that the three-nucleon potentials are repulsive at short distance in the isospin 1 and 0 channel. Since the LQCD has reached the level of precision calculation for *<EMAIL_ADDRESS>†<EMAIL_ADDRESS>the two-body nuclear force with realistic quark masses, it is still a challenge to analyze the three-body interactions for all possible quantum numbers with reliable precision.
In this work, we will present a constituent quark model calculation for the intrinsic three-nuclear interaction.
As for the two nucleon potential, it was first noted within the quark-cluster model that the short range interaction is predominantly determined by Pauli principle and color-spin interaction [3,4]. Recently, we have made the quark model conjecture more concrete by comparing and showing that the quantum number dependent short distance part of the baryon-baryon potential extracted from lattice QCD can be well understood in terms of the interquark interaction within a constituent quark model [20]. The color-spin-flavor structure with the color-spin interaction between quark pairs in the six quark state provides the mechanism for the repulsion or attraction with different flavor and spin quantum numbers. By analyzing the color-spin-flavor wave function and all possible diquark configuration contributing to a given six quark states, we have shown that the interaction energy ratios between different flavor sates calculated from a constituent quark model show good agreement with those in LQCD [21] in both flavor SU(3) symmetric and non-symmetric cases. These results suggest that the Pauli principle and color-spin interaction are key inputs responsible for the baryon-baryon interaction at short distance.
For the three-baryon interaction, in a previous work using the constituent quark model approach [17], we showed that the static three-baryon configuration are repulsive at short distance . However, the result includes all possible two-body interaction effects. Therefore, in this work, by fully subtracting out the two-baryon contributions we isolate the pure three-body interaction strength at short distance for all possible quantum numbers. For that purpose we extend the calculation for the transformation coefficient between the dibaryon configuration and the baryon-baryon basis obtained by Harvey [22,23] to all possible tribaryon configurations and calculate the coefficients between the tribaryon configuration and their corresponding three baryon basis. After subtracting the corresponding two-baryon contributions, we find that the intrinsic three-body interaction vanishes in flavor SU (3) symmetric limit. Additionally, we find that the intrinsic three-body interaction vanishes not only for color-spin interaction but also for flavor-spin interaction.
This paper is organized as follows. In sec.II, we classify all possible flavor states of three baryons in flavor SU(3) symmetry. In sec.III, we introduce the Jacobi coordinate for tribaryon configuration and represent the explicit form of relative kinetic energy by taking the Gaussian form for the spatial part of the total wave function of a tribaryon. In sec.IV, we represent the formula for the intrinsic three-body force in terms of a tribaryon configuration. In sec.V, we show the results for the transformation coefficients between the tribaryon configurations and their thee-baryon basis. Using these coefficients we calculate the intrinsic three-body interaction energy. Finally, sec.VI is devoted to summary and concluding remarks.
II. FLAVOR STATES OF THREE BARYONS
In flavor SU(3) symmetric limit, the possible flavor states of the dibaryon which can be constructed from two octet or decuplet baryons are as follows.
where m is the multiplicity. Similarly, we can consider the three-baryon interaction in terms of a compact tribaryon configuration and represent the possible flavor states as follows.
In our previous work [17], we classified the possible flavor and spin quantum numbers of the tribaryon states assuming the quark orbitals to be totally symmetric, and calculated their static interaction energy using color-spin interaction in both flavor SU(3) symmetric and breaking cases. When the spatial part of the wave function is symmetric, there are in total fifteen possible flavor and spin states, all of which can be shown to be highly repulsive except for the (F,S)=(1,9/2) state. However, the repulsion in a compact tribaryon configuration can also come from the sum of two-baryon interactions within the compact configurations. Therefore, to isolate the intrinsic three-baryon interaction one needs to subtract the contributions from two-nucleon interactions in the tribaryon configuration. In the following sections, we will first introduce the Jacobi coordinates and then present our method to define and isolate the intrinsic three-baryon interaction.
III. JACOBI COORDINATE FOR NINE QUARK SYSTEM IN THE THREE-BARYON CONFIGURATION
We can represent the Jacobi coordinate for tribaryon in three-baryon configuration in flavor SU(3) symmetric limit as follows.
Here x 1 -x 6 describe the three set of two coordinates for three baryons, whereas x 7 and x 8 represent the relative coordinates among the three baryons. Additionally, we can choose the following Gaussian function as totally symmetric spatial wave function.
where N is the normalization factor and a the variational parameter. Then, the relative baryon kinetic terms in the tribaryon associated to x 7 and x 8 are as follows.
where m q is the constituent quark mass. The starting non-relativistic Hamiltonian for quarks from which we can also obtain the relative kinetic terms are given as follows.
where N is the total number of quarks. The two-body color-color and color-spin interaction terms can be expressed as matrix elements times a potential function that depends on the relative distance between the two quarks [24].
Here, λ i , σ i are respectively the color and spin matrix of quark i, whereas f, g are potential functions. The masses of the baryon M B , dibaryon M D and tribaryon M T can be calculated using this hamiltonian with the trial wave function in Eq. (3). We can represent the two-body interaction as follows [20].
where M Di is the mass of dibaryon i, M Bi,1 and M Bi,2 are respectively the mass of baryon 1 and 2 in D i . In the example we have in Fig. 1, B 1,1 and B 1,2 correspond to B 2 and B 3 . K rel,Di is relative kinetic energy between two baryons within the dibaryon i.
We can decompose the mass of a tribaryon into three two-body interactions, intrinsic three-body interaction, the sum of three baryon masses and the additional kinetic terms as follows.
where T,B represent tribaryon and baryon, respectively. Then, we can represent the intrinsic three-body interaction as follows.
In the following, we will take the flavour SU(3) limit and further take the interquark distances inside the baryon, dibaryon and tribaryon to be the same. In a previous work [20], we have found that taking such a limit one is able to reproduce the lattice result for baryon-baryon potential at short distance in the SU(3) symmetric limit and as well as the lattice result at almost physical quark mass, which is not so different from the SU(3) symmetric limit. In such a limit, the sum of two-body color-color interaction within a dibaryon configuration will cancel those from the two threshold baryons, while the effects from the color-spin interactions remain. Therefore, one can conclude that the Pauli principle, taken into account by properly constructing the color-spin-flavor wave function, together with the color-spin interaction provide the mechanism for short distance baryon-baryon interaction.
For kinetic terms, there are in total 8 terms in a tribaryon, 5 terms in a dibaryon and 2 terms in a baryon, respectively. Because all kinetic terms are the same in flavor SU(3) symmetric limit, these terms cancel to each other in Eq.(10) as long as the quarks inside either the baryon, dibaryon or tribaryon occupy the same spatial size. Together with the previous argument on color-color interaction, one finds that only the color-spin interaction from the hadrons are relevant in the second equation in Eq. (10) Therefore, from now on, we will use the following formula for the intrinsic three-nucleon interaction, where we neglected the additional kinetic terms and use only the color-spin part of the respective hadron masses.
Here, M T,D,B 's contain only the contribution from the color-spin matrix element in Eq. (5) where the magnitude of the spatially integrated part of g in Eq. (7) will be common for all quark pairs in the tribaryon, dibaryon and baryon.
V. TRANSFORMATION COEFFICIENTS
As we can see in Eq. (1), all flavor states of the dibaryon except for the flavor singlet state contain two channels in terms of baryon-baryon flavor configuration. For example, the flavor octet dibaryon, contains both the 8 ⊗ 8 and 8 ⊗ 10. We can determine the fractions of different channels in a dibaryon configuration using transformation coefficients [22,23]. In Table I, we represent the transformation coefficient T 2 (D, B 1 ⊗ B 2 ) for the dibaryon in terms of normalized probability of baryon-baryon flavor configuration excluding the hidden color state. TABLE I. Transformation coefficients T2(D, B1 ⊗ B2) for dibaryon. These coefficients show the ratio among different baryon ⊗ baryon channels. The first row and column represent the corresponding baryon ⊗ baryon and dibaryon channels, respectively. Empty boxes represent Pauli forbidden states.
In a similar way, we can calculate the transformation coefficients for tribaryon decomposing into a baryon and a dibaryon, where the results for the normalized probability T 3 (T, B ⊗ D) are given in Table II. The transformation coefficients can be calculated using a baryon and a dibaryon basis. Using Young-Yamanouchi basis of S 9 symmetric group [25], we can construct the outer product state of a baryon and a dibaryon to satisfy certain flavor and spin symmetric property. Then, we can calculate the transformation coefficients T 3 (T, B ⊗ D) as follows.
Using these coefficients, we can transform the expression for the intrinsic three-body interaction strength given in Eq. (11) to the corresponding three-body interaction strength in a specific three-baryon channel as follows.
Here, the subscript k denotes the possible baryon flavor combinations contributing to a given flavor spin state in the tribaryon configuration given in Table III. Additionally, j corresponds to a possible dibaryon state in a corresponding tribaryon configuration, T 2 (D, B 1 ⊗ B 2 ) and T 3 (T, B ⊗ D) are transformation coefficients of dibaryon and tribaryon given in Table I,II, and P (T, (B 1 ⊗ B 2 ⊗ B 3 ) k ) is the probability of three-baryon channels for each tribaryon configuration given in Table III, which can be obtained by combining Table I,II. Additionally, since the color-spin-flavor wave function of a tribaryon is totally antisymmetric, the contributions coming from the three possible dibaryons are the same, leading to the factor 3 in the second term in Eq.(13) instead of the summation for dibaryons in Eq. (11).
Let us now calculate the intrinsic three-nucleon force based on Eq. (13). We first consider 8⊗8⊗8 interaction contributing to the flavor spin state (F,S)=(8, 1 2 ). Since there are five possible dibaryon⊗baryon states containing 8⊗8⊗8 in (F,S)=(8, 1 2 ) tribaryon, which are where I g is the expectation value for the spatial part of the color-spin interaction, which are common to all states.
Additionally, we can also consider the intrinsic threebody interaction strength including all possible threebaryon channels in tribaryon configurations with given quantum number. In order to calculate this, we need to determine the ratio among possible three-baryon channels for each tribaryon state. We represent the ratios in Table III which can be obtained using Table I,II. Then, the formula for three-body interaction is transformed as follows.
where (B 1 ⊗ B 2 ⊗ B 3 ) k denote the possible three-baryon channels representing the configurations from 8⊗8⊗8 to 10⊗10⊗10 through the different value of the k index.
Using this formula, we can also verify that the intrinsic three-body interaction for all tribaryon configurations are zero.
One can show that the intrinsic three-nucleon interaction also vanishes for flavor-spin type of two-quark interaction [26]. For flavor-spin interaction, we can use the following formula in flavor SU(3) symmetric limit.
where C F (C C ) is the first kind of Casimir operator of flavor (color) SU(3) [27]. We represent the expectation value of this flavor-spin factor for dibaryons and tribaryons in appendix A. Using the same transformation coefficients, we can calculate the intrinsic three-body force for flavor-spin interaction. Similar to the color-spin interaction case, we find that the intrinsic three-body interaction vanishes for all quantum numbers.
VI. CONCLUSION
In this work, by studying the compact tribaryon configurations in flavor SU(3) symmetric limit and subtracting out the contributions from the two-baryon interactions, we found that the intrinsic three-baryon interaction at short distance vanishes for all quantum numbers. Because we are using a constituent quark model based on two-body quark interactions, when we calculate the mass of a tribaryon, we automatically include quark based baryon-baryon interaction and interquark interactions within a baryon. The quark interaction in the tribaryon configuration cancels those from the dibaryon and baryon configuration when extracting the intrinsic threebaryon interaction based on Eq. (11). It is interesting to note that the number of quark interaction also cancels in Eq. (11). There are total 9 2 quark-quark interaction terms in a tribaryon configuration. When calculating the intrinsic three-body interaction we consider three possible dibaryon configurations in a tribaryon, so there are 3× 6 2 two-quark interaction, while there are three baryons, contributing 3× 3 2 terms. Therefore, considering Eq. (11), one notes that the number of two body terms cancel in the intrinsic three-body interaction because 36-45+9=0.
So far, our result was based on using two-body quark interactions. On the other hand, we can consider intrinsic three-body interaction using intrinsic three-quark interactions such as the f -type or d-type [17] interaction, which can not be decomposed into two-quark interactions. However, summing over all three quark interaction within a color singlet state composed of N quarks, one finds the following formula.
i =j =k where f, d are respectively the antisymmetric and symmetric constants for SU(3), and C 1 (q) is the first Casimir operator of each quarks. As we can see in Eq. (17), the f -type interaction always sums up to zero while the d type of interaction show linear dependence on the total number of quarks, which suggests that it cancels in Eq. (11) so that it does not affect the intrinsic threebaryon interaction. Therefore, we can conclude that the short distance part of the intrinsic three-body interaction vanishes in the flavor SU(3) symmetric limit. In a realistic flavor SU(3) breaking case, the cancellation will not be exact. Hence, we need to look at the intrinsic three-body force with realistic strange quark mass taking into account the spatial dependence that will be different for all quark pairs. However, it is known that the short distance part of the baryon-baryon potential calculated from lattice calculation for the flavor SU(3) breaking case is similar to those obtained in the flavor SU(3) symmetric limit [20]. Hence while a realistic calculation should be a work in the future, we believe that the dominant contribution cancels such that the intrinsic three-nucleon interaction will be small also in the flavor SU(3) broken case.
A. Matrix elements of color-spin and flavor-spin interaction
Here, we summarize the matrix elements of color-spin and flavor-spin interaction for dibaryon [28] and tribaryon [17] configurations in Table IV | 4,075.4 | 2019-08-22T00:00:00.000 | [
"Physics"
] |
Phase contrast reflectance confocal brain imaging at 1650 nm
Abstract. Significance The imaging depth of microscopy techniques is limited by the ability of light to penetrate biological tissue. Recent research has addressed this limitation by combining a reflectance confocal microscope with the NIR-II (or shortwave infrared) spectrum. This approach offers significant imaging depth, is straightforward in design, and remains cost-effective. However, the imaging system, which relies on intrinsic signals, could benefit from adjustments in its optical design and post-processing methods to differentiate cortical cells, such as neurons and small blood vessels. Aim We implemented a phase contrast detection scheme to a reflectance confocal microscope using NIR-II spectral range as illumination. Approach We analyzed the features retrieved in the images while testing the imaging depth. Moreover, we introduce an acquisition method for distinguishing dynamic signals from the background, allowing the creation of vascular maps similar to those produced by optical coherence tomography. Results The phase contrast implementation is successful to retrieve deep images in the cortex up to 800 μm using a cranial window. Vascular maps were retrieved at similar cortical depth and the possibility of combining multiple images can provide a vessel network. Conclusions Phase contrast reflectance confocal microscopy can improve the outlining of cortical cell bodies. With the presented framework, angiograms can be retrieved from the dynamic signal in the biological tissue. Our work presents an optical implementation and analysis techniques from a former microscope design.
Introduction
In biological tissue, water absorption and associated O-H vibrational states lead to increased attenuation of light around 1450 nm. 1 However, since the scattering of light is mostly inversely proportional to the wavelength, higher wavelengths (1600 to 1850 nm) have an increased effective attenuation length. 2 With the advent of new NIR-II sources and low noise detection in the same spectral range, difficulties associated with imaging in this range have been reduced and therefore increase the interest in high wavelength imaging in-vivo microscopy.Some notable work led to demonstrations of three photon imaging, 3 confocal fluorescence imaging via quantum dots excitation 4,5 and development of new probes targeting this spectral range. 2,6n recent work, development of a long-wavelength reflectance confocal microscope has demonstrated good endogenous imaging capabilities when exploiting the NIR-II band. 7olarization filtering was used to maximize signal to noise ratio (SNR) leading to good imaging depth in tissue despite using low excitation power.However, some ambiguity in the intrinsic signal may arise when comparing vascular structures to myelinated axons.Such ambivalence was resolved in the aforementioned work via combining with molecular imaging, such as third harmonic generation (THG).
There exists a huge body of work in ophthalmology imaging on the potential of phase contrast to help identify cellular interfaces.Building from the standard adaptive optics scanning laser opthalmoscopy (AOSLO) imaging setup, phase contrast annexed to an AOSLO system was introduced by Sulai et al. 8 The lateral separation of the microscope's point spread function enhanced the overall contrast and ability to detect micro-features of the system. 9A similar approach has not been investigated in the brain since.However, usage of NIR-II spectral range exhibits reduced scattering of light, which may facilitate the implementation of phase-contrast imaging and prove beneficial if applied to the reflectance confocal microscopy setup.
In the absence of a femtosecond source to generate THG, vascular angiography could benefit from a technique similar to speckle analysis in optical coherence tomography (OCT).Based on high-frequency temporal filtering of the signal, OCT is able to retrieve erythrocytes paths invivo. 10A similar approach to the NIR-II reflectance confocal microscope could help distinguish axons from blood vessels in cortical tissue.
In this study, we investigate whether the combination of phase contrast scheme with NIR-II reflectance confocal microscope can provide intrinsic contrast to cells, including erythrocytes in the lumen.The study will show that combining this imaging setup with high frequency temporal filtering proves an efficient framework to detect the micro-vascular network structure (or angioarchitecture) and differentiate dynamic elements with flow such as blood vessels from static ones in the cortex.Our report describes the imaging setup, methods for dynamic structures imaging and in-vivo test with mice's skull left intact to test the capabilities of the custom microscope.
Animal Groups and Surgery
The Animal Research Ethics Committee of the Montreal Heart Institute approved all procedures described here, in accordance with the Canadian Council on Animal Care recommendations and the protocol for this study was accepted under the ID 2023-3257.Mice underwent a diet of TEKLAD GlobaL 19% protein extruded rodent diet (Envigo) and were kept under a light/dark cycle of 12 h.Clean drinkable water was at all times available.We used n ¼ 9 C57BL/6 J mice (5M and 4F) kept in separate cages.Two distinct surgical procedures were used to enable cortical imaging with either intact skull or through a conventional craniotomy.For the craniotomy preparation, a cranial surgery was performed before the imaging sessions following a protocol identical to Lu et al. 11 During the surgery, body temperature, respiration rate and heart rate were monitored (LabeoTech, Canada).Lidocaine was injected onto the surface of the scalp for analgesia and the scalp was removed all under anesthesia (3% isoflurane in oxygen).For the conventional cranial window, a micro drill was used to open the skull.A quarter wave plate (QWP, WPH502, Thorlabs, Inc.), cut in four square pieces, were used as coverslips.UV sensitive dental cement was used to fix the QWP and UV light was shone for 20 s to cure the dental cement while a custom arm kept the cranial window slightly pressured on the cortical surface.For intact skull imaging, refractive index matching acrylic glue (zap-a-gap CA+) was used to fix the reshaped (QWP) directly on the skull.Finally, a titanium bar was glued onto the head to serve as an anchor point to maintain the mouse brain fixed during imaging sessions.The imaging sessions were then performed while under isoflurane anesthesia (reduced to 1% to 2%), on the heating pad, while keeping the head of the mouse immobile throughout the imaging process (see Fig. S1 in the Supplementary Material for graphical representation).All experiments were terminal.
Optical System
The optical system design is similar to what is described in Xia et al. 10 and is shown in Fig 1(a).A polarized superluminescent diode source (FPL1059P, Thorlabs) injects a 1650 nm light in the system with a collimator (ZC618SMA-C, Thorlabs).A first half-wave plate is used to optimize the transmitted light in a polarized beam splitter and a second half-wave plate changes the polarization of the excitation light to enable redirection of the collected light to the detectors through a polarization filtering scheme (see below).The raster scan setup is made from two galvanometric mirrors (Thorlabs) combined through two parabolic 90 deg off-axis mirrors (MPD129-M01, Thorlabs) to discard any beam walking effect.The excitation light is then expanded by a 6× beam expander to fill the back aperture at the objective plane.In order to dissociate the excitation light and the spurious reflections on the lenses from the collected light, a quarter-wave plate is inserted in the optical path after the microscope objective (XLPLN25XWMP2, Olympus) and serves as cranial window (see above).The back-and-forth passage in the later plate will cause a rotation of the polarized light, which will be refocused and descanned to be reflected at the polarization beam splitter.Moreover, a heavy water drop was used to couple the microscope objective to the cranial surface and to provide a liquid media, which has low water absorption at 1650 nm.A 60 mm lens is used to focus the detected light on the edge of the silver coated knife edge mirror (MRAK25-P01) and two other lenses are placed to collimate the separated beams back towards collimators (PAF2P-18C) fixed to fibers having a single-mode diameter of 10 μm.This configuration to enable phase contrast corresponds to a wave vector domain separation in two halves.Hence, reproducing the Fouceault effect in both beams sent to the fibered detectors.This decomposition of the detected beam in two parts similar to the positive and negative Zernicke polynomial Z 1 3 as shown to enhance the visibility of single cells in retina imaging. 12,13ibers were connected to superconducting nanowire single-photon detectors (Opus One, Quantum Opus).This configuration provides a pinhole equivalent to ∼0.8 Airy unit.The galvanometric mirrors were operated via an acquisition card (Multifunction I/O device, National Instruments) connected to a computer.Both detector outputs were connected to counter channels and were synchronous with the raster scan setup.The signal is sent back to the computer to create the final image.Subtraction of both images for phase-contrast imaging is done via custom software implementation and the optical system was programmed in python.
Repeated Line Acquisition Scheme
The ability of reflection confocal microscopy to image cellular structures is an opportunity to take advantage of temporally changing signals from cortical tissue.Using a methodology inspired from the OCT angiography, 14 speckle temporal analysis may be useful in this imaging system to provide similar outputs.Beyond standard scanning to generate images, we applied the repeated line method and acquired up to 100 images at random locations with a sampling frequency of ∼250 kHz per point to investigate the capabilities of the system at retrieving vascular structures.
Optical System Characterization
wTo quantify the imaging capabilities of the optical system, the lateral edge spread function (ESF) and axial line spread functions (LSFs) were analyzed.These methods were preferred as the smallest element of the available USAF 1951 target was the 6'th element of the 7'th group, which are shown in the Fig. 1(c).For the ESF, a USAF 1951 resolution target (R1L1S1P, Thorlabs) was used to create a strong contrast with minimal power input (12.5 nW) to ensure no saturation at the photon counter detection rate.Mapping the ESF from the number near the 7'th group and 5'th element gives the lateral LSF via the derivative.An exponential curve fit provides a lateral full width half max (FWHM) of 920 AE 10 nm in the horizontal direction and 820 AE 10 nm in the vertical direction.This discrepancy between both axes may arise from the parabolic mirror alignment.However, the Rayleigh criterion for lateral resolution with our setup provides a value of ∼830 nm, which is in the same range of values as the LSF here analyzed, which tends to indicate that our setup is almost diffraction limited [Figs.1(b) and 1(c) and 1(e) and 1(f)].For the axial LSF, a mirror surface was imaged with a stack of 50 images separated by 1 μm steps.A curve fit over the resulting intensity profile shows an axial FWHM of 8 μm [Fig.1(d)].
Phase Contrast Imaging can Distinguish Individual Cell Bodies
The intrinsic contrast retrieved from confocal imaging at this wavelength is partly confounded since different morphological structures can exhibit relatively similar signals.To provide more insight on the cortical constituents, a phase-contrast scheme was introduced to the reflective confocal microscope.150 μm wide images were acquired at different depths to test the system.To reproduce the intrinsic common signal from reflective confocal microscopy the signal of both collection channels can be added.Such acquisitions capture different structures such as myelinated axons, blood vessels and other glial cells up to 800 μm.As shown in Fig. 2, red and yellow arrows point to vascular structures and myelinated axons respectively.By subtraction of both channels, the phase contrast scheme delineates the reflective surface of the cellular bodies.Normalizing the signal background to a value around 0 outlines the cellular components in the image and clear out the low spatial frequency signal.Moreover, a left-right differentiation enhances cellular interfaces producing a high spatial frequency signal in the tissue.Anatomical features acquired in the cortex conform to descriptions found in atlases.However, the distinction between myelinated axons and small vessels remains hard to identify, since the only difference is the edge of each structure.A quantitative framework could be a useful tool to automatically label vasculature and anatomical features such as axons.
Temporal Analysis Outlines a Clear Vascular Network
From the images collected by the reflective confocal microscope, raw outputs showed that some tubular structures had internal intermittent signals.Hence, by performing a temporal variance computation (σ t ) of the differential image (Δfðx; y; tÞ) over the temporal mean (μ t ðx; yÞ) of the addition of the two channels (f 1 ðx; y; tÞ, f 2 ðx; y; tÞ) over "T" images, dynamic structures were outline from static components in the image E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 1 ; 1 where : E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 2 ; 1 To provide clear images of the vascular networks, a typical acquisition at 250 μm deep, 256 by 256 pixels at 102 kHz sampling per point with three consecutive line acquisition is shown at Fig. 3.The resulting image is the mean of five consecutive line scan acquisitions, which brings the time of acquisition to provide an image of the vascular component to around 10 s.Repeating this acquisition process while lowering the microscope objective down enables the digital reconstruction of angiograms.As shown in Fig. 3 in the dotted box, the same processing methodology is applied to an imaging sequence while 3 μm steps provided by a linear stage drive the microscope's objective down towards the cortex.The image stack presented starts at 100 μm and goes up to 280 μm deep in the somatosensory cortex.
Vascular Components can be Retrieved up to a Depth of 800 μm
To highlight the potential for intrinsic imaging in the NIRII window, imaging sequences were performed to retrieve vascular structures up to 800 μm deep in the somatosensory cortex.Examples at different depths of raw signal from one channel, the presented phase contrast scheme and the temporarily changing structures highlighted by the variance computation are shown in Fig. 4. The acquisition time was increased by performing a higher number of repeated lines of acquisition in the image.For instance, at 400 μm, five consecutive repetitions for each line was performed.Deeper at 800 μm, 10 repetitions were acquired thus increasing the acquisition time at different depths accordingly.Trials to recover vascular signals over 800 μm were unfruitful even with 20 repetitive acquisitions.Fig. 3 Raw images and signal processing routine from the reflective confocal microscope with a phase contrast scheme.The raw images from both channels are subtracted from one another to create the phase contrast image.Repeated acquisition on the fast axis scanner dimension provides temporal information which is retrieved via a temporal variance computation divided by the addition of the two raw images.The resulting image provides a vessel network map arising from the erythrocyte passage in the blood vessels.In the dotted box, the same image processing technique is repeated over 180 μm thickness in the somatosensory cortex to retrieve the volumetric vascular network.
Erythrocytes can be Monitored
Rapid scans of dynamic structures were performed in order to observe the erythrocytes pass-by in capillaries.By fixing the scan length to 20 μm with a line rate of 800 Hz over the lumen of a capillary, erythrocyte passage was monitored.As shown in Fig. 5, this imaging mode can be performed up to 800 μm deep in the cortex.Comparison between different depths showed a similar level of signal coming from erythrocytes in the lumen when increasing the power of the excitation light.
Intact Skull Imaging can also Retrieve Cortical Cellular Structures
Usage of NIR-II spectral range enables us to acquire micro-anatomical images through the skull with minimal scattering.Images at different depths through the skull were compared to acquisitions obtained after craniotomy.Figure 6 shows acquisitions at different depths of cells' bodies comparing a complete craniotomy and an intact skull imaging with a 20 μm field of view.Up to 500 μm, the images reproduce glomerular bodies as observed in the craniotomy data.However, at 800 μm, the noise coming from the relatively high illumination hampered the nature of the signal and what appeared to be the cells body was lost in the noise.In contrast, during an open cranial window imaging, superior SNR provided a good delineation of the cellular constituents even at a depth of 800 μm.To quantify the effect, the SNR was computed as the square of the expected value divided by the standard deviation of the squared data.While the craniotomy images provide a decent SNR, the intact skull images exhibited lower SNR.When applying the vascular imaging protocol, only peripheral vessels up to 200 μm deep were able to be imaged with the temporal filtering method and no capillary vessels were retrieved in this manner either.Microscope Reflectance confocal microscopy can provide convoluted signals with low selectivity for the biological constituents.This low selectivity is a result from the nature of the captured signal, which comes from reflected light onto the interface of media provided by changes in refractive index and absorption throughout the biological tissue.Past observations with phase contrast imaging focusing on lymphocytes through a transparent medium were able to depict the immune response from lipopolysaccharride induced inflammation. 12Such discerning power would be highly desirable for cortical imaging, which could eliminate the need for fluorescent dyes for cortical cells in some experiments.Using a molecular probe entails a biomolecular bond with the observed cell, which might interfere with their normal functions.The need for a biological probe is eliminated in phase-contrast acquisition, although the feasibility of such an intrinsic imaging technique in the presence of cortical tissue scattering needed to be investigated.Here, we demonstrated that the reduced scattering from the usage of the NIR-II spectral band could enable phase contrast up to 800 μm deep in the cortex.This result appears to be a considerable advance in the imaging depth of the technology since reflectance confocal imaging in the NIR range provides depth neighboring the 500 μm with similar craniotomy technique. 15,16bserving the images recorded with our microscope using phase contrast, the different anatomical structures, such as myelinated axons, blood vessels, and glial cells, seem to provide signals compatible with this mechanism and clearly delineate structures, such as seen in Fig. 2. Myelinated axons, distinguishable by their elongated shape and static signal, reflected a significant amount of light, probably arising from the myelinated sheets or Schwann cells primarily constituted of fatty molecules.Schwann cells may be seen as little bumps arranged in elongated columns.Depending on microscope placement and image resolution, these myelinated axons can be also seen as continuous tubules.In the cases of erythrocytes and glial cells, the change of refraction index at the cellular wall may be the mechanism reflecting the detected light.Since the phase contrast imaging is known to exhibit signals, which distinguish wavefront differences, the shadow observed in Fig. 5 is a good indicator of round surface projecting light back to the objective.Such detected light is then separated in two halves in the wave-vector domain via the split detection scheme, thus creating a sensitivity of the angle of incidence of light over the biological structure.In the literature of phase contrast imaging, most applications of epi-detection phase contrast scheme are used with a highly reflective surface beneath the sample in order to illuminate through the cells to shed light to internal structures. 17Since the usage of in-vivo models restrains the possibility of such methodology, signal coming from the cortical tissue is mainly the reflected light from the cell surface.
Repetitive Image Acquisition Detects a Vascular Network Topology
Temporal analysis of the reflective signal yields a vascular network in the imaging depth of the confocal microscope.This capability of revealing the path of dynamically flowing erythrocytes stems from their reflection in the vessels' lumen.This variance imaging scheme was shown to delineate the vascular architecture with little signal contamination from static structures.The variance scheme while performing an image stack provided connectivity of descending vessels.This connectivity of the vascular map may arise from the erythrocyte brief passage through the imaging plane of the microscope, which is hard to decipher in the reflectance confocal microscope's raw images.Variance imaging of axons aligned with the optical axis did not generate sufficient contrast because collinear orientation does not produce enough reflection, so that their signal appears static.
Phase Contrast Scheme Discerning Power Degrades with Depth
The phase contrast scheme helps to reduce the background signal that could come from the optics since common signals in both channels are subtracted to one another.Moreover, the polarization filtering of the spurious reflection is able to remove all the characteristic noise up to 10 nW under the objective.The very high sensitivity of the system is due to the use of superconducting nanowire single photon detector (SNSPD, Quantum Opus) photon counters.Even with this sensibility, no structure was visible past ∼1 mm of depth.A similar system has been shown to image over the visual cortex of mice up to a depth of 1.2 mm. 7The different cortical area and usage of a split detection scheme may be the cause that impaired the capabilities of deep light collection.
Imaging of Cortical Structures Through the Skull
Imaging through the intact skull deteriorated the overall sensitivity of the microscope to the reflective constituents of the cortical tissue up to 800 μm depth in the cortex.The signal collected provided vascular images only at superficial depths (under 200 μm), hence restricting the capabilities of the imaging setup to perform through skull longitudinal observation.It is clear that the deterioration of the resolution arises from the passage through the skull, which could be avoided with different techniques, such as skull optical clearing 18 or adaptive imaging via frontwave manipulation. 19However, optical clearing requires access to the skull surface by fixing a QWP onto the surface of the skull, which prevents the manipulation required.Implementation of an adaptive optic component to the system may be more suitable for future experiments.
Limitations
While the proposed imaging system is relatively low cost and effective for generating phase contrast from intrinsic signals, the long imaging sequences needed for acquiring clear angiograms of the vascular structure seems to limit its potential.A way to reduce acquisition time could be via a fast scanning method with a resonant scanning mirror to push further the observation of biological cells' functions in the cortical tissue.
On the methodology front, the usage of a QWP as a cranial window may be inconvenient regarding surgery manipulations and window clearance.Advent of high numerical aperture microscope objectives with integrated QWP would open the way to skull clearing method without the need for creating cranial windows.
Moreover, the QWP placement may be suboptimal since the waveplates used as cranial windows are optimized for collimated light.With the placement under the objective with a numerical aperture of 1.05, the angled incoming light may not rotate its polarization completely.Hence, this effect may reduce the intensity sent to the detectors.However, to minimize the background noise, the QWP after the microscope objective ensures that the spurious reflexions coming from the microscope objective lies in an orthogonal polarization from the detected signal, which is why the present configuration was preferred.
Conclusion
In this work, a phase contrast scheme was integrated in a long-wavelength reflectance confocal microscopy.Derived images enable the detection of the glomerular aspect of singular glial cells while rendering tubular structures, such as myelinated axons and vessels.
Moreover, by exploiting temporal variance, signals arising from the erythrocyte passage in the lumen of blood vessels enable the reconstruction of connected angiograms.
Intact skull imaging was also shown to be feasible, which could be useful in certain applications not requiring deep cortical imaging.Future work will aim to accelerate acquisitions and investigate contrast associated with recruitment of immune cells.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
Disclaimer
F. Lesage has minor ownership in Labeo Technologies Inc.
Fig. 1
Fig. 1 Schematic presentation of the reflectance confocal microscope designed in this work (a).Polarization separation is provided by the quarter-wave plate which feeds back reflected signal to descan the received light.The knife-edge mirror is used to create the phase contrast imaging scheme.NiDAQ, National Instrument Acquisition card; SLD, superluminescent diode; λ∕2, halfwave plate; PBS, polarization beam splitter; SL, scan lens; TL, tube lens; and λ∕4, quarter-wave plate.KEM, silver-coated knife-edge mirror and PTDs, photon detectors.Axial image stack with 1 μm steps has been acquired (b) providing an axial LSF.The horizontal plane of the image shown is the depth axis.A USAF 1951 resolution target has been imaged to characterize the lateral resolution (c) and the ESF method was chosen.The intensity profile in the axial direction and lateral direction are shown and the FWHM was used as a metric for resolution.(d)-(f) Each intensity profile is plotted.For the axial resolution (d), a Gaussian curve is plotted over the data to provide the FWHM.The vertical and horizontal edge spread function are also plotted (e, f).Retrieval of FWHM is performed through the derivative of the line spread function and curve fitting of a Gaussian profile.
Fig. 2
Fig. 2 Imaging capabilities and phase contrast imaging at different depths.The images on the upper row are the addition of both channels of the reflectance confocal microscope providing the complete intrinsic signal from different cortical components.The subtraction of both channels, shown in the middle row, provides the phase contrast images which delineated the edges of the blood vessels (red arrows) and myelinated axons (yellow arrows).Close-ups of 30 μm field of view of cellular constituents are presented in the last row.Green boxes present the respective close-up images of cells.
Fig. 4
Fig.4Demonstration of vascular component retrieval at different depths.For each depth, the raw signal, the subtraction of both channels, and the variance computation are presented.Tubular components in the raw signal can be attributed to either myelinated axons or blood vessels, but the temporal computation scheme allows to differentiate dynamic vascular structures from static elements, i.e., axons.All scale bars are 50 μm wide.
Fig. 5
Fig. 5 Demonstration of erythrocyte monitoring.The variance computation of the images can generate a vascular network map that incorporates small capillaries in the cortex.Using the fast axis scanning mirror at 800 hz and by placing the confocal microscope on top of the detected capillary enables the visualization of the erythrocytes passage in the lumen.
Fig. 6
Fig. 6 Comparison between imaging through intact skull and a cranial window.SNRs are indicated next to the images at different depths.Images through the intact skull exhibit lower SNRs than their counterpart at the same depth through a cranial window.Through the skull, cells bodies are distinguishable up to 500 μm but no vessels were retrieved past 200 μm using the temporal filtering technique.Scale bars at 15 μm.
Structures from the Phase Contrast Reflectance Confocal | 6,127.8 | 2024-02-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Characterization of Predictable Quantum Efficient Detector over a wide range of incident optical power and wavelength
We investigate the Predictable Quantum Efficient Detector (PQED) in the visible and near-infrared wavelength range. The PQED consists of two n-type induced junction photodiodes with $Al_2O_3$ entrance window. Measurements are performed at the wavelengths of 488 nm and 785 nm with incident power levels ranging from 100 ${\mu}$W to 1000 ${\mu}$W. A new way of presenting the normalized photocurrents on a logarithmic scale as a function of bias voltage reveals two distinct negative slope regions and allows direct comparison of charge carrier losses at different wavelengths. The comparison indicates mechanisms that can be understood on the basis of different penetration depths at different wavelengths (0.77 ${\mu}$m at 488 nm and 10.2 ${\mu}$m at 785 nm). The difference in the penetration depths leads also to larger difference in the charge-carrier losses at low bias voltages than at high voltages due to the voltage dependence of the depletion region.
Introduction
Silicon photodiodes are widely used in the wavelength range from 300 to 1000 nm to detect light in various applications.Underpinning spectral responsivity scales based on silicon photodiode working standard detectors are the most straightforward solution for quantitative determination of optical power in these applications.Traceability to SI (International System of Units) has been traditionally established using absolute cryogenic radiometers [1,2] for calibration of the working standard detectors.Silicon photodiode detectors as primary standards would be attractive because use of cryogenic radiometers requires liquid-helium temperatures and dedicated operation personnel, resulting in high maintenance costs.Predictable Quantum Efficient Detector (PQED) provides such a solution where the spectral responsivity of a silicon detector, operated at room temperature, is determined by fundamental constants, wavelength and a small, predicable correction for reflectance and charge-carrier losses [3][4][5][6][7].
One way to decrease the reflectance losses is to apply a trap detector configuration instead of a single photodiode [8][9][10].Charge-carrier losses can be reduced in induced junction photodiodes [8,11,12,13] where the pn junction is produced by the electric field of trapped charge in the photon entrance window layer of the diode.Two induced junction photodiodes of the PQED are aligned in a wedged trap configuration providing a primary standard detector for visible wavelengths [3,4,14].In addition to calibration of working standard detectors, the PQED can be used in various applications in photometry [15] and in measurements of low optical power when operated at liquid nitrogen temperatures [16,17].
Evaluation of the internal quantum deficiency (IQD) of the PQED has a particular interest.When the recombination losses of charge carriers are small and precisely predicted, the responsivity of the detector can be estimated with low uncertainty.PQEDs made of p-type silicon photodiodes with thick SiO 2 coating have been validated relative to absolute cryogenic radiometers and show excellent stability of responsivity over ten years [4,18].On the other hand, production of p-type PQEDs requires access to suitable lightly doped p-type silicon wafers and time-consuming coating process.An alternative is to use n-type silicon wafers and Al 2 O 3 surface layer to produce the induced junction, which offers a simpler photodiode production process [19], well known in photodiode manufacturing industry.Furthermore, a software to predict the IQD of the n-type PQEDs with 100 ppm (parts per million) relative uncertainty was developed and successfully applied, using data from photocurrent vs. bias voltage measurements over a narrow range of incident optical power [19].
A new batch of n-type induced junction photodiodes for PQEDs was produced in this work.Several measurements were carried out to study optical properties of the photodiodes and PQED, such as evaluation of reflectance, IQD, spatial uniformity, and bias-voltage dependent photocurrent (IV curves).Measurements of the n-type PQED were done in the visible and near-infrared spectral region using several optical power levels over a wide range from 100 µW to 1000 µW.The n-type PQED has not been studied before in the near-infrared wavelength range.The PQED can produce a photocurrent with low charge-carrier losses until silicon starts to become transparent at infrared wavelengths approaching 1000 nm.The aim of this study is to broaden our knowledge of PQED operation in the near-infrared region and at power levels approaching the nonlinearity range of the photodiodes.We describe how some features of the IV curves cannot be seen in the linear scale presentation but are only visible in specific logarithmic scale plots.
Photodiode fabrication and detector assembly
The PQED was constructed of two n-type silicon photodiodes.Schematic cross-section of the photodiodes is presented in figure 1.The photodiodes were fabricated on highly resistive (> 10 kΩ•cm) double side polished 150-mm-diameter and 675-µm-thick n-type silicon substrates.In figure 1, p-diffusion areas represent the diode contacts.The active area of the photodiode is 11 mm x 22 mm.The induced junction of the photodiode is produced by Al 2 O 3 coating of the silicon substrate.Negative charge in Al 2 O 3 induces p-type inversion layer [19,20] over the active area of the photodiode.An early description of the induced junction photodiodes can be found in [11].
The fabrication of the photodiodes of this work follows closely the process flow described in [19].Here, the fabrication started by thermally growing a 400-nm-thick SiO 2 layer.The oxide functions both as a screen oxide for the implantation and a field oxide for the device.The pimplantation areas were patterned with photolithography and the oxide was thinned down to 70 nm for the implantation by wet etching with HF.The patterned front side was implanted with boron and the backside with phosphorus.The implanted areas were activated at 1050 °C.Field oxide was removed from the active-and contact-areas by wet etching.A 30-nm-thick Al 2 O 3 was deposited by atomic layer deposition.The Al 2 O 3 was removed outside the active area by wet etching.A 300-nm-thick contact aluminium (Al) was sputter deposited and patterned with wet etching.The device was finalized by 20 minutes of annealing at 425 °C in H 2 /N 2 ambient.The photodiodes were assembled in a light trap configuration as shown in figure 2. The PQED consists of two photodiodes aligned in such a way that seven reflections from the photodiodes take place before the incident beam leaves the detector.The angle between the photodiodes is 15° and the angle of incidence on the first photodiode is 45°.The photodiodes are placed inside a metal cylinder with 10 mm aperture diameter.The PQED is used at room temperature with dry nitrogen flow through the aperture to prevent dust and moisture contamination of the detector.Both photodiodes have own connectors to current measurement electronics which allows detector diagnostics by photocurrent ratio measurement.
Characterization measurements and data analysis 3.1 Experimental setup
To investigate properties of the n-type PQED we executed following measurements: spatial uniformity scanning of the detector responsivity, detector reflectance measurements, responsivity measurements against p-type PQED and measurement of photocurrent dependence on bias voltage at various incident power levels.The measurement setup (figure 3) includes two laser sources: an argon-ion laser at 488.12 nm wavelength with a Gaussian-like beam diameter of 1.3 mm (1/e 2 ) and a single longitudinal mode semiconductor laser at 784.83 nm wavelength with a diameter of 2.6 mm of the vertically polarized beam.Both lasers were simultaneously used only when aligning the 785 nm laser beam for reflectance measurements.An optical power stabilizer provided a stable p-polarized laser beam, and a wedge mirror with monitor detector was used for laser drift correction.Tested detectors were placed on a moving XY-stage to execute automated measurements.The PQEDs were reverse biased and the sum of the photocurrents from two photodiodes was recorded.
Spatial uniformity scanning
To evaluate the spatial uniformity of responsivity of the PQED, we made a scanning measurement of the detector.Here the 488 nm laser beam was used because of smaller diameter.The PQED was placed on the XY translational stage which moved at 0.5-mm steps in vertical and horizontal directions.Figure 4 shows that the uniformity of responsivity is about 60 ppm in the central area with the size of 2 mm x 1 mm.There are two small areas of reduced responsivity on the left side of the detector.They may be caused by dust particles or defects in the detector structure.Spatial uniformity of another n-type PQED within 30 ppm in the area of 4 mm in diameter has been measured in [19].
Reflectance and responsivity
A p-type PQED was used as the reference detector in responsivity measurements of the ntype PQED of this study.The PQEDs were placed on a moving stage and the setup automatically measured photocurrents I p and I n from the p-and n-type detectors, respectively.Each detector was connected to a separate current-to-voltage converter (CVC) with a reverse bias voltage of 5 V.The incident optical power was about 100 µW.During data processing, dark currents and offsets of multimeters and CVCs were corrected.The ratio I n /I p of the corrected photocurrent values is equal to the responsivity ratio of the PQEDs, when the same optical power is measured by both detectors.
The spectral responsivity of the PQED is given by where R 0 (λ) = eλ/hc = (λ/µm)/(1.23984W/A) is the responsivity of an ideal quantum detector expressed by the vacuum wavelength λ of the incident radiation and fundamental constants e, h, c.Parameters ρ(λ) and δ(λ) describe the spectral reflectance of the PQED and chargecarrier losses (IQD) of the photodiodes, respectively.To determine the difference of internal charge-carrier losses δ n (λ)δ p (λ) of the n-type and p-type photodiodes, the detector reflectance values need to be taken into account according to equation (1).
The reflectance of the detectors was measured with the use of a calibrated Hamamatsu silicon trap detector with a baffle tube as shown in figure 3.For both laser wavelengths the optical power in reflectance measurements was around 1000 µW.The reflected beam of the 488 nm laser can be observed with bare eye if the laser power is large enough.Since the reflection of the 785 nm beam is too weak to be detected by bare eye or luminescence cards, an extra step was applied.We used the 488 nm laser as an auxiliary beam and ensured that the paths of these two beams, passing the PQEDs, totally coincide over a long distance.After reflection from the PQED, the reflectance at 785 nm could be measured when blocking the 488 nm laser beam.In these measurements 10 V reverse bias voltage was applied to the detector because of the large incident power.1. Photocurrent ratio, reflectance, beam diameter correction, and internal quantum deficiency (IQD) difference of n-type and p-type PQEDs.Standard uncertainty of the photocurrent ratio measurements is 20 ppm.Other reflectance values are measured, but the reflectance of p-type PQED at 785 nm is obtained from a validated calculation [3] with a standard uncertainty of 8 ppm.
Table 1 shows the measured photocurrent ratios I n /I p , reflectances for the two wavelengths, and a correction due to the beam size difference caused by the nonuniformity of spatial responsivity (figure 4).Reflection loss values agree well with previous measured and calculated results [3,6,19].For small reflectance and IQD values, equation ( 1) can be used to approximate the photocurrent ratio as The rightmost column in Table 1 gives the resulting differences in the IQD values.It is clearly seen that the n-type PQED has larger charge-carrier losses than the p-type PQED.Furthermore, these losses are larger by (79 ± 31) ppm at 785 nm than at 488 nm wavelength.Estimates of absolute IQD of the n-type PQED are (188 ± 73) ppm at 488 nm and (267 ± 65) ppm at 785 nm, where the standard uncertainty is dominated by the uncertainty of the predicted responsivity of the p-type PQED [3,5].The values include a 6 ppm correction for the estimated absorption loss in the Al 2 O 3 layer [19].
Photocurrent dependence on bias voltage
To study recombination losses in the photodiodes, measurements of photocurrent as a function of bias voltage were carried out.We used reverse bias voltages between 0 and 18 V to record the change of the detector photocurrent when applying a constant optical power between 100 µW and 1000 µW to the PQED.As the bias voltage source, we used a Keithley calibrator, which can provide a very stable voltage signal in the range of ±20 V.
In the preliminary data analysis, the highest measured photocurrent was used as a reference value.Normalized photocurrent data are presented as q = I(V)/I max where I(V) is the photocurrent with the applied bias voltage V and I max is the largest of the recorded current values at a certain optical power level.However, the linear data plots of figures 5(a) and 5(b) do not provide a sufficient view to the overall behavior of the photocurrent values at all bias voltages.
A better view is obtained on the logarithmic scale which the selected normalization allows to use in the form Q = log(1q).The data of figures 5(c) and 5(d) on the logarithmic scale show two distinct negative slope regions as a function of the bias voltage.These characteristics are not easily visible on the linear scale.For the curves corresponding to 1000 µW power level, the slope changes at the bias voltage of 2.5 V for 488 nm and at 3 V for 785 nm.At 180 or 190 µW power, the corner point is at about 0.5 V at both wavelengths.Selection of I max to normalize the photocurrent values is practical but rather arbitrary.The question of a proper normalizing photocurrent can be addressed by multiplying both sides of equation ( 1) with the incident optical power P. Noting that R(λ)P = I(V) and solving for IQD gives where I sat (λ) = [1 − ρ(λ)] R 0 (λ)P is the saturation photocurrent of an otherwise ideal photodetector, except that ρ(λ) > 0. Equation (3) indicates that using I sat (λ) as the normalizing photocurrent instead of I max makes the vertical scale of the IV curves to correspond to IQD.In figures 5(a) and 5(b), the location of I sat (λ)/I max is approximated to be 188 ppm and 267 ppm above the average level of saturated I(V)/I max values, respectively, corresponding to the absolute IQD values given at the end of section 3.3.The average saturated levels can be estimated to be 0.99998 at both 488 nm and 785 nm.
Using the normalization of equation ( 3) on the logarithmic scale, the photocurrent dependence on bias voltage is reproduced in figure 6 in such a way that the curves at different wavelengths can be easily compared with each other.It is seen that on the logarithmic scale there is an approximately constant difference between the curves at different wavelengths, especially at voltages above the corner point.That conclusion would change if I max would be used as the normalizing photocurrent because the curves would overlap at high bias voltages.
Discussion
The measured IQD values at 488 nm are similar as reported in [19] for n-type PQED, but a good spatial uniformity is obtained over a somewhat smaller area.Furthermore, this work indicates that the IQD at infrared wavelengths appears to be considerably higher in n-type PQED than in good quality p-type PQED.
The purpose of bias voltage dependence experiments was to detect the saturation point of the photocurrent at the measured wavelength depending on the power level.In addition, comparison of I(V)/I sat (λ) ratios at a fixed bias voltage can evaluate linearity of the detector depending on the incident optical power.For example, it can be seen from figure 6 that in the conditions of the reflectance measurements (1000 µW power, 10 V bias voltage), the additional charge-carrier losses relative to the saturated IQD are smaller than 0.02 %.Such deviation causes nonlinearity in the responsivity, but the low value of such nonlinearity does not affect here the reliability of the reflectance measurements of the n-type PQED.
Crucial parameters in determining the logarithmic IV curves are laser stability and noise level.In most of these measurements the noise level did not exceed 60 ppm.Initially the saturation photocurrent of an ideal quantum detector to be used for normalization of IV curves is unknown.In this work, comparison with p-type PQED allowed to assign numerical values to I sat (λ) of n-type PQED.It is expected that properly normalized logarithmic IV curves are useful in fitting three-dimensional charge-carrier recombination models to the experimental data, with the final goal of low-uncertainty determination of the IQD of PQED photodiodes [21].
The increase of IQD with increasing wavelength for n-type PQED was measured for the first time in this work and needs to be discussed.The penetration depths are 0.77 μm at 488 nm wavelength and 10.2 μm at 785 nm [22].With exponential decay, only 2 ppm of incident power remains at 488 nm wavelength at the depth of 10 µm inside the photodiode.Thus all 488 nm light is absorbed within the depletion region, the width of which is calculated to be approximately 60 µm at 0 V bias voltage, 100 µm at 1 V bias and 350 µm at 20 V bias.For the wavelength of 785 nm, the situation is different at low bias voltages, because 3500 ppm and 80 ppm of incident power remains at the depth of the depletion region width at 0 V and 1 V bias voltages, respectively.Those values provide a qualitative explanation of figure 6 which indicates that the differences between charge-carrier losses at 785 nm wavelength and at 488 nm are larger at low bias voltages than at high voltages.
Figure 1 .
Figure 1.Schematic cross-section of the n-type induced junction photodiode.
Figure 2 .
Figure 2. Photodiode assembly and light path in a PQED.
Figure 3 .
Figure 3. Block diagram of the measurement setup.
Figure 5 .
Figure 5. Photocurrent dependence on applied reverse bias voltage for 488 nm laser beam in linear (a) and logarithmic scale (c) and correspondingly for the 785 nm laser beam [(b) and (d)].The data points corresponding to I(V) = I max cannot be presented on the logarithmic scale.
Figure 6 .
Figure 6.Comparison of photocurrent dependence on applied reverse bias voltage for 488 and 785 nm laser light at different levels of incident power.Note that use of the saturated photocurrent of an ideal quantum detector (I sat ) for normalization, instead of I max of figure 5, allows correct description of the differences of charge carrier losses at high bias voltages. | 4,209.8 | 2021-06-23T00:00:00.000 | [
"Physics"
] |
Spatial and temporal distances in a virtual global world: Lessons from the COVID-19 pandemic
The experience of COVID-19 prompted us to rethink the imperatives of distance for the organization of value-creating activities globally. We advance a conceptualization of distance as representing separation in both space and time and posit that these distance dimensions represent different kinds of separation and require varied theoretical attention. We delineate the intrinsic qualities of spatial and temporal distances and theorize the impact of this extended conceptualization of distance on major tenets of international business theory and their predictions regarding the patterns of international business activity. We illustrate the ways by which varying configurations of spatial and temporal distances serve different value-creating activities and draw their implications for countries’ global integration. We advance a call for more attention to time and temporal distance and their impact on the ways firms organize their value-creating activities in an increasingly virtual world.
INTRODUCTION
As a theory dedicated to the understanding of business activity whose essence is value creation across space, distance has been a central construct in international business theory [see Beugelsdijk et al. (2020) for a comprehensive review]. In these discussions, distance is theorized as a multi-dimensional construct with geographic and metaphorical meanings and is maintained to exercise a strong impact on the intensity and patterns of international business activity (Alcácer, Kogut, Thomas, & Yeung, 2017;Berry, Guillen, & Zhou, 2010;Ghemawat, 2001;Johansson & Vahlne, 1977;Shenkar, 2012;Zaheer, Schomaker, & Nachum, 2012).
International business activity, however, traverses both spatial and temporal distances, but this has not been reflected in scholarly attention. While there has been a voluminous body of research on spatial distance, temporal distance has received scant attention (see Chauvin, Choudhury, & Fang, 2021;Gooris, & Peeters, 2014;Yang, Wen, Volk, & Lu, 2022;Zaheer, 1995 for notable exceptions). At best, it has been added as a control variable, and more typically it was ignored altogether. Keyword search of papers published in the Journal of International Business Studies yielded 571 hits for geographic distance and 166 hits for time zone. 1 Berry et al.'s (2010) influential paper lists nine dimensions of distance relevant for MNEs, but temporal distance, conspicuously, is not among them. Theoretical development of temporal distance has thus lagged that of spatial distance, and major theoretical constructs associated with temporal location and distance have been underexplored and are poorly understood.
This attention misallocation is disturbing in an academic field whose raison d'être is the study of the separation of business activity in time and space. It is also inconsistent with the nature of international business activity. The intangible assets that drive this activity are assumed to be transferable over spatial distance at no cost (Dunning & Lundan, 2008). Transferability across temporal distance, in contrast, is mired with challenges, and is costly to execute. The creation and utilization of many of the intangible assets that drive international business require face-to-face human interaction and cannot be separated in time in both the production and the consumption. The shift toward coordination-intensive forms of production among firms located in different time zones has further increased the time sensitivity of international activity (Hummels, & Schaur, 2013;Yang, et al., 2022).
The neglect of temporal distance undermines not only the ability to understand the implications of temporal distance but that of spatial distance as well. Many consequences assumed to spatial distance are in fact a result of temporal ones. The ignorance of temporal distance may inflate the effect of spatial separation because of omitted variables bias. This approach reflects an implicit or explicit assumption that temporal distance has no impact, an assumption that is inconsistent with research that documents the high cost of transfer and communication among entities separated in time (Chauvin, et al., 2020;Hinds et al., 2002;Hummels et al., 2013).
Studies show that when a time-zone measure, or some proxy for its consequences, are added to gravity models of business activity the impact of spatial distance drops significantly, and often turns insignificant (Espinosa, et al., 2012;Portes & Ray, 2005;Stein & Daude, 2007). Bahar (2020) found that the negative impact of spatial distance on knowledge transfer between headquarters and affiliates is significantly weakened as the temporal distance between them diminishes. The effect of one additional hour of time overlap among subunits is equivalent to a reduction of about 200 km of spatial separation between them. These findings suggest that the two distance dimensions are interdependent such that the same spatial distance affects firms differentially across different scales of temporal distance, further accentuating the need to account for temporal distance.
All this mattered less in the pre-COVID-19 era because traveling -a mode of crossing distance that lumps temporal and spatial distances together and obscures many of the differences between themwas the major means of crossing distance (Boeh & Beamish, 2012). Travel restrictions imposed by COVID-19 led to virtualization of economic activity and separation of value creation from physical location to an extent never experienced before (Cote, Estrin, Meyer, & Shapiro, 2020). This revealed the stark differences between spatial and temporal distances, as the virtualization of economic activity rendered spatial distance less relevant, but it has severe limitations in relation to temporal distance. These developments signify major shifts for international business activity and call for rethinking of the role of distance in international business theory, and its impact on the organization of value-creating activities on a global level.
In this paper, we seek to begin filling in this need and offer fresh thinking into the ways by which the increasing virtualization of business activity changes the implications of distance for international business theory and practice. Towards this end, we conceptualize distance as a construct that represents separation in both space and time, which together shape outcomes. Blending insights of global teams theories (Chauvin, et al., 2020;Mell, Jang, & Chai, 2020;Salas, Ramon, & Passmore, 2017), time economics (Bahar, 2020;Stein & Daude, 2007;Zaheer, 2000), and economic geography theories (Peuquet, 1994;Shekhar, & Xiong, 2008), we articulate the distinctive properties of temporal and spatial distances and reason that although at time they move in tandem, they are conceptually distinct (Espinosa, et al., 2012), and affect organizational outcomes differently (Chen & Lin, 2019;Gooris & Peeters, 2014), calling for different theorization so that their distinct consequences can be better understood. We conclude by extending a call for adopting a temporal lens towards international business theory and developing a research agenda around time and temporal distance in international business.
Our contribution assumes considerable importance as the virtualization of economic activity has accelerated the spread of MNE activities over space and time and led to experimentation with novel models for taking advantage of the new ways of organizing value-creating activities (e.g., ''work from anywhere''). Moreover, the choices that MNEs make in re-configuring the spatial and temporal separations of their activities affect not only themselves but economies and societies as well, shaping countries' comparative advantage and global competitiveness (Baldwin, 2019;Brakman, Garretsen, & van Witteloostuijn, 2021;Zaheer, 1995), further enhancing the importance of our contribution.
SPATIAL AND TEMPORAL DISTANCE
PROPERTIES Spatial and temporal distances differ from each other in ways that affect their impact on international business activity in important ways. At the most basic level, these distance dimensions are both continuous and cyclical, but they differ in the scale of their cyclicality, whether around the globe or around the sun. Spatial cyclicality -referred to as Earth's circumference -is the distance around the earth (slightly over 40,000 km when measured around the equator). Temporal cyclicality revolves around the sun, in a patterned 24-h rhythm that repeats itself in a daily cycle (circa diem = ''about one day'') (Shekhar, & Xiong, 2008).
These differences have important implications for business activity that takes place across distance (Pittendrigh, 1993). Temporal cyclicality is aligned with the natural rhythm of humans, whereas spatial cyclicality is not related to it. The natural process of human life evolves in a Circadian rhythm that regulates the sleep-wake cycle and repeats itself every 24 h. A variety of human indicators are affected by time, as is vividly apparent by difficulties of adjustment to changes in time zones (Jehue, Street, & Huizenga, 1993;Lemmer et al., 2002). Managers reported a 50% drop in productivity caused by traveling and adjustment to new time zones upon arrival (Boeh & Beamish, 2012). This rhythm of human beings shapes the consequences of temporal distance for business as well. Zaheer (1995) describes how the human cycle of a day dictates market dynamics in the global foreign exchange industry and obstructs the emergence of a truly global market during a 24-h global trading cycle. No equivalent effect is caused by spatial distance, whose dynamics can be thought of as exogenous to human rhythms.
These differences in the cyclicality of the distance dimensions entail that their impact differs at different scales (Espinosa & Carmel, 2004;Zaheer, Albert, & Zaheer, 1999). The impact of spatial distance increases linearly with an increase in distance, albeit at diminishing returns. The quality and frequency of communication drops significantly at a very small increase in spatial distance, but once it reaches certain levels, an additional increase in the magnitude of spatial distance has minimal additional effect (Allen, 1977;Waber et al., 2014). Traveling costs and time exhibit a more moderate and consistent rate of diminishing returns as distance increases (Boeh & Beamish, 2012). Scale matters a great deal in relation to temporal distance as well, but its impact manifests differently (Peuquet, 1994;Zaheer et al., 1999). The sensitivity of temporal distance to the Circadian circle of human beings implies that its impact on business depends not only on the length of the distance but also on the time of the day in which activity takes place.
Of notable importance for the sake of interaction over distance is the time-overlap among the parties for the exchange, as it determines the feasibility of synchronic communication, a critical determinant of communication quality and effectiveness (Bahar, productivity (Espinosa, et al., 2012;Salas, et al., 2017). Changes of 1 h associated with daylight savings were shown to have a strong impact on the communication among MNE sub-units scattered across different time zones (Chauvin, et al., 2020). Yang, et al. (2022) find that work time overlap between parents-subsidiaries reduces expatriate employment because it enables synchronic online communication to replace physical presence in subsidiaries' foreign countries. These differences between spatial and temporal dimensions imply that their elasticities in relation to each other vary across different scales of distance (Hummels & Schaur, 2013).
Yet another difference between the two distance dimensions is that spatial separation is symmetric, that is, distance (A,B) = distance (B,A) (see Zaheer, et al., 2012 for a nuanced view of this symmetry), but separation by temporal distance is not (Espinosa & Carmel, 2004;Zaheer, 1995). Temporal separation between A and B implies that A's time zone is different from that of B. This means that A and B would be at different points in their respective Circadian cycles at the time of the interaction, with corresponding implications for their alertness and productivity.
In addition, spatial and temporal distances are affected differently by the cardinal direction of movement across distance (Gooris & Peeters, 2014). Temporal distance changes only between East and West, whereas spatial distance changes in all cardinals, whether East/West or North/South. These differences entail that East/West move is affected by both spatial and temporal distances whereas North/South move is subject to impact of spatial distance only. This implies that movement across spatial distance may or may not be associated with change in temporal distance, but temporal distance is always associated with spatial distance (Boeh & Beamish, 2012;Jehue, et al., 1993).
Moreover, the directionality of movement, whether Eastward or Westward, matters in relation to both distance dimensions but for different reasons. The speed of humans' adjustment to different time zones varies considerably by the direction of movement. Travel adjustment Eastward is almost 50% longer than Westward adjustment (Lemmer, et al., 2002;Waterhouse, Reilly, Atkinson, & Edwards, 2007). Kamstra, Kramer, & Levi (2000) found significant differences in the impact of time change on equity returns between the fall and spring seasons, corresponding to movement of daylight-saving Eastward or Southwards. Directionality of move between cardinals affects spatial distance as well, but this effect originates in natural attributes such as land features, e.g., uphill or downhills mountains, or winds aloft, which affect the speed of movement in different directions by land, sea, and air (Peuquet, 1994).
Further, the distance dimensions vary also in terms of the means available to bridge over them. Spatial distance can be crossed via both traveling and virtual (synchronous or asynchronous) interaction, whereas the only way to cross temporal distance is via travel. These two means of crossing distance vary in their effectiveness and are associated with different mixes of costs and benefits. They enable different amounts of human interaction and affect its quality and outcomes (Hinds & Kiesler, 2002), as was apparent during COVID-19 in the vast variations in the impact of travel restrictions and isolation across industries (Côté, et al., 2020).
Last, and by no means least, are differences in the cultural connotations of space and time and their (Levine, 1997;Rooney, 2012), with deep roots in countries' histories and trajectories of economic development (Galor & Ozak, 2016). Different perceptions of time across countries were shown to have a strong impact on collaborative relationships among spatially separated teams (Saunders et al., 2004), as well as on governance choices and their outcomes (Peeters, Dehon, & Garcia-Prieto, 2015). No similar effects appear to exist in relation to space (Devine-Wright, & Clayton, 2010). Table 1 presents a summary of the qualities of spatial and temporal distances and highlights the differences between them.
Spatial and Temporal Distances Combined
Global activity evolves separation in both space and time and is thus subject to the combined effects of spatial and temporal distances, requiring a joint consideration of both distance dimensions (Chen & Lin, 2019). In Figure 1 we offer a parsimonious presentation of varying combinations of the two distance dimensions in relation to selected cities around the world, with London and New York as focal points. Temporal distance is measured by the number of time zones from London and New York (respectively, GMT and EST time zones). Spatial distance is operationalized by kilometer distance and direct flight time from these cities (flight time allows for comparability with temporal distance, as both measures are timebased). The full dataset of the distance measures is presented in ''Appendix''. As Figure 1 shows, temporal and spatial distances relate to each other in different ways, moving in tandem (Quadrants 1 and 4), where they subject firms and countries to the combined effects of separation in time and space, or departing from each other (Quadrants 2 and 3), confronting subjects with challenges of respective dimensions. The quadrants presented in Figure 1 show that the consequences of the same spatial distance vary across scales of temporal distance (the difference between quadrants 2 and 4 and between 1 and 3). Likewise, temporal distance differs in relation to different scales of spatial distance (the differences between quadrants 1 and 2 and 3 and 4) (Hummels & Schaur, 2013). In the following sections, we outline the implications of the differences between the two distance dimensions and their combined configurations for theory and practice.
IMPLICATIONS OF A MODIFIED CONCEPTUALIZATION OF DISTANCE FOR
INTERNATIONAL BUSINESS THEORY Conceptualizing distance as a construct that combines separation in both space and time and recognizing the distinctive properties of separation along these dimensions, carries important implications for international business theory. As an organizing framework in which to present our ideas, we employ Dunning's OLI paradigm, whose cohesive, all-embracing character makes it suitable for this purpose (Dunning & Lundan, 2008).
The OLI paradigm attributes the existence of the MNE and the patterns of its activity to three related factors, namely Ownership, Location, and Internalization advantages. Taken together, these three tenets explain why firms invest overseas and what determines the amount and composition of their international activity. We reason that these factors are modified in important ways by conceptualizing distance as encompassing separation in both space and time. We show that temporal distance affects the three dimensions of the OLI both by accentuating the impact of spatial distance, and in its own distinctive ways.
Ownership Advantages
Ownership advantages (the O of the OLI) describe the advantages that firms possess above those of local firms that enable them to overcome liabilities of foreignness and compete successfully in foreign countries. These advantages arise either from privileged ownership of, or access to, some incomegenerating assets that are transferable within firms across spatial distance, and/or from the ability to coordinate these proprietary assets across countries. Dunning labeled these advantages respectively 'Oa (asset) advantages' and 'Ot (transactional) advantages' (Dunning & Lundan, 2008;Verbeke & Yuan, 2010). Explicit in these conceptualizations is the assumption that both types of assets create advantages for firms over spatial distance.
We posit that temporal distance is also likely to affect these advantages. The ability to exploit the asset-based advantages across countries (Oa) and to organize transactions among subunits separated in space (Ot) are both sensitive to temporal separation (Bahar, 2020;Boeh & Beamish, 2012). The cyclical nature of temporal distance and its punctuations by the Circadian rhythm (Table 1) open opportunities for connecting temporally separated subunits virtually. This creates time-based Oa advantages in which around-the-clock organization of work enables firms to tap into diverse sources of knowledge and expertise, utilize low-cost resources, and reduce turnaround time. In parallel, dispersion of activities across temporally separated subunits enables MNEs to appropriate greater returns from their Oa by leveraging them around the clock (Carmel, 2006;Carmel, Espinosa, & Dubinsky, 2010;Zaheer, 2000).
Temporal distance also affects the Ot advantages. In part, challenges of temporal distance to the organization of work over distance accentuate those documented extensively in relation to spatial separation (Beugelsdijk, et al., 2020;Buckley & Casson, 2020;Ghemawat, 2001), but the mechanisms that drive them differ in important ways. The management of interdependencies among subunits separated in time requires specific transactionrelated capabilities that differ from those associated with the management of work across spatial distance. These capabilities need to address timerelated dynamics in the workplace such as nonlinearities of temporal distance as it is being punctuated by Circadian human rhythms, timezone overlap or lack thereof, and directionality of movement across cardinals (Hinds & Kiesler, 2002) ( Table 1). These capabilities could strengthen existing Ot advantages and be the source of new, timerelated Ot advantages.
Location Advantages
For international activity to take place, firms' ownership advantages must be more profitably exploited when used with factor inputs in host countries than in the home country (Dunning, 1998). Locationally bound resources are tied to the location that gives them rise and access to them requires physical presence in this location. The distribution of these location-specific resources across countries thus shape MNE location choices such that they select countries whose resources enable them to maximize the returns on their ownership advantages (Nielsen, Asmussen, & Weatherall, 2017).
Countries differ in terms of their temporal location in relation to other countries, turning temporal location into a location characteristic that could affect location choices. This impact manifests in a variety of ways, related to temporal proximity to other countries, time overlap with those of other countries, and cardinal location (Table 1). For instance, Mumbai's workday overlaps with countries that together account for 73% of world's GDP, making it 'the time zone champion'. By comparison, New York's workday overlaps with those of countries that account for only 33% of the world's GDP (Segalla, 2010). Greater temporal overlap with other countries opens opportunities for 'temporal brokerages' (Mell, et al., 2020) within the MNE, which bridge subgroups with little or no temporal overlap with each other, similarly for countries' The magnitude of the temporal separation affects communication and control and the feasibility of synchronous communication (Chauvin, et al., 2020). These differences are related also to countries' position along the Circadian rhythm relative to other countries, i.e., in terms of sleep time. Large temporal distance from others makes countries attractive for activities that take advantage of time-zone differences e.g., by organizing work around the clock (Marjit, 2007). Cardinal location in relation to other countries (e.g., between home and host countries) is likewise an important source of countries' temporal advantages, as they determine exposure to spatial distance only (North/ South movement) or to both spatial and temporal distances (West/East movement). These differences correspond to e.g., communication of US firms with Latin America versus Asia, and of European firms with Africa versus Russia, countries at similar spatial distances from the focal country and considerably different temporal distances.
Internalization Advantages
The third tenet of Dunning OLI states that for foreign investment to take place it must be more beneficial for firms to internalize the use of their ownership advantages than to sell or lease their use to a third party (I advantage). Firms' choice of internalizing cross-border operations is set where the marginal benefits of internalizing cross-border transactions are offset by the marginal cost. Spatial distance is recognized as an important determinant of these respective costs and the subsequent choices that firms make (Buckley & Casson, 1976.
Temporal separation is likely to affect these costs as well, in part raising the costs of spatial distance; in other ways exercising separate and different effects, which manifest in both markets and hierarchy, and affect their effectiveness as alternative governance mechanisms.
The cost of organizing value-creating activities hierarchically rises as temporal separation increases, particularly in activities that are intensive in information and require human interaction in real time (Stein & Daude, 2007). Increased temporal distance negatively affects the frequency and quality of communication (Chauvin, et al., 2020;Kiesler & Cummings, 2002), the amount of time it takes to accomplish work, and the quality of the output (Espinosa, et al., 2012;Hinds & Kiesler, 2002). The costs of temporal separation are particularly sensitive to time overlap, because it affects the ability to interact synchronically. Synchronous communication increases the intensity of the communication and its quality and affects the flow of knowledge and the effectiveness of collaborative work (Bahar, 2020;Espinosa, et al., 2012;Salas, et al., 2017). One study finds that a 1-h increase of temporal distance diminished synchronic communication among MNE subunits by more than 10% (but has no effect on asynchronous communication via e-mail) (Chauvin, et al., 2020).
Temporal separation also affects the costs of market transactions, as it impairs the efficiency of communication with third parties and raises the costs of establishment and maintenance of trust. This is a particular impediment in the absence of time overlaps that excludes synchronic communication. The establishment of trust relationships requires human interaction, if not in person at least virtually. This is a particular concern in transactions that are neither market nor hierarchy (Hennart, 1993), where trust substitutes for contracting and monitoring as a coordination mechanism (Alcacer, et al., 2017).
While the cost of transactions rises with temporal distance in relation to both markets and hierarchy, the rise is unlikely to be equal. The balance between these costs determines their respective advantages and is likely to vary across different activities (Buckley et al., 2020).
In Table 2 we present a summary of the impact of temporal distance on the three OLI components, in relation to those that have been theorized traditionally in relation to spatial distance.
IMPLICATIONS FOR PRACTICE
The differences between the temporal and spatial distance dimensions discussed above (Tables 1 and 2) and the varying configurations of spatial and temporal separations outlined in Figure 1 call for corresponding responses in MNEs' organization of activities across distance and in policymaking, as we outline below.
Implications for MNEs
Different configurations of space and time separation are suitable for different activities, reflecting variations in their sensitivity to spatial and temporal differences (Hinds & Kiesler, 2002). For instance, differences in simultaneity in the production (e.g., joint product development activities) and delivery (e.g., synchronized execution with consumers) affects the appropriate spatial and temporal configuration across different industries (Gooris & Peeters, 2014). Large temporal differences that allow for the creation of Oa temporal advantages by using around-the-clock production offer considerable potential advantage in industries in which value creation activities can be separated in time, e.g., back-office support, software development, and the likes. In contrast, Oa advantages that originate in the exploitation of temporal differences are of less value in most manufacturing industries. Such separation could even be debilitating for these industries because it challenges the potential of Ot advantages in the form of communication and coordination among subunits engaged in joint production. Such separation across time also increases the cost of transactions among MNE subunits and affects the benefits of markets versus hierarchy in serving foreign markets (Stein & Daude, 2007;Tomasik, 2013).
Likewise, different types of investment favor different configurations of separation in space and time. For horizontal, market-seeking investment, in which affiliates duplicate knowledge and business models developed at headquarters across countries, temporal proximity to headquarters and time overlaps that allow for synchronic communication could strengthen the Ot advantages. This type of investment often requires considerable human interaction with headquarters in order to administer effectively the transfer of knowledge and resources needed for affiliates to replicate HQs' knowledge effectively in foreign countries (Bahar, 2020). In parallel, it is less (not) sensitive to spatial distance because the transfer of material goods among subunits in such investment is minimal. Temporal proximity might matter less for vertical investment, characterized by fragmented organization of production, where spatial proximity to other subunits engaged in production of complementary output could accelerate the overall speed of production and reduce the costs of transactions, enhancing the benefits of internalization.
Further, temporal separation, particularly when it is large and excludes synchronic interaction, is debilitating for the creation of Oa in knowledgeintensive industries where value creation typically requires considerable amount of real-time interaction (Carmel, 2006;Chauvin, et al., 2020;Espinosa & Carmel, 2004). In contrast, temporal separation matters less for value-added activities in which transfers are based to a greater degree on codified knowledge that can be transferred without direct human interaction (e.g., a-synchronically), rendering temporal distance less important. Bahar (2020) finds evidence that affiliates with large overlap in working hours with their headquarters are more likely to be active in knowledge-intensive industries.
Variations in the impact of temporal and spatial separation on Oa and Ot exist also in relation to different modes of international operation. Temporal separation matters for both trade and FDI, but its impact on trade is considerably smaller because the need for real-time communication is smaller among trading partners (Stein & Daude, 2007;Tomasik, 2013). In parallel, for many types of FDI spatial distance matters less, favoring different configurations of space and time separation for trade and FDI.
Temporal location is also a part of countries' location (dis)advantages, with implications for MNE location choices and the opportunities they offer for the creation of L-advantages. Countries' temporal location determines around-the-clock access, thus raising the opportunities for taking advantage of immobile locational resources like skills and knowledge. This is notably apparent in relation to labor, where temporal location shaped the patterns of supply and demand for labor across (Brakman, et al., 2021). Similar effects are apparent in relation to suppliers and local partners, with implications for local specialization and creation of global production networks by MNEs (Acemoglu & Restrepo, 2018).
Implications for Policymakers
Countries' ability to integrate in global production networks is determined by the combined effect of their spatial and temporal location in relation to other countries (Figure 1), calling for policies that are responsive to specific configurations of spatial and temporal location. Historical accounts show that some policymakers have adopted an active approach to the management of time throughout history, with a view towards amending their temporal location in the service of global integration (Rooney, 2020). Setting the same time zone to a trading partner had been behind Argentina's flip-flopping its clock during most of the 20th century between UTC-4, where its geographic location places it, and UTC-2 that its trading relationships favor. Since 1993, it has been on UTC-3. Trading partners have also sought to influence each other's time zones, as American traders did in the 19th century when they persuaded Samoans to align their island time with that of nearby US-controlled American Samoa to make trading easier. More than a century elapsed until Samoa shifted its time to its locational time zone (Calabi, 2013;Wong, 2015).
Investment promotion policies should be extended to include temporal characteristics, as supplement to the spatial characteristics that were included in these policies throughout history (Henrikson, 2002;Nachum, Livanis, & Hong, 2021;Ward, 2005). Brazil's branding itself as a location for collaboration-intensive software development because its time zone overlaps with primary partners in North America is a case in point. Time zone, as it enables simultaneous collaboration, has occupied central place in Brazil's attempts to establish itself as a location for IT software and services (Prikladnicki, & Carmel, 2013). India, and to some extent the Philippines, in contrast have branded themselves as desired locations for investments based on temporal differences that allow them to take advantage of around-the-clock work (Carmel, 2006).
DISCUSSION AND CONCLUSION
In this paper we contribute to the development of theories of distance in international business by conceptualizing distance as a construct that combines spatial and temporal dimensions (Berry et al., 2010;Ghemawat 2011;Alcácer, et al., 2017). The implications of spatial distance for resource transfer and communication among MNE sub-units have long been theorized as a prime determinant of the scope of international activity (Buckley & Casson, 1976Dunning & Lundan, 2008). By adding the implications of temporal distance to these theorizations we offer a more coherent framework in which to theorize the patterns and intensity of international business activity and draw implications for practice.
In doing this, we contribute to a small but growing set of studies that has started to articulate the implications of temporal distance for MNEs (e.g., Bahar, 2020;Chauvin, et al., 2021;Gooris et al., 2014;Mell, et al., 2020;Yang, et al., 2022;Zaheer, 1995). Our contribution bridges the literature on spatial distance in international business with that of teamwork and the organization of work over temporal distance. Specifically, we contribute to the understanding of the relationships between the two distance dimensions and their combined and separated effects as they shape the consequences of distance for international business.
These contributions are of heightened contemporary significance. The increased virtualization of business activity entails a growing need for better understanding of the implications of spatial and temporal separations, as they affect firms, industries, and activities. We hope that the insights we offered in this paper regarding the distinct qualities of these distance dimensions and the ways they relate to each other would serve to support the development of a research agenda in which international business is treated as an activity that takes place across both spatial and temporal distances.
We also hope that this insight would guide firms as they construct the shape of their activities in this changing reality. The experience of prolonged lockdowns and traveling bans imposed by COVID-19 had equipped us with renewed insights into these issues and inspired our conceptualization of distance and its implications in this paper. We hope that these insights would feed into MNEs' reevaluation of the appropriate configurations of spatial and temporal distances for international activities (McKinsey, 2021).
Limitations and Future Research
Our study opens a large scope for future research to develop the ideas we advanced in this paper and address the limitations of our work. Perhaps the most immediate task for future research is to supplement our conceptual work by empirical testing. Our theory generates testable propositions regarding the impact of distance configurations on the patterns and intensity of international business activity. We also offer some tools to operationalize major theoretical constructs that could serve this research (Appendix A and Figure 1).
Additional work by future research is warranted also with reference to the relationships between temporal and spatial distances. Our discussions of these relationships, as summarized in Figure 1, presented the two distance dimensions as dichotomous and hid the richness of the nuances of the relationships along these continuous measures. Future research may address the limitations of this parsimonious approach and deepen the understanding of the way by which the scales of both distance dimensions affect outcomes (Bahar, 2020;Zaheer, et al., 1999). This would also deepen the understanding of the ways by which configurations of activities in space and time might serve as a source of MNE differentiation and competitive advantage.
Further, our theory focused predominantly on the impact of distance dimensions on internal MNE organization of work. Future research may supplement our discussions by extending them to the intra-firm context and examine the ways by which different distance configurations affect MNE relationships with third parties, via arm's-length or relational relationships (Chen et al., 2019).
There is also a need for an on-going evaluation of the relationships between spatial and temporal distances, and the way they affect international business activity, as technology continuously modifies the cost of crossing both spatial and temporal distances. Means of crossing distance have changed considerably throughout history because of technological developments and will continue to evolve (Antras, Redding, & Rossi-Hansberg, 2020;Baldwin, 2019). These developments affect the costs and benefits of the two distance dimensions, with important implications for the issues we raised in this paper.
ACKNOWLEDGEMENTS
The ideas underlying this paper were the subject of two Fellows Cafés hosted in the 2020 and 2021 AIB annual meetings entitled 'Time Zone Differences: A Challenge for International Business that Zoom Does Not Solve.' We thank the attendees for thoughtprovoking discussions and excellent comments.
OPEN ACCESS
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/ by/4.0/. | 7,970.2 | 2022-12-31T00:00:00.000 | [
"Business",
"Economics"
] |
Investigations of the Deuterium Permeability of As-Deposited and Oxidized Ti2AlN Coatings
Aluminum containing Mn+1AXn (MAX) phase materials have attracted increasing attention due to their corrosion resistance, a pronounced self-healing effect and promising diffusion barrier properties for hydrogen. We synthesized Ti2AlN coatings on ferritic steel substrates by physical vapor deposition of alternating Ti- and AlN-layers followed by thermal annealing. The microstructure developed a {0001}-texture with platelet-like shaped grains. To investigate the oxidation behavior, the samples were exposed to a temperature of 700 °C in a muffle furnace. Raman spectroscopy and X-ray photoelectron spectroscopy (XPS) depth profiles revealed the formation of oxide scales, which consisted mainly of dense and stable α-Al2O3. The oxide layer thickness increased with a time dependency of ~t1/4. Electron probe micro analysis (EPMA) scans revealed a diffusion of Al from the coating into the substrate. Steel membranes with as-deposited Ti2AlN and partially oxidized Ti2AlN coatings were used for permeation tests. The permeation of deuterium from the gas phase was measured in an ultra-high vacuum (UHV) permeation cell by mass spectrometry at temperatures of 30–400 °C. We obtained a permeation reduction factor (PRF) of 45 for a pure Ti2AlN coating and a PRF of ~3700 for the oxidized sample. Thus, protective coatings, which prevent hydrogen-induced corrosion, can be achieved by the proper design of Ti2AlN coatings with suitable oxide scale thicknesses.
Introduction
The increasing number of applications in which hydrogen is being used as a storage medium in energy conversion technologies demands the consideration of new construction materials, or at least a profound surface conditioning of established materials to prevent, e.g., hydrogen diffusion induced embrittlement or other forms of corrosion, especially the development of so-called "white etching cracks" [1]. One route for corrosion protection is the development and application of temperature-resistant coatings with excellent barrier properties for hydrogen. Recently performed studies indicate that MAX phase materials might fulfill these requirements [2][3][4][5][6]. The general formula, M n+1 AX n , (short MAX) describes a family of materials consisting of an early transition metal (M), mostly Materials 2020, 13 a group 13 or 14 element (A) and nitrogen and/or carbon (X) with the stoichiometry of n = 1,2,3 [7]. The MAX phases crystallize in a hexagonal lattice within the space group D46h (P63/mmc) in which the octahedral M n+1 X n layers are separated by atomic monolayers of pure A-atoms. MAX phase materials are known to have a good oxidation resistance [3,8,9], a high damage tolerance as well as a high electrical and thermal conductivity [10]. The good oxidation resistance of Al containing MAX phases usually stems from the formation of dense and thermodynamically stable thermal grown oxides (TGO) consisting of α-Al 2 O 3 on the coatings surface at relatively low temperatures of 600-700 • C. For comparison, the direct physical vapor deposition (PVD) of an α-Al 2 O 3 in an industrial scale deposition process usually requires temperatures above 1000 • C [11]. Lower deposition temperatures of 500-600 • C have also been observed but at the expense of a brittle fracture behavior [12,13]. A further advantage of α-Al 2 O 3 oxide scales thermally grown on MAX phase coatings is the well-known self-healing effect whereby small defects or cracks in the coating, which might serve as diffusion pathways, are blocked by oxide growth [2]. For this purpose, the oxidation kinetics of the TGO has to allow for a quick healing and oxidation of the surface, but has to prevent fast oxygen diffusion to the coating-substrate interface. The oxidation kinetics of Ti 2 AlC at 1200 • C were modelled by G.M. Song et al. in [3]. This model contains the growth of oxide grains and assumes that the diffusion paths along the grain boundaries increase with time. This results in a time dependency for the increase in the thickness of the oxide scale d Ox (t) of: d Ox (t) = 2 k n × t 1/4 (1) The growth factor k n = ΩD GB ∆C 3δ d 0 contains a constant prefactor Ω, the diffusion coefficient for oxygen along the grain boundaries D GB , the size of the grain boundaries δ, the initial lateral grain size d 0 and the gradient in the oxygen concentration ∆C.
This model of α-Al 2 O 3 -formation, as well as the structural properties of MAX phases, i.e., the sequence of dense MX-layers, motivated the present investigation on their barrier properties against hydrogen diffusion.
Although little information about the diffusion of hydrogen in MAX phases exists, similarly composed carbides or nitrides of early transition metals are already used as diffusion barriers for hydrogen [4,14]. It is expected that the anisotropic structure of MAX phases will induce a directional anisotropy of the hydrogen diffusion. In [5], F. Colonna and C. Elsässer presented the findings of an atomistic simulation of diffusion processes in Ti 2 AlN using density-functional theory (DFT). Therein, interstitial diffusion paths of hydrogen and oxygen were examined. It was found that, for hydrogen, the migration perpendicular to the basal planes has a maximum barrier of~3 eV, whereas the migration barrier parallel to the basal plane is one order of magnitude lower. The high migration barrier parallel to the c axis was explained by the presence of the Ti 2 N double layer, where the interstitial octahedral sites of Al 3 Ti 3 are already occupied by nitrogen atoms.
An experimental study on the hydrogen barrier properties of MAX phase coatings was presented by C. Tang et al. in [6]. Therein, ZrY-4 alloy cylinders were coated with Ti 2 AlC and Cr 2 AlC by a multilayer deposition followed by a subsequent annealing step. This process led to a {0001}-textured polycrystalline growth which could also be detected in [15] for Ti 2 AlN. After loading the specimens in an Ar+H 2 atmosphere the cylinders were investigated by neutron radiography. It could be shown that a 5 µm thick Ti 2 AlC and Cr 2 AlC reduced the penetration of hydrogen under the detection limit.
To evaluate coatings in terms of their capability to reduce the hydrogen permeation a permeation reduction factor (PRF) can be calculated using the mass specific ion current j: [17] as hydrogen permeation barrier. Both coatings tend to form a dense crystalline structure, which is capable of effectively reducing the hydrogen permeation up to a PRF(Al-Cr-O) = 3500 and PRF(Er 2 O 3 ) = 800.
Deposition of Ti 2 AlN
The Ti 2 AlN MAX coatings were deposited on AISI 430 ferritic stainless-steel substrates (Fe81/Cr17/Mn/Si/C/S/P), which were polished (1400 grit) and cleaned in acetone and isopropanol using an ultrasonic bath prior to deposition. A custom build industrial sized magnetron sputter chamber SV400/S3 (FHR Anlagenbau GmbH, Ottendorf-Okrilla, Germany) equipped with rectangular titanium (purity 99.8%) and aluminum targets (99.999%) was utilized. To obtain a pronounced {0001}-texture with a parallel orientation of the basal planes and the substrate surface, we alternately deposited 150 single layers of Ti and AlN on the substrate, beginning with Ti. During the radio frequency-sputtering of the aluminum target, nitrogen (purity 99.9999%) was introduced in the chamber. A final subsequent annealing at 700 • C for 1 h in vacuum led to the formation of textured Ti 2 AlN MAX phase coatings. Details of the deposition process are described elsewhere [15]. The coatings thickness was in the 2 µm-3 µm range.
Oxidation Procedure and Analysis
To investigate the oxidation kinetics of the Ti 2 AlN coatings, comparable samples originating from the same batch were oxidized at 700 • C for 5 h, 10 h, 20 h and 100 h in a muffle furnace (Nabertherm GmbH, Lilienthal, Germany) in air. The samples were afterwards removed from the furnace and cooled in air. The crystallographic orientation and phase composition of oxidized and non-oxidized coatings were investigated by X-ray diffractometry (XRD) using a PANalytical Empyrean in parallel beam geometry (Empyrean, PANalytical, Almelo, The Netherlands) and Cu Kα 1 radiation with a 2-bounce Ge 220 monochromator. The samples were irradiated with primary X-rays using a line focus. The diffracted X-rays were detected using a PIXel-3D detector with a 1 mm slit for the phase analysis.
A surface sensitive phase analysis was performed using a confocal Raman spectrometer (Model inVia, Renishaw plc., Gloucestershire, United Kingdom) in backscatter geometry. The excitation wavelength λ Nd:YAG = 532 nm was used to determine possible changes in the Ti 2 AlN phase upon thermal treatment whereas the wavelength λ HeNe = 633 nm proofed to be suitable for exciting fluorescence bands in the thermally grown AlO x phase. In all measurements, a 100-fold objective focused the laser on the surface to a spot diameter of about 2 µm. XPS depth profiles were recorded with a PHI 5000 VersaProbe II (Ulvac-PHI, Inc., Chigasaki, Japan) equipped with an argon sputter option using Al Kα-rays. For analyzing the coarse elemental distribution close to the interface of coating and substrate a metallographic cross-section was prepared after an electrochemical deposition of Ni for protective purposes. The electron probe micro analysis (EPMA) was performed utilizing a JXA-8100 (Jeol, Akishima, Japan). The measurements were performed with an acceleration voltage of 15 kV and a dwell time of 30 ms.
Deuterium Permeation Setup
To investigate the diffusion of deuterium from the gas phase through coated and oxidized membranes, a permeation setup was developed following the works of C. Frank et al. [18], J. Gorman et al. [19] and D. Levchuk et al. [20]. The test rig consisted of two chambers separated by a thin steel membrane (see Figure 1). The high pressure side is filled with the diffusional species or the purging gas, the low pressure side is evacuated by a turbomolecular pump and an ion getter pump down to pressures of~10 -6 Pa. The latter is also equipped with a quadrupole mass spectrometer (Model PrismaPlus TM QMG 220, Pfeiffer Vacuum Technology AG, Aßlar, Germany) to determine the gas composition as well as the mass and time dependent ion current, which is detected by a secondary Materials 2020, 13, 2085 4 of 9 electron multiplier. With infrared transmissible windows on both sides, the membrane was heated by a focused halogen radiation heater and the temperature as well as the temperature distribution was recorded by a heat sensitive camera. The membranes, illustrated in Figure 1b, were water jet cut (Ø = 30 mm) from a 0.2 mm thick steel foil (AISI 430) and coated as described before. The uncoated back sides were corundum blasted to increase the absorption of infrared radiation. The membrane was mounted with conical copper gaskets with the coating facing the high pressure side. After a minimum base pressure of 1 × 10 −5 Pa was reached, the measurement was started. To investigate the hydrogen barrier properties, the isotope deuterium was employed in order to avoid interpretation ambiguities due to contaminations with residual gases or water molecules. The permeation measurements were performed close to thermodynamic equilibrium. First, deuterium was injected on the atmospheric pressure side. Then the membrane temperature was set to a maximum and was reduced stepwise when a constant ion current was reached. The ion current of the atomic mass m(D2) = 4 was recorded. The permeation reduction factor was calculated by (2) using the steady state values of the ion currents of a non-coated sample as a reference.
Oxidation
To investigate the influence of the oxygen exposure at high temperatures on the phase composition, XRD diffractograms were recorded for different exposure times and compared to the pristine sample. In To investigate the hydrogen barrier properties, the isotope deuterium was employed in order to avoid interpretation ambiguities due to contaminations with residual gases or water molecules. The permeation measurements were performed close to thermodynamic equilibrium. First, deuterium was injected on the atmospheric pressure side. Then the membrane temperature was set to a maximum and was reduced stepwise when a constant ion current was reached. The ion current of the atomic mass m(D 2 ) = 4 was recorded. The permeation reduction factor was calculated by (2) using the steady state values of the ion currents of a non-coated sample as a reference.
Oxidation
To investigate the influence of the oxygen exposure at high temperatures on the phase composition, XRD diffractograms were recorded for different exposure times and compared to the pristine sample. In Figure 2a, the diffractogram of the as-synthesized coating reveals an almost phase pure Ti 2 AlN coating having a strong {0001}-texture. The peak at 42.5 • is attributed to the (200) lattice plane of TiN. The Fe-bcc peaks at 44.6 • and 65.0 • are assigned to the steel substrate. Diffractograms of the oxidized samples are depicted in Figure 2 with an offset for better visibility.
These phase compositions appear almost unaltered upon thermal exposure. Only in the 2Θ-region between 42 • -43 • a slight change in the peak position is visible. This region is depicted in detail in Figure 2b. Due to broad peak widths and weak angle dependent interferences, the signals from TiN and α-Al 2 O 3 cannot be clearly distinguished. Further ambiguities arise due to the small TGO layer thickness and its possibly amorphous structure. Hence, further surface sensitive Raman analysis was performed (see Figure 3). The Raman spectra of the coatings in Figure 3a still feature the characteristic Raman peaks for Ti 2 AlN despite the oxidized surface, though an increase in the background is detected. The broad background and the peak between 500 cm −1 -600 cm −1 might stem from the formation of surface oxides and/or oxycarbides [21] as well as from the formation of TiN close to the surface due to Al depletion [22]. Titanium oxides like anatase and rutile, which were reported in [23] after the oxidation of Ti 2 AlN coatings at 750 • C, are not detected. performed (see ). The Raman spectra of the coatings in a still feature the characteristic Raman peaks for Ti2AlN despite the oxidized surface, though an increase in the background is detected. The broad background and the peak between 500 cm −1 -600 cm −1 might stem from the formation of surface oxides and/or oxycarbides [21] as well as from the formation of TiN close to the surface due to Al depletion [22]. Titanium oxides like anatase and rutile, which were reported in [23] after the oxidation of Ti2AlN coatings at 750°C, are not detected.
The fluorescence spectra in The numerical fit of the values for d Ox according to (1) is plotted in Figure 5. The errors of the oxide thicknesses were estimated to 10 nm resulting from a temporal variation in the sputter rate during XPS measurements. The growth factor of the TGO was calculated to k n = 402 ± 17 mol m×s with a quality factor of R 2 = 0.9794. The quality of the fit argues for the suitability of the mathematical description by (1 for the oxidation kinetics. However, no conclusion can be drawn so far as to whether the O or the Al diffuses along the grain boundaries to the oxidizing interface according to the above described model of G.M. Song et al. The spectra of an uncoated ferritic steel substrate after 100h at 700 • C in Figure 4b revealed the formation of a TGO consisting of of Cr-, Mn-and Fe-oxides. The thickness of the TGO was calculated to 540 nm, which compares to approximately 100 nm in the case of a coated substrate. The EPMA images of the sample oxidized for 100 h at 700°C depicted in Error! Reference source not found. represent the elemental distributions of Ti (b), Al (c), N (d), O (e), Ni (f) and Fe (g), where the colors indicate the normalized elemental concentration. The measured distribution of Ti, Al and N across the coating thickness features a depletion in Ti and Al at the interface of Ti2AlN/TGO and in the subsurface region. Accordingly, the substrate is locally enriched by Al and N and the formation of precipitates perpendicular to the surface is visible. In such areas, only a minor Fe-concentration is measured, as the microprobe signal is always to normalized 100% for all elements. The inward diffusion of Al and N is accompanying the outward diffusion of Fe into the coating according to the Fe elemental distribution map. Besides the thin oxide scale, which formed on top of the MAX phase The EPMA images of the sample oxidized for 100 h at 700 • C depicted in Figure 6 represent the elemental distributions of Ti (b), Al (c), N (d), O (e), Ni (f) and Fe (g), where the colors indicate the normalized elemental concentration. The measured distribution of Ti, Al and N across the coating thickness features a depletion in Ti and Al at the interface of Ti 2 AlN/TGO and in the subsurface region. Accordingly, the substrate is locally enriched by Al and N and the formation of precipitates perpendicular to the surface is visible. In such areas, only a minor Fe-concentration is measured, as the microprobe signal is always to normalized 100% for all elements. The inward diffusion of Al and N is accompanying the outward diffusion of Fe into the coating according to the Fe elemental distribution map. Besides the thin oxide scale, which formed on top of the MAX phase coating, oxygen can be detected within the Ni-plating. This is caused by the formation of a longitudinal crack within the Ni-plating during the preparation of the cross-section.
The strong interdiffusion of the weakly bound A-element of MAX phases with the substrate is known to be a crucial aspect, when it comes to the chemical stability in high temperature applications [23]. Therefore, the interdiffusion should be suppressed by applying additional barrier films against Al diffusion between the substrate and coating. Further loss of the A-element also occurs during oxidation and annealing in vacuum due to Al 2 O 3 formation and evaporation. In [27], Zhang et coating, oxygen can be detected within the Ni-plating. This is caused by the formation of a longitudinal crack within the Ni-plating during the preparation of the cross-section. The strong interdiffusion of the weakly bound A-element of MAX phases with the substrate is known to be a crucial aspect, when it comes to the chemical stability in high temperature applications [23]. Therefore, the interdiffusion should be suppressed by applying additional barrier films against Al diffusion between the substrate and coating. Further loss of the A-element also occurs during oxidation and annealing in vacuum due to Al2O3 formation and evaporation. In [27], Zhang et al. calculated that the Ti2AlN MAX phase lattice structure is capable of accommodating Al vacancies down to a Ti2Al0.75N stoichiometry.
Hydrogen Permeation
The D2 ion current was measured by secondary ion mass spectrometry using the setup illustrated in Error! Reference source not found.. The influence of the coatings on the permeation was investigated in a state close to the thermodynamic equilibrium. Three different membranes were investigated: the uncoated substrate material (substrate), the substrate coated with 2.7 µ m of Ti2AlN (substrate+Ti2AlN) and the substrate coated with 2.7 µ m of Ti2AlN with a subsequent oxidation for 20 h at 700°C (substrate+Ti2AlN+TGO). In Error! Reference source not found., the D2 ion currents are plotted in the temperature range from about 50°C to 400°C. Three measuring cycles were performed for each membrane. The results of the different cycles are denoted by the different symbols in the graph (square, circle and triangle). The small variance in the data points confirms the reproducibility of the permeation induced ion current.
Hydrogen Permeation
The D 2 ion current was measured by secondary ion mass spectrometry using the setup illustrated in Figure 1. The influence of the coatings on the permeation was investigated in a state close to the thermodynamic equilibrium. Three different membranes were investigated: the uncoated substrate material (substrate), the substrate coated with 2.7 µm of Ti 2 AlN (substrate+Ti 2 AlN) and the substrate coated with 2.7 µm of Ti 2 AlN with a subsequent oxidation for 20 h at 700 • C (substrate+Ti 2 AlN+TGO). In Figure 7, the D 2 ion currents are plotted in the temperature range from about 50 • C to 400 • C. Three measuring cycles were performed for each membrane. The results of the different cycles are denoted by the different symbols in the graph (square, circle and triangle). The small variance in the data points confirms the reproducibility of the permeation induced ion current. The diffusion through all three membranes follows an Arrhenius type behavior, which confirms the assumption of a diffusion-controlled permeation. The deposition of a 2.7 µ m thick Ti2AlN coating already reduces the permeation of deuterium. The PRF at 300°C is calculated to a factor of 45. As optical investigations on the coating after the measurements still revealed some minor cracks in the coating, the PRF might still be lower for a defect-free MAX-phase coating. However, the formation of the oxide scale leads to a further, significant reduction of the permeation. With an oxidation of the coated steel membrane for 20 h at 700°C, the formation of TGO of 80 nm thickness is expected, compare Error! Reference source not found.. With respect to the uncoated steel The diffusion through all three membranes follows an Arrhenius type behavior, which confirms the assumption of a diffusion-controlled permeation. The deposition of a 2.7 µm thick Ti 2 AlN coating already reduces the permeation of deuterium. The PRF at 300 • C is calculated to a factor of 45. As optical investigations on the coating after the measurements still revealed some minor cracks in the coating, the PRF might still be lower for a defect-free MAX-phase coating. However, the formation of the oxide scale leads to a further, significant reduction of the permeation. With an oxidation of the coated steel membrane for 20 h at 700 • C, the formation of TGO of 80 nm thickness is expected, compare Figure 5. With respect to the uncoated steel membrane, a PRF of about 3700 was achieved. This reduction can be explained by the low solubility of hydrogen in the α-Al 2 O 3 phase as well as by the potential healing of small defects in Ti 2 AlN, which blocks alternative migration paths with low energy barriers. The reduction of three orders of magnitude strongly supports the initially assumed suitability of Ti 2 AlN coatings as high temperature hydrogen diffusion barriers.
Conclusions and Outlook
Ti 2 AlN coatings were synthesized on ferritic steel samples by a repeated deposition of Ti/AlN double layers and a subsequent annealing in vacuum. The oxidation experiments at 700 • C in air revealed the formation of a thin TGO at the sample surface consisting mainly of α-Al 2 O 3 . By analyzing the thickness of the TGO, the kinetic confirms the findings of G.M. Song et al., which are described by a growth in thickness with a time dependency of~t 1/4 . Thereby, a thin protective oxide is quickly formed after exposure to air but further growth is strongly hindered by a slow diffusion of migrating particles through the dense oxide. It was shown that the TGO not only serves as an effective protective layer against further oxidation, but also serves as a diffusion barrier against hydrogen. Whereas a 2.7 µm thin Ti 2 AlN coating reduces the permeation of deuterium to a factor of 45, the formation of an α-Al 2 O 3 scale further reduces the permeation within three orders of magnitude. The healing of coating defects like pores and cracks at elevated temperatures upon oxidation is seen as an additional advantage of thermally grown diffusion barriers in comparison to directly deposited barrier coatings.
Further investigations need to focus on the interdiffusion process at the interface of coating and substrate in order to reduce the loss of the Al, which is required for the formation of α-Al 2 O 3. Finding an optimum thickness of the TGO, which significantly reduces the hydrogen permeation and at the same time exhibits a sufficient thermal and mechanical stability is the crucial task for the utilization of Ti 2 AlN as protective coatings in industrial applications.
In summary, Al containing MAX phase coatings, which tend to form a dense α-Al 2 O 3 on the surface upon oxidation, seems to be effective protective coatings in high temperature applications, where oxygen and hydrogen corrode the substrate material. Funding: Financial support by the Baden-Württemberg-Stiftung gGmbH in the context of "CleanTech" (project CT-6 "LamiMat") is gratefully acknowledged.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,698 | 2020-05-01T00:00:00.000 | [
"Materials Science"
] |
Temperature induced shifts of Yu-Shiba-Rusinov resonances in nanowire-based hybrid quantum dots
The strong coupling of a superconductor to a spinful quantum dot results in Yu – Shiba – Rusinov discrete subgap excitations. In isolation and at zero temperature, the excitations are sharp resonances. In transport experiments, however, they show as broad differential conductance peaks. Here we obtain the lineshape of the peaks and their temperature dependence in superconductor – quantum dot – metal nanowire-based devices. Unexpectedly, we fi nd that the peaks shift in energy with temperature, with the shift magnitude and sign depending on ground state parity and bias voltage. Additionally, we empirically fi nd a power-law trend of the peak area versus temperature. These observations are not explained by current models
In a quantum dot-superconductor system, the exchange interaction of an unpaired, Coulomb-blockaded electron in the quantum dot with quasiparticles in the superconductor detaches discrete excitations from the edge of the superconducting gap 1 , as first explained by Yu, Shiba and Rusinov (YSR) for classical spins [2][3][4] .When the coupling of the quantum dot to the superconductor is increased, the Kondo temperature, T K , rises above the superconducting gap, ∆, prompting a doublet→singlet ground state transition marked by zero excitation energy [5][6][7] .While the lineshape and temperature dependence of the normal-state spin-1/2 Kondo effect have been thoroughly characterized [8][9][10][11] , its YSR superconducting analog is yet to be subjected to the same degree of scrutiny.
At finite temperature, the spectral weight of YSR excitations is characterized by an approximately Gaussian lineshape 12,13 .In a realistic setup, in which the intrinsic superconductor-impurity system is probed by a scanning tunnelling tip [14][15][16][17][18][19] or a metallic contact 6,[20][21][22][23] , the excitations are measured as peaks in the differential conductance 24 , and various mechanisms may obscure their intrinsic lineshape.On one hand, the peaks can be dressed with a Lorentzian form in the presence of a relaxation channel for quasiparticles, which can be provided, for example, by a soft superconducting gap; i.e., a pseudogap populated by quasiparticle density of states up to the Fermi level 25 .On the other hand, as by-product of the metallic lead, the normal-state spin-1/2 Kondo effect can emerge and distort the peak lineshape when T < T N K , where the superscript N here is used to distinguish the Kondo temperature of the normal lead from T K , the one of the superconducting lead, and T is the temperature 26 .In addition, photon assisted tunnelling can broaden the superconducting density of states 27,28 , though this issue may be solved by increasing the capacitance of the junction 28 .
In planar semiconductor/superconductor devices, in which the gate tunability of the semiconductor is employed to define a quantum dot in close proximity to the superconductor 29 , a deteriorated interface between the superconductor and the semiconductor has been related to a soft superconducting gap [30][31][32][33][34] .Earlier measurements of the temperature dependence of YSR excitations on soft-gapped devices reported no significant effects at k B T ∆ [35][36][37] .However, the use of a superconducting lead in place of a normal one led to nonequilibrium features at high temperatures [35][36][37] .
The interface improvement gained by the in-situ deposition of Al on InAs nanowires yields a hard gap -i.e., a gap devoid of quasiparticle density of states-in tunnel spectroscopy 31,38 .Using these nanowires, we define S-QD-N devices by either 1) etching Al 38 or 2) shadowing in-situ 32,39 to obtain a bare semiconductor channel.The devices are shown to have a hard gap, with ∆ nearly temperature-independent in the temperature range explored.At temperatures significantly smaller than ∆, we observe a ground-state and bias-voltage dependent shift of the YSR subgap excitations, in apparent contradiction to recent calculations developed for the simpler S-QD system 12,13 .The shift occurs irrespective of the conductance of the YSR peaks, implying a negligible role of the normal lead in this effect, and excluding a possibly lurking Kondo effect.
Results
A sketch of the system in consideration is shown in Fig. 1a.From left to right, we show a normal metal lead with a Fermi-Dirac distribution at the electron temperature T e , separated from a spinful quantum dot level by a tunnel barrier of coupling Γ N .The exchange interaction of the spin 1/2 with virtually excited quasiparticles in the superconductor via the 24 , and agrees with the measured gap singularities (c.f.Fig. 8a).(d) Colormap of YSR peaks vs. plunger gate voltage, measured at V N = −6.82V, V S1 = −6.56V, V bg = −20 V.The color scale has been saturated to highlight the subgap features.barrier of coupling Γ S produces YSR δ resonances inside the gap, while the finite but low temperature leads to additional interaction with a small population of thermally fluctuated quasiparticles and produces broadened peaks at the δ -peak position 12 .In our setup, the dilution refrigerator temperature T is lower than T e (at base, T = 20 − 30 mK while T e ≈ 80 mK), as it is typically the case 40,41 .A doped-Si substrate backgate increases the lead capacitance (C ≈ 10 pF) of our devices, which has been shown to reduce environmentallyassisted tunnelling 28 .The backgate, V bg , is also used as an additional tuning knob of the quantum dots.Al is covering three facets of the nanowire, and Au is used as contact to the bare facets.In Methods, we show details of device fabrication and evidence of hard gap in our devices.The differential conductance, dI/dV sd , of the devices is measured with a lock-in amplifier, where V sd is the source-drain bias voltage.
We first focus on the device in which Al was etched, shown in Fig. 1b.Bottom gate V N controls the coupling of the QD to Au, while bottom and side gates V S1 and V S2 control its coupling to Al. V S2 was kept at -5.5 V throughout the experiment.Bottom gate V P acts as QD plunger gate, controlling the charge occupation.
In our setup, YSR dI/dV sd peaks present in V sd − dI/dV sd traces can be fitted to the sum of two Gaussian lineshapes over a range of gate voltages and temperatures.This allows us to extract values for the position, height, and full-width at half-maximum (FWHM) of the peaks against these variables.Figure 1c shows a trace exemplifying the fits in comparison to the sum of two Lorentzians.The tails of the Lorentzians drop too slowly to account for the data around zero bias, an effect observed for all the fitted data.
Figure 1d shows a map of subgap dI/dV sd as function of V sd and plunger gate voltage, V P .The two small loops identified by X and Y correspond to YSR doublet→singlet excitations.We independently corroborate their doublet ground state nature through their evolution in an external magnetic field 20,21 (see Methods).The charging energies corresponding to these spinful charge states are U = 3.1 meV and U = 2.7 meV, respectively, obtained from Coulomb-diamond spectroscopy, whereas the gap singularities appear at |∆| = 0.27 meV.The condition U ∆ places the system within the YSR regime 1 .The trace in Fig. 1c was acquired at the electron-hole (e-h) symmetry point of charge state Y, indicated by a vertical arrow in Fig. 1d.
Figures 2a-c show three examples of the effect of changing V S1 for charge state Y. From plots 2a to 2c, the YSR loop shrinks and opens again as the same ground state changes from doublet to singlet via gate control of Γ S 20, 21 .Figures dI/dV sd (e 2 /h) 2d-f show in turn the temperature dependence of the YSR peaks at the e-h symmetry point of the respective colormaps in Figs.2a-c.As the temperature increases, the pair of peaks which corresponds to doublet ground state and is closer to ∆ in low-temperature bias position splits apart (Fig. 2d).Strikingly, when the initial bias position of the peaks is roughly the same, as in Figs.2e and 2f, the pair of peaks which corresponds to singlet ground state (Fig. 2f) goes faster towards zero-bias than the pair which corresponds to doublet ground state (Fig. 2e).In contrast, Numerical Renormalization Group (NRG) calculations of the spectral weight of YSR peaks in the single-impurity Anderson model with a conventional superconducting lead have predicted a temperature-independent peak position for a constant gap 12 , and the opposite behavior to our observations for a temperature-dependent ∆ 12,13 .To the best of our knowledge, no current models account for this behavior.
To obtain a quantitative description of the variation of the peak position against temperature, we fitted to a Gaussian the YSR peak at positive bias from the three datasets in Figs.2d-f and from five more datasets taken at intermediate peak positions; all of them at the e-h symmetry point of charge state Y.The temperature range covered by the fit (from 22 mK to ≈ 550 mK, or from 0.01∆ to ≈ 0.25∆) corresponds to the low-temperature regime 12 , where ∆ is constant (T T c = 2.2 K) and significant quasiparticle thermal excitation is not expected to occur.At 22 mK (550 mK), the quasiparticle density in the Al lead is theoretically estimated as is the density of states of Al at the Fermi energy [42][43][44] .We observe, however, that the subgap conductance increases with temperature, indicating non-negligible quasiparticle thermal excitation (see Fig. 9f under Methods).
Figure 3a shows the extracted evolution in temperature of the position of the peaks, five of which correspond to doublet ground state (in red) and three to singlet ground state (in blue).As elsewhere in this work, error bars correspond to standard deviation.The datasets have been fitted to parabolas y = a 0 + a 1 T + a 2 T 2 (solid lines) in order to indicate that they do not change faster than T 2 .Black circles pair datasets of singlet and doublet ground states whose initial bias position roughly match.The qualitative picture extracted from Fig. 2 is corroborated for such pairs; namely, when having approximately the same initial bias position, datasets of singlet ground state shift faster towards zero bias than datasets of doublet ground state.In addition, a new detail is worth mentioning.The curvature of the datasets of doublet ground state changes from positive to negative as V sd → 0; i.e., as the peaks are biased away from ∆.In the case of the datasets of singlet ground state, the curvature becomes more negative as V sd → 0 for the data available.
This ground-state and bias-position dependent behavior can be parametrized by the exchange coupling, J.In Fig. 3b, we plot the endpoints of each dataset in Fig. 3a as a function of g = πJSD(E F ), where S is the spin.We convert YSR peak position to g using E Y SR = ∆ 1−g 2 1+g 2 , valid in the classical spin limit 1,26 .YSR peaks of doublet ground state whose low-temperature bias position is closer to ∆, corresponding to small g, shift towards ∆ as the temperature is increased.The shift direction is reversed when g is tuned towards the doubletsinglet ground state transition.The reversal occurs between g = 0.7 − 0.85, when the YSR peak of doublet ground state of lowest bias position shifts towards zero bias.When g = 1.2 − 1.5, after the transition occurs and the ground state is a singlet, the remaining datasets shift towards zero bias.
In Fig. 3c, the YSR peak position across charge state Y is shown to shift towards larger bias with temperature, indicating that the observed temperature-shifting behavior is not exclusive of the e-h symmetry point.The dataset used to extract the plunger gate dependence of the peak position at various temperatures is shown under Methods in Fig. 9.
We now turn our attention to the peak height and width dependence in temperature against an increase of the coupling of the quantum dot to the normal lead.We plot in Figs.4a-c three examples of the effect of changing V N within charge state X in the doublet ground state.From left to right plots, the conductance of the subgap states is significantly enhanced.We interpret the enhancement of the conductance as stemming from a decrease in the Γ N , Γ S asymmetry r from Γ N Γ S to Γ N ∼ Γ S due to an increase of Γ N .We can offer an order of magnitude of this asymmetry from the relation peak height= 2e 2 /h × 4r/(1 + r) 2 , where r = Γ N /Γ S 20, 45 .By fitting YSR peaks with Gaussians at the e-h symmetry point of charge states X and Y across the entire gate space (V N , V S1 ) explored, we obtain a peak height range of 0.003e 2 /h → 1.5e 2 /h, or Figures 4d-f show colormaps of the temperature dependence of YSR peaks at the e-h symmetry point of each of the examples from Figs. 4a-c.In confirmation of our previous observation, the YSR peaks, which are of doublet ground state and away from zero bias, split apart as the temperature is increased.We highlight another observation; as the temperature rises the peak height is decreased and the peak width is increased.
In Fig. 5 we summarize quantitative data on this effect.Figure 5a shows a plot of decreasing peak height curves and increasing FWHM curves as temperature is increased.These were acquired from a fit of the positive-bias YSR peak in the three datasets of Figs.4d-f plus four additional datasets, all taken at the e-h symmetry point of charge state X of doublet ground state.All curves were fitted to parabolas y = a 0 + a 1 T + a 2 T 2 , to indicate that they do not change faster than T 2 .For comparison, we plot the linear broadening 3.5k B T due to the Fermi-Dirac distribution of the normal lead.The slope of the FWHM data is smaller than 3.5k B T below 0.2 K, and larger than 3.5k B T above 0.4 K, while its magnitude is larger than 3.5k B T , indicating that thermal broadening by the metallic lead is not the only broadening mechanism.Note that additional thermal broadening is expected in the S-QD side even in the absence of a metallic lead 12 , whereas the tunnel coupling to the normal lead at zero temperature can broaden the peaks even further 1,24 .
While the upper bound of the theoretical conductance of YSR peaks is 2e 2 /h 25 , in the presence of a finite quasiparticle relaxation tunnelling rate to a continuum of states, an increase of relaxation rate or temperature leads to a decrease in peak conductance as ∼ 1/(Γ + T ), where the lifetime Γ includes the relaxation rate 22,25 .In the same formalism, the FWHM scales as ∼ (Γ + T ).Therefore, the product of FWHM and peak height, which provides the area of the peak, is a constant independent of temperature.The constant is equal to 1 if the product is normalized by its value at T = 0. Surprisingly, the products of the peak height and FWHM of the seven datasets in V sd (mV) (from 0.3∆ to 0.7∆).For comparison, a constant dashed line equal to 1, predicted by the relaxation formalism, is also shown.
Finally, we report results from a second device fabricated from a nanowire shadowed during in-situ Al deposition by a thick and shorter nanowire 39 .The resulting Al/nanowire heterostructure, shown in 6a, eliminates the need to etch Al to form the junction.Figure 6b shows a scanning electron micrograph of the device.Side gate V G1 was used as plunger gate, while side gate V G2 and a substrate backgate V bg were used to bring the wire close to charge depletion.YSR excitations of doublet ground state form loops identified by their smaller size than their adjacent counterparts, as exemplified in the dI/dV sd (V sd ,V G1 ) map of Figure 6c.From Coulombdiamond spectroscopy, we determined the charging energy of the associated spinful charge state indicated by Z as U = 1.1 meV, and ∆ = 0.195 meV.As in the previous device, the sum of two Gaussians fits YSR peaks in dI/dV sd (V sd ) traces up to 5/12 In Figs.6e,f, we show the peak position, height and FWHM extracted from fitting the temperature dependence of four charge states of doublet ground state at their e-h symmetry points, including state Z.The qualitative similarities of the data in both devices is noticeable.As before, the peak-position datasets in Fig. 6e exhibit a bias-dependent change of curvature with temperature.Similarly, in Fig. 6f the FWHM and peak height vs. temperature datasets obey opposite trends, while the FWHM shows a curvature increase with respect to 3.5k B T .
Nonetheless, there is a quantitative difference.In Fig. 6g, we plot the product of peak height and FWHM normalized by their values at 30 mK, the lowest temperature at which data was recorded for this device.The four datasets collapse into the same curve, resembling the result from the previous device.However, the curve into which they collapse grows with T 3 , whereas that of the previous device grew with T 2 .
Discussion
As commented above, a Kondo singlet with Kondo temperature T N K can form with the normal lead in the QD-N part of the S-QD-N system 26 .To address this possibility, we estimate the temperature of the Kondo resonance in the isolated N-QD system.T N K then depends on gate voltage through the level position, ε 0 , of the QD as 2 e πε 0 (ε 0 +U)/ΓU , where Γ is the linewidth of the level 10 .Due to the sensitivity of the exponent to changes in Γ = Γ N , a small variation in Γ N at the e-h symmetry point results in a large change in T N K .At the singlet-doublet transition point 5 , k B T K = 0.3∆ = 81 µeV for U = 3.1 meV, which we can use to estimate Γ S = 1 meV and an upper bound of T N K ≈ 1.7 mK for the YSR peaks of doublet ground state of largest conductance (r = 0.3).Such a small T N K indicates that the Kondo effect of the normal lead is not playing an important role at the e-h symmetry point.
In view of this, the role of the N-QD part of the system is reduced to a non-perturbing tunnel probe, and an explanation for the ground-state and peak-position dependent YSR peak behavior against temperature is to be found in the remaining part of the system, QD-S.A trivial reduction of Γ S with temperature, which would increase monotonically the energy of the singlet state, is ruled out based on the non-monotonic behavior of the curvature of the peak-position datasets depending on initial V sd position.In turn, the f (T ) = T α dependence extracted from the normalization of the peak area has a less evident origin.Phonon-mediated quasiparticle relaxation could in principle provide temperature-dependent broadening in our N-QD-S setup 46,47 , but has so far only been used phenomenologically in analyzing the outcome of the S-YSR setup, in which an additional superconductor probes YSR excitations, leading to the need of deconvolving intrinsic YSR effects from 6/12 those of the superconducting probe 48 .
In the presence of a significant relaxation tunnelling rate from YSR subgap resonances to a continuum of states, an asymmetry of the height of the peaks in positive-negative bias voltage is expected 25 .However, this additional tunnelling rate theoretically results in a Lorentzian YSR peak lineshape 25 , while we observe Gaussian lineshapes.In addition, the temperature dependence of the YSR peak area remarkably deviates from the expected dependence given by the relaxation model presented in Refs. 22,25 Despite these inconsistencies, the height of the YSR conductance peaks is asymmetric in bias voltage even at the e-h symmetry point, as it is readily seen from Figs. 1 and 6, in apparent agreement with an important relaxation tunnelling rate 25 .It is unclear which of these observations are determinant arguments in favor or against the existence of a finite relaxation tunnelling rate in our devices.
Majorana zero-modes, which can arise in Rashba semiconductor nanowires coupled to superconductors under a properly oriented magnetic field, can give rise to zero-bias differential conductance peaks in tunnel spectroscopy 49,50 .Nevertheless, peaks from YSR states bear distinctive features from those from Majorana modes.While the lineshape of Majorana peaks is a Lorentzian 50,51 , our data indicates that well-separated YSR peaks have a Gaussian lineshape.Note, however, that when two YSR peaks collapse into a single zero-bias peak (e.g., at a singlet-doublet transition), the Lorentzian lineshape might be harder to rule out (c.f.Fig. 7e).Additional differences can be found in the change of their FWHM with temperature in comparison to 3.5k B T 50 .
To summarize, we have simplified the complex S-QD-N system by employing a hard superconducting gap and noninvasive N probes, while exploiting the gate tunability of YSR excitation and ground state energies.We have extensively characterized the temperature dependence of YSR resonances in two devices, establishing a basis for further experimental and theoretical work.In particular, the origin of shifts in bias voltage of the resonances against temperature is not explained by current models 12,13 .
Methods
Below we provide additional details of the fit and of device fabrication, as well as an independent corroboration of the ground state of the S-QD-N system.
Fabrication of the devices
To fabricate the first device, a 110-nm wide InAs epitaxially half-shell Al-covered nanowire was deterministically deposited on a bed of local bottom gates and additional sidegates were defined during contact deposition.By etching-off the 7-nm-thick Al from the top half of the nanowire and contacting the resulting bare wire with Ti/Au, a N-QD-S junction was defined with a 250 nm channel of bare wire.The Au contact on the Al-lead side was 400 nm away from the channel.The leads ended in large-area bonding pads with a capacitance of ≈ 10 pF estimated by a simple parallel-plate capacitor model to the Si backgate through 200 nm of Si oxide.
To fabricate the second device, a shadowed 90-nm wide InAs wire was deterministically deposited on a Si substrate of similar characteristics as the previous device, leading to similar leads capacitance.Side-gates were defined during evaporation of the ohmic contacts to the bare wire.The barewire channel between the 20-nm thick Al film and the Ti/Au contact was 450 nm long.The Au contact on the Al-lead side was 500 nm away from the channel.
Measurements
The devices were voltage-biased.The DC current was acquired using a digital multimeter, while the dI/dV sd signal was recorded using standard lock-in amplifier techniques.To obtain dI/dV sd an excitation of 3 µV on top of V sd was applied at a frequency of 116.69 Hz for the device in which Al was etched, and of 132.7 Hz for the device in which Al was shadowed.
Details of the fit
We fitted dI/dV sd (V sd ) curves to Gaussians where A + (A − ) represents the height of the positive (negative) bias peak, V sd+ (V sd− ) represents the position of the positive (negative) bias peak, and ≈ 2.35c + (≈ 2.35c − ) represents the width of the positive (negative) bias peak.The fit is good up to the quasiparticle continuum, where the peaks lose weight to it.The fits were done below a temperature at which it was not possible to distinguish the edge of the gap ∆, or below a temperature at which the two YSR peaks merged into one (whichever was the lowest).This temperature limit varied within datasets.Across one dataset, we kept fixed the bias range in which the fit was performed.Figures 7a-d show a typical example of the same pair of YSR peaks at different temperatures.The fit quality does not deteriorate with an increase in temperature.To verify the robustness of the parameters extracted from the fit, we compare in Figs.7f,g the maximum of the peak and its position to the peak height and peak position extracted from the fit.Due to data noise (in Fig. 7d, fluctuations in conductance at the positive-bias YSR peak with respect to the Gaussian fit are 3 × 10 −3 e 2 /h), these two values are slightly different, but follow the same trend.
Gap hardness
A soft superconducting gap is expected to provide additional relaxation channels which should result in extrinsic YSR peak broadening leading to a Lorentzian-shaped peak 25 .In our two devices, the gap is hard as evidenced by the subgap conductance suppression, while the experimentally extracted YSR peak lineshape is Gaussian.Fig. 8
9/12
Variation of the gap against temperature A decrease of the gap against temperature can produce motion of the position of the YSR peaks within the gap in the opposite direction as observed in the experiment 12,13 .In both the etched and shadowed devices we observe a decrease of the gap of no more than 5%.In both cases, we determined this through the temperature dependence of the YSR peaks in the charge state of singlet ground state next to one of the examined doublets.In even charge states, the S-QD-N junction behaves effectively as a co-tunnelling junction without subgap YSR excitations, making this procedure feasible 24 .Figure 9 shows that the ∆ peaks remain constant in bias from 22 mK to 590 mK, despite losing weight.Note that the gap progressively fills with quasiparticle density of states as the temperature is increased.
Determination of the ground state by Zeeman-split spectroscopy
As seen before in Refs. 20,22 nd explained schematically in Fig. 10d, in a finite magnetic field the states of singlet ground state show two excitations corresponding to two spin-resolved excited doublets.This provides a way to distinguish them from states of doublet ground state, which show only one excitation.We verified the ground state of the charge states X and Y to which the datasets of Figs. 1 to 5 correspond by observing the Zeeman splitting of the YSR peaks in an external magnetic field B. Figures 10a-c show that the X and Y loops of doublet ground state expand with B without any visible peak splitting.This growth is due to doublet splitting, as the energy of the spin-down doublet state is decreased.However, adjacent charge states of singlet ground state to the left and right of X and Y show peak splitting with B, with peaks splitting parallel-wise to their edges.We also corroborated the ground state of the Y charge state once it was tuned into a singlet ground state.Figures 10e,f show splitting with magnetic field of the characteristic YSR peaks of singlet ground state 20,22 .Due to the lower critical field of the shadowed sample, this verification could not be performed for the datasets shown in Fig. 6.
Figure 1 .
Figure 1.Gaussian YSR peaks.(a) Sketch of the normal-quantum dot-superconductor (N-QD-S) system.(b) Scanning electron micrograph of the device.Bottom and side gates are false-colored in yellow and orange, respectively.Al appears in blue.(c) Fit of YSR peaks from differential conductance data to the sum of two Gaussian (Lorentzian) curves, shown in blue (black).∆ corresponds to the edge of the superconducting gap24 , and agrees with the measured gap singularities (c.f.Fig.8a).(d) Colormap of YSR peaks vs. plunger gate voltage, measured at V N = −6.82V, V S1 = −6.56V, V bg = −20 V.The color scale has been saturated to highlight the subgap features.
Figure 2 .Figure 3 .
Figure 2. Tuning YSR states across a doublet-singlet ground state transition.(a-c) Colormaps of YSR peaks in charge state Y at increasing coupling to the superconducting lead.(d-f) Temperature dependence of YSR peaks at the e-h symmetry point of maps (a-c), indicated by an arrow.(a) V S1 = −6.42V, (b) V S1 = −6.76V, (c) V S1 = −6.94V. V P is compensated for the change in V S1 .V N and V bg were kept at -6.82 V and -20 V, respectively.
Fig 5a scaled by their values at 22 mK bunch into a single quadratic curve, as shown in Fig. 5b.This occurs despite of widespread change in peak height (of about 2 orders of magnitude), FWHM (from 0.3∆ to 0.5∆) and peak position
Figure 4 .FWHMFigure 5 .
Figure 4. Tuning the magnitude of YSR peaks.(a-c) Colormaps of YSR peaks in charge state X (of doublet ground state) at increasing peak conductance.(d-f) Temperature dependence of YSR peaks at the e-h symmetry point of maps (a-c), indicated by an arrow.(a) V N = −6.77V, (b) V N = −6.22V, (c) V N = −5.7 V. V P is compensated for the change in V N .V S1 and V bg were kept at -6.4 V and -11.55 V, respectively.
Figure 6 .
Figure 6.Data from additional device.(a,b) Scanning electron micrograph of (a) a typical set of as-grown wires and (b) the device.(c) Colormap of YSR subgap peaks evolving in plunger gate voltage across spinful charge state Z. Arrows indicate the position of the gap singularities.(d) Fit to Gaussians of YSR peaks at the e-h symmetry point of Z. (e,f) Temperature dependence of (e) peak position and (f) peak height, FWHM extracted from fitting YSR peaks at the e-h symmetry point of four different charge states of doublet ground state, including Z.All datasets have been fitted to parabolas of the form y = a 0 + a 1 T + a 2 T 2 , to indicate that they do not change faster than T 2 .(g) Temperature dependence of the product of peak height and FWHM scaled by their values at 30 mK.The four datasets collapse in a single cubic curve.
Figure 7 .Figure 8 .
Figure 7. Details of the fit of YSR peaks to Gaussian curves.(a-d) Robustness of the fit against temperature.(e) Zero-bias differential conductance peak obtained from the crossing of two YSR peaks at the right doublet-singlet crossing of state Y in Fig.1d(at V P = −0.006V) fitted to a single Lorentzian (black) and Gaussian (blue) curves.While the Gaussian curve captures better the tail at positive bias of the crossed YSR peaks, it fails to do so at negative bias due to the presence of the superconducting gap edge.(f) Peak maximum and fitted peak height against temperature.(g) Position of peak maximum and fitted peak position against temperature.
Figure 9 . 6 Y 6 YFigure 10 .
Figure 9. Variation of the gap against temperature.(a-e) Colormaps of the evolution of Fig. 1d in temperature.The dashed lines indicate the position of the YSR peaks in the charge state of singlet ground state between charge states X and Y, which are related to the edges of the gap 24 .These do not move with temperature.(f) Positive-bias linecuts through the center of the even singlet sector, representing the temperature dependence of the gap.
shows dI/dV sd (V sd ) traces of the gap in linear and logarithmic scale measured in deep Coulomb blockade in the regime Γ N , Γ S U, in which | 7,194.6 | 2020-02-28T00:00:00.000 | [
"Physics"
] |
Feshbach resonances in the F + H2O → HF + OH reaction
Transiently trapped quantum states along the reaction coordinate in the transition-state region of a chemical reaction are normally called Feshbach resonances or dynamical resonances. Feshbach resonances trapped in the HF–OH interaction well have been discovered in an earlier photodetchment study of FH2O−; however, it is not clear whether these resonances are accessible by the F + H2O reaction. Here we report an accurate state-to-state quantum dynamics study of the F + H2O → HF + OH reaction on an accurate newly constructed potential energy surface. Pronounced oscillatory structures are observed in the total reaction probabilities, in particular at collision energies below 0.2 eV. Detailed analysis reveals that these oscillating structures originate from the Feshbach resonance states trapped in the hydrogen bond well on the HF(v′ = 2)-OH vibrationally adiabatic potentials, producing mainly HF(v′ = 1) product. Therefore, the resonances observed in the photodetchment study of FH2O− are accessible to the reaction.
A. Potential energy surface
The potential energy surface (PES) of F+H 2 O system was constructed using the all-electron spin restricted explicitly correlated singles and doubles coupled-cluster approach (AE-CCSD(T)), together with optimized correlation consistent triple-zeta basis set including RI and MP2 auxiliary sets (cc-pCVTZ-F12) 1 . The spin-orbit coupling (SO) energies were computed with the internally contracted multi-reference configuration interaction method with Davidson correction (iMRCI+Q) and the basis set of aug-cc-pVTZ, using multi-configurational SCF reference functions with 15 electrons in 10 active orbitals, followed by the Breit-Pauli Hamiltomian 2 . All these calculations were performed with the MOLPRO 2012.1 package 3 .
More than 24,000 points were used to construct the PES using the neural networks. These points, covering the asymptotic reactive channels of F+H 2 O, HF+HO, as well as interaction region, were selected iteratively by an effective scheme 4 which was proposed for high dimensional PES constructions. The 6 bond lengths are used for the input layer. The asymptotic channels and interaction region were fitted segmentally to improve the fitting accuracy and efficiency, and connected with smooth switch functions. The switch functions for each part are defined from the bond distances between F and H atoms as (with H atoms sorted to satisfy F−H1 ≤ F−H2 ):
The system Hamiltonian in the reactant Jacobi coordinates for a given total angular momentum tot can be written as ̂= − ℏ 2 2 2 2 + ( tot − 12 ) 2 2 2 + 1 2 2 1 1 2 + 2 2 2 2 2 2 + ( , 1 , 2 , 1 , 2 , ) where is the reduced mass of F and H 2 O, 1 is the reduced mass of H and OH, 2 is the reduced mass of OH. tot is the total angular momentum operator of the system; 12 is the rotational angular momentum operator of H 2 O; 2 is the rotational angular momentum operator of OH, and 1 = 12 − 2 is the orbital angular momentum operator of H 2 O. The reference Hamiltonian ℎ ( )( = 1,2) is defined as where ( ) is a diatomic potential.
The time-dependent wave function can be expanded in terms of the translational basis of R, the vibrational basis ( ), and the body-fixed (BF) rovibrational eigenfunction as The BF total angular momentum eigenfunctions can be defined as where is the parity of the system. ̅ (̂) is the Wigner rotation matrix, depending on Euler angles which rotate the space-fixed frame onto the body-fixed frame and are the eigenfunctions of 2 . 1 2 12 ̅ (̂1,̂2) is the angular momentum eigenfunction of 12 defined as 1 2 12 ̅ = ∑ ̅ 12 (̂1)√ 2 1 +1 4 < 2 1 0| 12 > 2 (̂2) where 2 are spherical harmonics. Note that the restriction (−1) + 1 + 2 + 12 = where ′ is the reduced mass of HF and OH, 1 ′ and 2 ′ are the reduced mass of HF and OH, 1 ′ and 2 ′ are the rotational angular momentum operator of HF and OH, which coupled to form 12 ′ .
The time-dependent wave function can be expanded as It should be pointed out here that the functional forms for the product arrangement basis (primed) are different from the one for the reagent arrangement basis (unprimed).
The BF total angular momentum eigenfunctions can be defined as (11)
C. PCB approach and numerical parameters
In the product coordinates based (PCB) approach 5,6 used here, we prepared an initial wave packet for H 2 O in the initial ground rovibrational state in the reactant Jacobi coordinates, and propagated it for 17000 a.u. from the asymptotic region to =6.0 bohrs. It is straightforward to carry out this propagation, because at that distance, only inelastic scattering process occurs. A coordinate transformation was then carried out to transfer the whole wave packet from the reactant coordinates to the product coordinates. After a continuous propagation for additional 80000 a.u. in the product coordinates, which beyond the range of the hydrogen bond well and strong interaction between HF and OH species, the converged reactive flux and state-to-state information can be obtained.
The wavefunction is propagated using the split-operator propagator. An L-shaped wavefunction expansion for ( ′ ) and 1 ( 1 ′ ) was used to reduce the size of the basis set. We carried out state-to-state calculations for the total angular momentum probabilities are only slightly different from the full 6D ones, indicating the OH bond is a good spectator for the reaction, and can be fixed in its initial vibrational state.
With two heavy atoms (F and O) involved and long-range dipole-dipole interactions in the exit channel, the computation is extremely expensive. To reduce the computational costs, the results in the main text are based on the PA5D calculations.
Since there are two equivalent product channels in the reaction, the reaction probabilities should be multiplied by a factor of 2, if compared with QCT results. | 1,401.4 | 2020-01-13T00:00:00.000 | [
"Physics"
] |
The Virtual Printshop: A case study on using Virtual Reality in early stages of user-centered product design
In the early stages of a product development process (PDP), VR can facilitate communication between designers and product end-users to improve the quality of feedback that users provide to designers. While various forms of VR can already be found in the PDP, they primarily target designers, rather than designers and end-users. Furthermore, available tools and toolkits do not match the skills and requirements of designers in early stages of the PDP. The current paper presents an approach that first determines how to effectively support early stage design activities (referred to as the application) and subsequently provides designers with tools to realize this application themselves. The approach is implemented in an industrial case study involving practitioners from a multinational manufacturer of printing solutions for professional markets. The Virtual Printshop resulting from this case study provides an evaluation platform for various types of early stage product evaluations. A concluding generalization of the cases study results shows that the application can be translated to several other design domains. It was found that there are similarities in how these different design domains integrate VR design tools with their existing tool chains.
I. INTRODUCTION
P RODUCT development is a complex matter. An average Product Development Process (PDP) involves market research, concept development, detailed design and engineering, manufacturing, market release and after-sales activities. Throughout these phases a product evolves from an initial concept (a market insight, a first sketch or idea) to a (physical) realization of the product. A challenge in particular inherent to the early phases of the PDP is the lack of concrete design information. Design information (e.g. product dimensions, cost estimations or user requirements) is either not yet available or scattered amongst stakeholders in a multidisciplinary design team. This lack of information forces designers to make decisions based on scarce, scattered or unreliable information [9] [16], potentially leading to either unsuccessful products or expensive modifications in later stages of the PDP.
User Centered Design advocates the involvement of product end-users throughout the PDP. End-user involvement can improve the information quality and quantity. End-user feedback for instance facilitates concept generation and selection, or identifies usability issues in an early stage [13] [12]. However, with only limited design information available it is difficult to provide end-users with a clear presentation of a product concept and future use context. While traditional boundary objects (means to transfer knowledge between communication partners [3]) such as sketches and mockups properly represent aspects such as style, dimensions or shapes, they can not be used to demonstrate more complex interactive behavior without requiring end-users to interpret technical drawings. Virtual Reality (VR) can extend the collection of early stage prototyping tools by allowing end-users to not only see the future product (which could also be achieved with a concept sketch or mockup), but also experience the product and the interactions within its use context.
The current paper investigates the deployment of VR as a means to facilitate communication between designers and end-users in the early stages of a PDP. As design activities in the early stages of design are different from those found in engineering or manufacturing stages, an approach for identifying early stage design tasks that require support in the communication between designers and end-users. Such an approach is introduced in section III and implemented in an industrial case study, presented in section IV. Section V reflects on the case study as well as the research approach. The paper concludes with an outlook on a more elaborate framework of which this case study is part of.
II. BACKGROUND
VR technologies gradually found their way into the realms of the PDP. In the early '90s VR mostly acted as a layer on top of well established CAD systems [17] for visualisation (e.g. CAVE systems and head mounted displays), and, later, as natural and immersive interfaces for existing CAD systems, such as the VRAx immersive modeling system, the NavIMode CAD interface and the ConstructTool immersive modeling system, all of which are described in [24]. The substantial costs and a focus on large and complex data sets made VR primarily applicable in larger industries such as aerospace and automotive design. Advancements in hardware and software have reduced costs and extended the application scope of VR to simulation, training, prototyping and evaluation purposes [11].
These product design applications generally exploit the ability of VR to allow non-existing products or environments to be experienced in a natural and realistic way. This is beneficial when the real world situation is too dangerous (e.g. a drive simulator, as described by Tideman [20], when an environment needs to be controlled (e.g. in simulation and evaluation as described in [14]) or when physical prototyping is too expensive or simply not possible yet (e.g. virtual prototyping, as described in [4]). The examples of advantageous applications of VR technologies in the PDP illustrate the substantial set of applications available for the field of product design. However, the majority of these applications aim to support collaboration between designers rather than for instance between designers and end-users. Furthermore, applications generally target advanced stages of the PDP, such as engineering and manufacturing. As we are interested in facilitating communication between designers and end-users in the early stages of the PDP, additional preliminary research was conducted to assess the current state of VR in this area.
A. VR in Early Stages of Design
A series of in-depth interviews with over 40 designers, engineers and managers from four multinational companies (involved in automotive design, mechatronic design, mechanical engineering and consumer electronics) showed that the use of VR was mostly limited to the use of CAD systems and 3D displays for engineering reviews. In the early stages of product development, designers sometimes used simplified CAD models for quick visualizations, but did not involve VR technologies otherwise. Nevertheless, interviewees acknowledged the potential benefits of applying VR in early stages of the PDP, after outlining possible applications. It was however difficult for the participants to translate the theoretic VR applications into concrete requirements. The same finding was reported in [7], where participants "had difficulty expressing and developing ideas for specific applications of a technology they had little experience of". In a subsequent VR demonstration session participants were shown various forms of VR technologies being used in design applications, including an augmented reality factory layout application, an immersive drive simulator, a 3D virtual usability test lab and a 3D interactive experience lab [19]. The participants were now able to see and discuss various interpretations of how VR could be used in the PDP. The session ended with a group discussion about how to actually realize these applications. Participants pointed out that in the early stages of development, it is important to be able to work quickly, as creativity can not easily be predicted or guided. It was found to be important for VR design tools to be operated directly by designers, i.e. realizing the VR application themselves instead of being facilitated by other departments or an external company. Furthermore, designers are fond of their tools and very skilled in using them. Ideally, VR design tools should therefore fit existing tool chains rather than replace them.
The preliminary research shows that designers recognize the potential benefits of using VR in the early stages of the PDP, but apparently lack the tools (or awareness of tools) that enable them to realize these applications themselves.
B. Problem Definition
To better understand this mismatch between applications and tools to realize those from the perspective of a product designer, an analogy with an established design tool, Adobe Photoshop, can be made. Photoshop is a tool that is used by the designer to create visualizations that help with showing a product concept to customers. The use of the visualization is referred to as the application of, in this case, graphics software. Designers know what kind of applications they can realize with the tool, and (by training or experience) know how to use the tool to do so. When this analogy is translated back to VR, two issues emerge. Firstly, the majority of existing VR tools originates from research in computer science, and often consists of toolkits that extend programming languages with VR specific functions (see [25] for an extensive survey). Examples include VR Juggler [5], OpenTracker [18] and ARToolkit [10]. While these toolkits provide a good platform for further development by experts, they are by no means usable by non-expert designers; in the analogy of Photoshop, it would be like providing designers with a programming language and GUI libraries to create their own Photoshop. Secondly, user friendly tools such as ComposAR [23] or DART [15] do provide a more accessible authoring tool but reduce the tool's flexibility (e.g. the range of applications). More flexible VR development suites, such as Autodesk's Showcase, VRED Professional or Dassault Systemes' 3DVIA primarily target later stages of the PDP such as engineering and simulation.
The approach presented in the next section will address the gap between the potential benefits of VR applications in early stages of a user centered PDP and the tools available for designers to realize these applications.
III. APPROACH
The research approach is characterized by its participative and hands-on nature, meaning that design practitioners are actively involved in the identification of VR tool requirements and the evaluation of application and tool prototypes. The approach involves five major phases.
1) Application Definition -The first step in the approach is for the researcher and the practitioners collaboratively define the advantageous application of VR within the design process of the participating company. Collaboration between the researcher and practitioners is required to exchange in-depth knowledge of the respective fields of expertise. 2) Application Development -Given the application outline, a functional prototype of the application is developed by the researcher to allow practitioners to experience and evaluate it. Throughout this development, practitioners will be involved to test and refine the application, making sure that the prototype matches the envisioned VR application. 3) Application Review -The functional prototype is used by the practitioners to verify the effectiveness of the VR application. Here the main question is whether or not the use of VR indeed facilitates the intended design activity (as defined in step 1). 4) Tool Selection -After verifying the application the researcher provides the participants with VR design tools (by selecting, combining or adapting existing tools) that enable designers to realize the envisioned application themselves (up to a desired level of customization). These tools are not necessarily the same tools as used by the researcher to develop the prototype, because the skills and requirements of designers differ from those of the researcher. 5) Generalization -The VR application and the accompanying design tools, which are custom-made for one specific company, are presented to companies from various design domains (e.g. automotive, consumer electronics, etc.) in order to assess the validity of the applications and tools across design domains. This approach has been implemented in an industrial case study. The detailed proceedings of the case study are presented in the next section.
IV. CASE STUDY
The case study was carried out for a multinational manufacturer of printing systems for the professional market. The company's design department is primarily involved with the design of user interfaces and user-product interactions, and includes interaction designers, product designers, visual designers, usability engineers and software prototypers. While the end-users of this product are typically trained printer operators, designing a good user interface is challenging because of the technical complexity of the machines, but also because of the various use contexts in which the products are used (e.g. universities, small offices or professional printshops). Consequently, the design department of this company is interested in finding new tools and methods for actively involving their end-users in the development and evaluation of new printers. The following subsections provide a detailed description of how the five phases of our approach were implemented in this case study.
A. Application Definition
The early stages of the company's PDP include several activities in which the designers work with end-users. For example, the designers conduct interviews with end-users, do site visits and invite end-users to evaluate new user interface prototypes. To determine which activities can benefit from VR, the researcher and participating designers need to share and exchange domain knowledge effectively. In a group workshop, which was the first major event in the case study, we applied a participatory design technique based on storyboards to facilitate the exchange of knowledge between designers and the researcher. The use of visual storyboards was inspired by various participatory design methods such as Inspiration Cards [8], Pivots [21] and the Future Technology Workshop [22]. In our workshop the designers were asked to describe design activities by arranging individual frames into a coherent story. Each frame visually depicted a generic event, such as 'working on a computer', 'having a meeting' or 'talking to a customer'. The designers were also asked to insert 'technology frames', which depicted a specific VR technology, such as augmented reality, haptic input devices or motion tracking. The participants (the workshop included a total of 12 participants) first created individual storyboards (see figure 1), which were merged into four group storyboards after a round of presentation and discussion. The four resulting group storyboards visualise different situations in the design process where VR applications are considered useful. For instance, one of the storyboards describes the use of augmented reality to allow end-users to see virtual future copiers in their own office. Another storyboard describes the use of VR to support detailed design of machine components by allowing engineers to inspect the future product in virtual reality. The contribution of the storyboard workshop lies not in the novelty of these applications, but rather in giving VR technologies a familiar context, which makes it easier for designers to identify and discuss requirements for the application.
After participating in the workshop, the designers were able to provide the researcher with a clear description of their challenge, as well as an indication of how they expect VR to face that challenge. According to these results, the designers' primary reason for using VR is to improve the experience that end-users have while they are involved in the evaluation of new printer (interaction) concepts; they should feel 'at home' while operating a printer. The use context influences the interaction between the operator and the printer; ambient noise may distract the operator, or the operator maybe involved in other tasks than printing. Given the influence of the use context on the interaction with printers, designers should take the use context into account during the design and evaluation of the user interface and interactions. However, the dedicated usability lab that is currently used for this purpose (see figure 2(a)) does not represent a realistic use context; it is an empty room with a clinical appearance, while a real use context typically consists of crowded printshops where phones are ringing and customers are calling for attention (see figure 2(b)). An envisioned 'virtual use context' is expected to provide a more realistic, flexible (it can be adapted to match the use context of different end-users) and controlled (designers can decide what does or does not happen in the use context) environment for conducting early stage product evaluations.
B. Application Development
The second part of the case study involves the development of a VR application that provides designers with a virtual use context that can be used for early product evaluations involving end-users. This interactive virtual use context requires an appropriate technical implementation; designers and end-users need to be able to immerse themselves in the environment. Two technical alternatives were discussed with the design practitioners. Mobile augmented reality (AR) could be used to place end-users in the virtual use context and let them act-out work habits and task sequences. Alternatively, a fully virtual environment such as a CAVE or a relatively simple first-person game environment could provide a different experience yet still suit the desired application. In order for design practitioners to assess these technical alternatives, it was decided to develop and evaluate two application directions, namely the 'Virtual Printshop' and the 'Augmented Reality Printshop'.
1) The Virtual Printshop consists of a digital 3D virtual office that is projected on a large rear-projected screen (3x2m). Designers, positioned in front of this screen, use a keyboard and mouse to navigate a first-person perspective through the environment. The application runs on a standard desktop computer and uses the Blender game engine [6] for rendering and controlling the interactive 3D environment (see figure 3(a)).
2) The AR Printshop consists of a tablet PC equipped with a camera. Pointing the tablet on a visual marker will display corresponding 3D models on the tablets display. This allows designers to physically move around while exploring the augmented reality environment, pointing at specific markers. The augmented reality is based on a combination of ARToolkit [10] and the Blender game engine (see figure 3(b)).
An existing printshop has been used as a reference for creating virtual models of office furniture, machinery, layouts and room decorations (see figure 2(b)) that provide a common basis for both applications. Some of the objects are interactive; the printer models have system states, such as 'printing', 'idle', or 'out of paper' that can be changed by user interactions. A 3D authoring tool [6] was used by the researcher to create the two applications.
In addition to evaluating the difference between the Virtual Printshop and the AR Printshop, we were also interested in the required level of realism of the virtual contexts. Without proper SBC Journal on 3D Interactive Systems, volume 3, number 3, 2012 references it is difficult for design practitioners to indicate what level of realism they need for a use context to be effective in product evaluation sessions. Without a sufficient level of realism users may not recognise an environment or objects, or may not take the evaluation task seriously. Creating highly realistic environments on the other hand (visually, but also in terms of audio and interactions) is time consuming and therefore less feasible in the early stages of a design process.
In order to see how the level of realism affects the VR application, both printshop applications were created with two degrees of realism. The high realism applications include visually rich objects (e.g. detailed geometry, photo-realistic textures and realtime shadows), 3D sound and interactive animated models (e.g. moving printer parts). The low realism applications use models with less detailed geometry, no textures, no shadows, regular stereo sound and lack animated objects. Figure 4 illustrates the different levels of realism used in the Virtual Printshop and the AR Printshop.
C. Application Review
The AR Printshop and the Virtual Printshop were deployed in a test case. The aim of this test case was to assess the effectiveness of product evaluations in a virtual use context and to compare the Virtual Printshop to the AR Printshop.
A group of four designers from the company was asked to carry out a product evaluation in both the AR Printshop and the Virtual Printshop, and compare these sessions to the product evaluation sessions in the traditional test environment (i.e. a product evaluation in the dedicated usability lab, see figure 2(a)). The topic of the product evaluation consists of a new paper feed tray, for which three design concepts have been created. Each concept represents specific positions and opening mechanisms of the tray. This topic was chosen because it covers physical interactions between the operator and the product (i.e. operators have to be able to reach the tray), as well as interactions between the user interface and the tray (i.e. the user interface should inform operators about (a) The Augmented Reality Printshop, in which a designer uses an augmented reality tablet (1) to walk around the augmented reality markers (2).
(b) The Virtual Printshop, in which designers (2) operate a first-person perspective 3D environment projected on a large screen (1).
Fig. 5.
The two prototypes of the virtual printshop in use during the application review.
an empty paper tray). While the product evaluation session should also include real end-users, it was decided to only involve designers because of the experimental nature of the applications. During the test session, designers who operated the virtual printshop (i.e. control the keyboard and mouse, or hold the AR tablet) temporarily acted as end-users.
The participants were subsequently introduced to 1) the high realism Virtual Printshop, 2) the low realism Virtual Printshop, 3) the high realism AR Printshop and 4) the low realism AR Printshop. The designers spent about thirty minutes in each of these four virtual printshops, carrying out a use scenario to evaluate the different paper tray concepts. The use scenario (which was the same throughout the evaluation session) involves the following steps.
1) A printer runs out of paper and switches to idle 2) The operator collects a new pack of paper 3) The operator opens the tray and inserts new paper 4) The printer resumes its print job Fig. 6. Layout of the printshop. During the product evaluation, the printer on the lower left runs out of paper and needs to be refilled. The participants collect a new pack of paper from the paper storage. A queue of customers forms at the front-end desk during the evaluation. Figure 5 shows the group of designers as they carry out the evaluations in the two different virtual printshops. Figure 6 illustrates the key elements of the use scenario in a layout of the printshop. After completing the evaluation sessions a group discussion was held to gather feedback on the different types of virtual environments.
The discussion focused on the differences between the Virtual Printshop and the AR Printshop, and the difference between the two levels of realism of the printshops. Given the low number of participants and the experimental nature of the use case we focused on gathering qualitative rather than quantitative feedback. Consequently, the insights regarding the differences between the Virtual Printshop and the AR Printshop, and the difference between the two levels of realism should be considered valid only within the context of this case study.
Overall, both printshop applications allow users to move from one printer to another, and to include workflow elements such as receiving printing orders from customers, postprocessing a print job or doing administrative tasks on a computer. Being virtual, the workflow can easily be adapted to assess the effects of room layout modifications or changes in machines or personnel. In addition to doing product evaluations and workflow analyses, the environments can also be used for generating and quickly evaluating new ideas or communication purposes (e.g. interactive demonstrations of new products).
With respect to the differences between the Virtual Printshop and the AR Printshop, it was found that designers preferred the Virtual Printshop over the AR Printshop. Designers indicated that the augmented reality approach does not really achieve a feeling of being in the printshop; the restricted view through the tablet computer, the lack of walls and the sudden 'popping up' of objects in the augmented reality environment prevent the participants from staying 'immersed' in the virtual world. A benefit of mixed reality on the other hand is that it also simulates physical interactions; designers had to kneel down in order to reach lower paper trays. However, such physical and ergonomical aspects are more easily tested through wooden or paper mockups, limiting the added value of VR in this area. It was found that the level of realism of products should be high, comparable to the high-level demo. A printer in the lowrealism printshop triggers less feedback than a highly realistic printer, and it makes it difficult to assess the dimensions of the object. The realism of the context is less important, but should be slightly higher than the low-level demo (e.g. add shadows, visual cues for interaction). Participants agreed that it is a matter of experience to know what to include (or not) in the context (e.g. is a clock a part of the workflow?). In context visualization, the layout adds sufficient reference for recognizing a certain printshop; chairs do not need to be a 1:1 copy of the real chairs, as long as there are chairs on the correct location in the room. Apart from visual realism, participants also noted that sound significantly affects the sense of realism. The low-realism sound (on or off) was considered confusing, even though it provides a clear indication of printer status. It was concluded that sound should be either realistic (stereo, 3D, interactive) or completely left out.
Following these findings, it is concluded that the application facilitates the anticipated design task, namely early stage product evaluations. Designers indicated that the virtual printshops contribute to a more realistic use experience, thus answering the first question of the test case. Furthermore, based on the feedback from the designers it is decided to focus on the realisation of the Virtual Printshop rather than the AR Printshop. It was also found that even with a lower level of detail, participants still recognize a use context, as long as there are sufficient references to the real-life environment.
D. Tool Selection
Having established the Virtual Printshop as the application, the next step in the case study is to provide designers with appropriate tools to realize this application themselves. Tool selection depends on several aspects, such as the required level of realism of the resulting virtual environment, the available skills (e.g. modeling or programming the environment) and possibly the integration with other tools used in the PDP (e.g. to use data from existing model repositories). Given their experience with the application prototype earlier in the case study, the designers were able to contribute to the tool selection by expressing concrete requirements and preferences. Designers were introduced to three steps required to realize the Virtual Printshop application, and the range of tools available for each of these steps.
• Geometry Modeling -This step involves the creation or importing of model geometry (including shapes, colors, materials, etc.) needed for the virtual environment. In the virtual printshop this includes printer models, furniture and avatars. Tools available for this step range from regular 3D modeling suites and CAD software, to simply importing existing models from internal or external model repositories.
• Scene Integration -Scene Integration involves the creation of a virtual room or area and putting the 3D geometry in this environment. In the virtual printshop the room consists of the printshop room, and the arrangement of printers and furniture within the printshop. Tools available for this step range from 3D modeling suites and CAD software to dedicated interior decoration and layout software.
• Behavior Modeling -The third step involves defining the interactive behavior of objects and the environment.
In the virtual printshop this includes the system behavior of printers (e.g. being able to print and output paper) and the ability of avatars to form a queue at the printshop's desk. Tools available for behavior modeling range from regular programming and scripting languages to visual programming languages and pre-programmed behavior.
The researcher explained how different tools used for each of these tasks lead to different levels of realism and virtuality. For instance, higher levels of realism require more complex modeling tools such as game engines, while low realism environments can be created with easy to use offthe-shelf interior design software. Sharing this information with designers enables them to assess the trade-offs between application characteristics and tool requirements, but also allows for a comparison between the tools needed to realize the VR application (the VR tool chain) and the tools already available within the company. Taking this information into account, the participating designers were able to compose a tool chain and allocate tool chain components to specific departments or disciplines.
• Geometry Modeling is allocated to product designers who already work with CAD models. During the prototype session it was found that some objects, such as printers, should have a relatively high level of realism. These models could therefore be directly imported from the company's existing CAD database. Other objects, such as furniture, have lower requirements with respect to realism (or similarity with a real-life environment) and can therefore be imported from generic 3D databases, such as Google 3D Warehouse [1].
• For Scene Integration, designers prefer a low threshold and easy to use tool rather than a more flexible but complex tool such as a generic game engine. Interior decoration software such as SweetHome3D [2] provide a user friendly way to create virtual environments, and allow users to import other 3D assets (e.g. printers and furniture). This part of the tool chain would be used by usability engineers, who are usually in charge of arranging product evaluations.
• The designers indicated that Behaviour Modeling can be allocated to dedicated prototypers (designers trained in creating interactive software prototypes or mockups), who are already available in the design department. Given their experience with software prototyping the Behavior Modeling tools can focus on functionality and flexibility rather than ease of use.
The tool chain was verified in a series of follow-up workshops. Here designers, usability engineers and prototypers were involved in carrying out their respective parts of the tool chain. Designers and usability engineers used SweetHome3D for creating virtual environments (which they had to do based on e.g. a floorplan and photos of a reference environment). Assets (furniture, printers, etc.) were imported from internal CAD databases as well as public databases such as Google 3D Warehouse. The resulting virtual environments were used in a subsequent workshop in which the Blender Game Engine was used for adding behavior to these environments (e.g. ability to walk through the environment, interact with objects, etc.).
The workshops show that even without specific training in 3D modeling, designers were able to import models from databases and put them in a virtual environment created from scratch. Adding behavior to this environment on the other hand turned out to be difficult even for experienced prototypers. While it was expected that prototypers would be able to use a generic Game Engine for this, it was found that the learning curve of these tools (in this case the Blender Game Engine) are quite steep. In addition to the steep learning curve it should be considered that the tool will not be used on a daily basis, and that not every design department has a dedicated prototyper (or designers with similar skills) available. This bottleneck could be addressed, either by providing designers with easy to use programming tools for creating interactive prototypes, or by outsourcing this task to experts such as dedicated virtual prototypers (within or outside the design department).
The Tool Selection phase of the case study allowed designers to compose their own tool chain based on experienced gained during the prototype evaluations. Out of the three tool chain components, two are supported by tools that are sufficiently usable and effective, and integrate well with existing tools and databases.
E. Generalization
Up to this point, the development of the application prototype and the selection of tools have been company specific activities, leading to an application of VR for early stage design tasks for this particular company. In the final stage of the case study we investigated how well the results translate to other companies and design domains; can other companies benefit from a similar application, and if so, do they have different requirements regarding tools? During a group meeting attended by three design companies (a product design agency, a truck design multinational and a machine design multinational), designers were asked to find an analogue of the Virtual Printshop application that is relevant in their own practice, and subsequently indicate how well the accompanying tools integrate with their existing tool chain (see figure 7).
1) Generalization of the Application: After demonstrating the virtual printshop to the session participants, the designers were asked to break down the application into a generic 'virtual context' (e.g. the printshop) and generic 'virtual objects' (e.g. printers, furniture, avatars, etc.). These generic elements were given a new and concrete shape by the designers, for instance by letting the virtual context become a highway and the virtual objects trucks and cars. In addition to describing their virtual environments the participants were also asked to compare their envisioned applications to the pre-defined application and indicate if for instance the level of realism or level of complexity should be above or below the level presented in the case study (i.e. use it as a benchmark).
Two of the three companies were able to identify applications analogue to the virtual printshop in their own design practice.
1) Virtual Bakery Shop -The design agency (A in figure 7) selected one specific product for this session that suits the VR application presented. The selected product is a machine that bakes/finishes bread inside a shop or supermarket. A VR application similar to the Virtual Printshop could be a time saving application in their design process by supporting the communication with their customers. The designers envision a virtual bakery shop, in which their product concept as well as the current machines, objects and people present in the bakery would be represented. Aspects such as safety, product routing and product presentation could be incorporated in the application. 2) Virtual Factory -The machine designers (B in figure 7) envision a fairly straight-forward translation of the original application. They would use a 'Virtual Factory' to show a client (the buyer of a new machine) a realistic representation of the proposed solution. This application would primarily support sales and negotiation phases, but in a way also provide validation of assumptions and design proposals; the client will be able to indicate whether or not a proposed solution meets the requirements. Compared to the original virtual printshop however, the primary aim would not be to evaluate or improve design solutions.
The truck design company (C in figure 7) was unable to describe an analogue application. The only translation would be to use the truck cabin as a virtual context, in which truck drivers can look around and for instance experience future dashboard or cabin concepts. However, this application was not considered very relevant by the company.
2) Generalization of Tools: After identifying analogue applications, the companies were introduced to the tool chain used in the case study to realize the applications and asked to discuss the compatibility of the tool chain with their current tool chains. As shown in figure 7, it was generally agreed that Geometry Modeling is quite well supported by tools currently in use (usually the company's CAD software or model database) and that Scene Integration can be either covered by current CAD software or supported by the tool presented in the case study. Behaviour Modeling is more difficult to integrate with the tools and skills of design departments, as it is not considered a core task of early stage product design. Consequently, nor the people or the tasks are generally available to do Behaviour Modeling. Figure 7 shows that company B 'solves' this by simply leaving out the Behaviour Modeling step; it was argued that even without having an interactive environment, the application would be beneficial. Company A on the other hand indicated that the required tools and skills would be acquired externally rather than leaving out this part of the tool chain altogether. Fig. 7. Three additional companies were involved to generalize the case study results, including a product design agency (A), a machine design multinational (B) and a truck design multinational (C). Etched areas depict tool chain components that were not included in the tool chain of the particular company.
V. DISCUSSION
The Virtual Printshop that was developed in the case study does not yet provide all the functions required for practical use (for instance, the application does not support importing models from external sources). The current proof of concept allowed for a qualitative assessment of effectiveness of the application, but was not sufficiently polished to be used by actual end-users and skew quantitative data. The discussion presented in this section therefore focuses on the experiences and insights gained while carrying out the case study.
The approach as implemented in the presented case study has been successful in creating awareness about VR among design practitioners, and in exploring and refining opportunities for effective applications. The close collaboration with designers ensured that the application is useful in practice; designers constantly indicated whether or not they would 'see this work' in real-life. For example, after presenting a storyboard that illustrated potential applications of augmented reality, a discussion was triggered about practical issues; "do we send an AR kit to our customers, or do we invite them over to do it here?". Interestingly, the discussion did not focus on technical arguments to make this decision, but rather practical ones (e.g. "a customer may not understand how to use the AR kit" or "if customers augment their own use context, they'll have a realistic experience and it will save us time of modeling use contexts ourselves"). Discussions like this not only provide the researcher with a better understanding of practical requirements of VR applications, but also indicate that the participating designers understand the technologies well enough to engage in discussions about it. Another positive side-effect of the close collaboration is that the design department becomes (and remains) committed to participate in the case study; they were part of creating the initial application and like to stay involved in its further development, evaluation and validation. Moreover, a proper understanding of the envisioned application seems to reduce the threshold for designers to start using or learning to use new tools; they are willing to invest time and effort if they are aware of the benefits gained in return.
A downside of close collaboration with practitioners that was encountered during the case study is the infinite number of 'new opportunities' that emerges while discussing the application. This issue is difficult to handle because the researcher, who in the end is in charge of the development, needs to decide whether or not a new opportunity should be taken into account. Some of the opportunities are low hanging fruits, meaning that the application is improved or extended without significant development effort. For example, designers indicated that the virtual printshop could also be used to discuss and communicate room layouts and related issues, such as the impact of the layout on total costs, the environmental impact or maintenance. While this use of the virtual printshop is different from the originally envisioned application, it does illustrate its versatility which in turn can help the adoption of the application within the company. Other opportunities and ideas proposed during discussions are less easy to implement, and do not always contribute to the application. It frequently occurred that designers proposed to use technologies (e.g. motion tracking suits or 3D displays) without motivating why they would want to use it. In these cases it is important for the researcher (or in general; the facilitator of these discussions) to assess the usefulness of adding technologies.
VI. CONCLUSION In this paper we addressed the gap between the potential benefits of VR applications in early stages of a user centered PDP and the tools available for designers to realize these applications. In the presented case study we identified a useful application of VR for the participating design department, and provided a selection of tools that allows designers to realize the application themselves. While the resulting Virtual Printshop is a relatively low-end form of VR it provides an effective facilitating role in early stage design activities. The test case showed that reviewing and acting out workflows in the Virtual Printshop is considered a valuable addition to existing methods, mainly because the virtual environment provides a realistic and familiar use context. Acting out the workflow in these contexts triggers participants to express knowledge and feedback that might otherwise be left out. A generalization of the cases study results showed that the application can be translated to several other design domains.
With respect to tools, similarities were found in how different design domains integrate VR design tools with their existing tool chains. Designers prefer to import 3D models from existing repositories rather than modeling everything themselves, even if this results in more accurate models. Gathering models and integrating this in a virtual environment is considered a feasible task for design departments, either with existing tools or with tools available elsewhere on the market. Behavior modeling (e.g. programming the 3D models and environments) is considered a difficult skill that is not always available within design departments. Given the low use frequency of the tool and required investment in time and money, training designers to do this themselves is not always desirable. Alternatively, more user friendly (lower threshold) tools are to be found or created to cover this part of the tool chain.
A. Future Work
The Virtual Printshop originally aimed to facilitate communication between designers and end-users. The test sessions in the case study however only involved product designers, mainly because of the experimental status of the VR applications. In follow-up projects, the company continued working with some of the tools (in this case a combination of SweetHome3D and Google 3D Warehouse) to improve communication with clients. These follow-ups hopefully lead to opportunities to further evaluate the Virtual Printshop with actual end-users.
The presented case study is the first in a series of three industrial case studies in which the approach is implemented. Each case study features a different industrial partner, allowing for a comparison of the individual case study results as shown in figure 8. The resulting 3x3 matrix provides the content of a more elaborate framework on how to use VR to facilitate user centered design activities in the early stages of a PDP. Furthermore, the case studies allow us to iteratively improve the approach presented in this paper, leading to a more founded method for identifying useful and usable VR applications in the early stages of a user centered design process. | 9,773.4 | 2013-01-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Projective Geometry as a Model for Hegel’s Logic
: Recently, historians have discussed the relevance of the nineteenth-century mathematical discipline of projective geometry for early modern classical logic in relation to possible solutions to semantic problems facing it. In this paper, I consider Hegel’s Science of Logic as an attempt to provide a projective geometrical alternative to the implicit Euclidean underpinnings of Aristotle’s syllogistic logic. While this proceeds via Hegel’s acceptance of the role of the three means of Pythagorean music theory in Plato’s cosmology, the relevance of this can be separated from any fanciful “music of the spheres” approach by the fact that common mathematical structures underpin both music theory and projective geometry, as suggested in the name of projective geometry’s principal invariant, the “harmonic cross-ratio”. Here, I demonstrate this common structure in terms of the phenomenon of “inverse foreshortening”. As with recent suggestions concerning the relevance of projective geometry for logic, Hegel’s modifications of Aristotle respond to semantic problems of his logic.
Introduction
Modern mathematical logic is standardly thought of as commencing around the middle of the nineteenth century with the work of George Boole and Augustus de Morgan, although, largely unbeknownst to the participants of this movement, a similar attempt to apply algebra to ancient syllogistic logic had been pursued by Leibniz almost two centuries earlier.A few decades after Boole, however, a different approach would be launched with Gottlob Frege's "classical" logic, later championed and developed by Bertrand Russell.While each movement looked to mathematics, each looked to different branches of the discipline and conceived of the relation of their logics to mathematics in different ways.In contrast to algebra, the Frege-Russell strand looked to analysis and, moreover, conceived of logic not as mathematics but as an autonomous discipline providing its rational foundation. 1 Within the emerging analytic paradigm of philosophy in the early twentieth century, the Frege-Russell approach would triumph over the earlier "algebra of logic" tradition stemming from Boole, 2 as well as over traditional Aristotelian forms of logic thought to be ultimately tied to an inadequate "subject-predicate" conception of logical form.One victim in particular would be the logic of Hegel.Recently, historians of the early years of the modern classicist movement have broadened the mathematical context within which it developed beyond analysis and algebra, with a number of investigators looking to the role of the nineteenth-century discipline of projective geometry-a discipline that had been singled out in the 1930s by Ernest Nagel [1] as particularly relevant.In fact, before his turn to foundational and logical issues, Frege had worked in projective geometry, and the relevance of this discipline has been raised especially in relation to addressing various semantic shortcomings apparent in the early forms of classicism, e.g., [2,3]. 3Another example of such a possible role for projective geometry has been suggested by Pablo Acuña [6] (p. 8) with the suggestion that Wittgenstein, in describing the perceptible sign of a proposition as a "projection of a possible state of affairs" [7] ( § 3.11) may have had in mind the specific status of projection in projective geometry.
In this paper, I explore the idea of the involvement of a form of geometry with many of the features of modern projective geometry in Hegel's earlier attempts in the nineteenth century to rejuvenate Aristotle's syllogistic.Within the ancient mathematical culture to which Hegel was appealing, however, this was not identified primarily as a form of geometry, but rather as a mathematical theory of musical harmony.
Projective geometry would be a major area of innovation in nineteenth-century mathematics, but it is not always acknowledged that its roots extended back to antiquity with the work of Pappus of Alexandria (c.290-350 CE).Pappus, however, had sought to preserve work from earlier times and had taken structures at the heart of projective geometry from Apollonius of Perga (c., and hints at earlier associations with the Pythagorean music theory can be found in the name that would be eventually given to projective geometry's principal "invariant", the "harmonic cross-ratio". 4 Pappus's early steps in projective geometry had been revived and built on in the seventeenth century by Girard Desargues [8], a French mathematician and engineer and contemporary of Rene Descartes.Although Desargues had a few initial followers, notably the young Blaise Pascal, his work would fall into neglect, swamped by the success of the analytic geometry introduced by Descartes's Géométrie in 1637.Desargues's alternative geometry would, however, reemerge in the early nineteenth century, an early expression of which-perhaps the earliest-was the book De la Corrélation des Figures de Géométrie by the French military engineer and hero of the French Revolution, Lazare Carnot, a German translation of which Hegel had in his library [9] (p.673).
It is known that Hegel had become intensely interested in geometry around the time that Carnot's book was published in 1801, and that his reading of ancient geometry had been influenced by thinkers from the Platonist tradition in antiquity, such as Proclus [10].Along with this, Hegel's thinking about astronomy had been strongly influenced by the early seventeenth-century astronomer Johannes Kepler, who, with his theory of optics, had contributed to Desargues's geometric project and who had, like earlier Neoplatonists, championed Plato's music-based cosmology.In his 1801 Dissertation completed at Jena [11], Hegel, to the disparagement of many contemporaries, ventured into this "music of the spheres" tradition, and while such an approach to the physical world was, and remains, easy to dismiss, I will argue for its significance via its relation to projective geometry itself.While Hegel may or may not have consciously grasped this link to Plato in Carnot's work, he would nevertheless attempt to rejuvenate Aristotle's syllogistic project in ways that reflected something of both Plato's earlier music-theoretical approach to spatial relations and Carnot's projective geometry.The result would have surprising consequences for the relation of Hegel's Science of Logic to the modern science of logic as it continued to develop beyond the form found in Frege and Russell. 5 In Section 1 below, I briefly review the relation of logic to geometry in ancient Greece, against the background of two different approaches to a problem facing Greek mathematics, that of the discovered incommensurability of continuous and discrete magnitudes.Of these, an earlier Pythagorean-influenced solution can be seen in Plato, while a later solution more in line with Euclidean geometry can be found in Aristotle.In Section 2, it is argued that, while the latter approach would inform Descartes's analytic geometry in the seventeenth century, the former would inform Desargues's initially unsuccessful projective alternative.Then, in Section 3, a common underlying mathematical structure to both music theory and projective geometry is explored via the phenomenon of "inverse foreshortening".In Section 4, the role of musical proportions is examined, while in Section 5, Desargues's projective geometry is examined in relation to the type of "science of perspective" that Leibniz had attempted to introduce in the seventeenth century-a science that, like projective geometry, had built upon earlier theories of perspectival representation in painting and architecture that had flourished in the Renaissance.Finally, in Section 6, distinctive features of Hegel's logic are considered as expressing a projective equivalent of Aristotle's more "Euclidean" syllogistic.
Logic and Geometry in Ancient Greece
The origin of the science of logic within European culture is typically placed around the middle of the fourth century BCE with Aristotle's development of his system of syllogistic logic as presented in his Prior Analytics [13].In 367, Aristotle had, as a young man, joined Plato's Academy in Athens, remaining there some twenty years, and during this time, mathematics had been a major concern of both Plato and many other academicians.This had predominantly taken the form of work on geometry, with the development of approaches that would be later codified around the end of the fourth century by Euclid in the thirteen books of his Elements.Given this prominence, it is not surprising that various authors have speculated about geometry having shaped Aristotle's logic.John Corcoran [14] (p.284), for example, has described Aristotle's logical achievements as "unthinkable without the emphasis on deductive reasoning in geometry that he had found in Plato's Academy", while Vangelis Trianatafyllou [15] (p.10) notes, in the light of the "encompassing geometricocentric paradigm" of Greek scientific thinking, "it would indeed make perfect sense for logical methodology to mirror-up to a certain degree-that of geometry".While Aristotle's new discipline was not "mathematical" in the modern sense, comments such as these suggest that it had nevertheless been modelled on the extant discipline of geometry.
However, it would seem that what we now know as "Euclidean geometry" had not been the only approach to mathematics during the years between Plato's founding of the Academy in the mid-380s and Euclid's codification of that science around 300 BCE.The Greeks did not have an equivalently developed number theory, and what they did have was largely restricted to the theory of musical harmony [16] (p.72).However, some have argued (e.g., [17]) for the influence of the music-theoretical approach to ratios and proportions on Euclidean geometry itself.
Book V of Euclid's Elements containing the theory of ratios and proportions is standardly attributed to Eudoxus of Cnidus, who, while a little older than Aristotle, had joined Plato's Academy about a year before his younger colleague and about twenty years after Plato's founding of the school.Eudoxus's approach to ratios and proportions here are standardly seen as pivotal in the Greek response to a problem that had arisen for Pythagorean mathematicians-that of the incommensurability of ratios of line-lengths with ratios of numbers.It had been grasped that what are now known as the square roots of non-square numbers could not be expressed in ratios of natural numbers.However, Eudoxus, employing a form of reasoning often likened to the way the real numbers would be defined in the later nineteenth century [16] (p.86, n. 14) [18], had shown a way of identifying ratios of lines with ratios of numbers-a solution apparently known to Aristotle.With this, Eudoxus had short-circuited existing Pythagorean attempts to deal with the problem of incommensurability, which appealed to a unity of the three "musical means".
Earlier, ratios (logi) and proportions (analogi) had been discussed in ways appropriate for the three central "proportions" of music theory-numerical relations holding between two "extremes" divided by "middle terms" or "means", of which there were three, the "geometric", "arithmetic", and "harmonic". 6However, the senses of analogos seem to have narrowed between the work of the major early Pythagorean music theorist and cosmologist, Archytas of Tarentum, a rough contemporary of Plato, and that of Aristotle.For Archytas and, seemingly, for Plato himself, all three double-ratios, geometric, arithmetic and harmonic, were called analogi [17,19,20].Archytas had thought of these three ratios as relevant to astronomical study-geometry, arithmetic, astronomy, and harmonics being conceived as "sisters" [21] (p.37).
By the time of Aristotle, only the geometric double-ratio or "mean proportional", defined as an equality of ratios [logi], as in a:b = c:d, was called a proportion [analogos].This definition as given by Aristotle [22] (1131a31) and Euclid [23] (Bk.5, def.4) effectively coincides with what is known as a proportion today, and Eudoxus's general theory of ratios and proportions seems to have undercut the earlier appeal to a "unity" existing among the three means, a unity holding despite the incommensurability between the geometric mean, the calculation of which would have required square roots, and the other two.
With Eudoxus's innovations, the earlier "musical" solution to the problem of incommensurability seemed to have fallen by the wayside, at least for several centuries.Nevertheless, traces of the earlier pre-Eudoxean history of the complex relations between arithmetic, music, and geometry could still be found in a feature of Aristotle's logic, as Aristotle's technical vocabulary found in Prior Analytics Book 1, involving intervals, extremes and means, had originated in the theory of musical harmony [24,25].However, given the innovations of Eudoxus, it might be assumed that for Aristotle any continuity with music theory itself had become entirely nominal, such terms by then having shed their specifically musical connotations.
This suspicion is supported by a consideration of the parallel between Aristotle's account of the first-figure syllogism in the Prior Analytics with a passage from a text on music theory, the Sectio Canonis [26] (p.158-159), 7 that, while usually attributed to Euclid, was probably based on earlier work.Thus, Aristotle has it that, "if A is predicated of every B, and B of every C, it is necessary for A to be predicated of every C" [13] (25b32-26a3), while the author of Sectio Canonis writes, "let there be an interval BC and let B be a multiple of C; and let it be that as C is to B, so B is to D. I say surely that D is a multiple of C. For since B is a multiple of C, C therefore measures B. Now as C was to B as B was to D, so C also measures D" [26] (p.239).
It is evident from the Sectio Canonis that, here, "measure" is being used in the sense of "divide", since "C measures D" if "D is a multiple of C", and significantly, in Metaphysics Bk VIII, Aristotle gives a numerical analogy involving division for the relation of concepts within a definition: "For definition is a sort of number; for it is divisible, and into indivisible parts [. ..] and number is also of this nature" [27] (1044b34-35). 8In short, the parallel effectively models the way Aristotle unpacks definitions in inferences, such that concepts will be thought of as contained in other concepts much in the way that numbers are contained as factors in subsequent numbers of a geometric sequence.Thus, despite the "musical" analogy, there is no reference to the other two musical means in this model, a fact that sits with Reviel Netz's linking of the ratios and proportions found in Sectio Canonis to Euclid's (or Eudoxus's) treatment of ratios and proportions in Book V of Elements [16] (pp.65-67).
Plato, who would have been about 60 years old by the time both Aristotle and Eudoxus had become active in the Academy, had clearly adhered to the earlier link between musical and geometrical ratios and proportions as manifested in the dialogue Timaeus.While dismissed by Aristotle as no more than a metaphor [29] (290b12-14), this link would be resurrected in the first century BCE by Neoplatonists [16] (p.394) and would persist through the Middle Ages, especially in Christianised form via the work of Boethius, and would be revived again in the Renaissance by Ficino and others.If, at the turn of the seventeenth century, its continued use by the astronomer Johannes Kepler was starting to look dated, its being broached by Hegel two centuries later would surely have looked distinctly eccentric.However, underlying the Pythagorean-Platonic astronomy was, I will argue, a distinctive and modern non-Euclidean geometry-perhaps carried by its implicit optics-in which the three "musical means" played a significant role.
Projective Geometry from the Greeks to the Seventeenth Century
In Descartes's analytic geometry, the then recent discipline of algebra, a largely non-Greek form of mathematics derived from Arabic and Indian sources, 9 could be brought to bear on figures of Euclidean geometry by its device of orthogonal "x" and "y" coordinates.This allowed a metric to be applied to continuous geometric magnitudes in a way that could not be envisaged by the Pythagoreans because of the problem of incommensurability.From the perspective of the seventeenth century, however, this Greek problem had resulted from the restriction of Greek numbers to the natural or "counting" numbers, 1, 2, 3, 4, etc., a restriction that had been overcome by the incorporation of new number forms that had by then been adopted.These included "rational" numbers, 10 zero, negative numbers, and, importantly, the so-called "irrational numbers", such as square roots of non-square numbers such as 2, 3, or 5. Thus, while the Pythagoreans had not been able to give a numerical value to the diagonal of a square of side one unit, this could now be expressed by the new number, √ 2. From this modern perspective, Eudoxus's innovations in relation to ratios and proportions could be taken as pointing in this modern direction.
Descartes's analytic geometry would be spectacularly successful and would largely eclipse the rival geometry of Desargues. 11In contrast to analytic geometry, Desargues' did not allow for the application of a metric to the figures of Euclidean geometry.As the title of his work suggests, his focus was the "conic sections", the circle, the ellipse, the hyperbola, and the parabola, which had been conceived by Apollonius in antiquity as produced by sectioning a cone on different angles.Work on the conic sections had been revived a few decades before Desargues by the astronomer Johannes Kepler, who, in a work of optics, had taken this concept further than Apollonius by treating these seemingly different shapes as different "projections" of a single shape, the circle [30].That is, ellipses, parabolas, and hyperbolas were conceived in a way that so-shaped shadows might be "projected" onto differently inclined flat surfaces by a light source interrupted by an opaque circular disc.In this context, the type of metric introduced by Descartes was not relevant, as now the focus was on the relative relations between points on different projectively linked figures.This, however, would produce a further need within projective geometry.
In Euclidean geometry, line-lengths and the angles between them are fixed or "invariant", but this is no longer the case in projective geometry, where a line-segment, for example, might be projectively equivalent to another of different length. 12This loss would imply the need for some other source of invariance-some feature that was invariant across different projections.This would be provided by a peculiar double-ratio holding among four points on a line, now known as the "harmonic cross-ratio", that would be invariant under projection. 13Earlier constructions of this object could be found in Pappus and Apollonius.
Another of Kepler's innovations that would find its way into Desargues's geometry was the idea of "points at infinity" [30].If ellipses and parabolas were thought of as projections of a circle, there should be some type of correspondence between their respective parts, for example, between the centre of a circle and the foci found in ellipses and parabolas.The two foci of an ellipse, for example, might be thought to coincide when the ellipse was squashed into a circle.Similarly, stretching an ellipse might be thought to further separate the foci, with one eventually coming to exist at an infinite distance from the other.Now, the resulting visualizable figure would be a parabola.
The incorporation of points at infinity into projective geometry would have crucial consequences for this approach.A line can be determined by any two points through which it passes or "joins", and similarly, a point can be determined by any two lines that intersect or "meet" at it.In projective geometry, however, all points and lines can be so defined, as every pair of lines-including parallel lines-are defined as meeting.There is thus a complete symmetry between points and lines: every pair of points define some particular line, and every pair of lines define some point.This "duality" of points and lines means that, for every theorem concerning a certain structure holding among points, an equivalent theorem exists concerning an analogous structure holding among lines.
Points at infinity would also be found within another source of Desargues's geometry, the various theories of perspective that had developed during the Renaissance in relation to the depiction of perspective in painting and architectural drawing [31].Artists of the fifteenth and sixteenth centuries had been principally concerned with the "projection" of three-dimensional objects onto the artist's two-dimensional picture plane, and the results of these types of studies are on display in the foreshortening seen in Raphael's fresco at the Vatican, The School of Athens, painted in 1530 (Figure 1, below).In this, line lengths that are objectively equal, such as the edges of the square floor tiles, become smaller as they are portrayed as receding into the distance, and lines that are objectively parallel appear to converge towards a "vanishing point".Standardly, the vanishing point had been depicted on, or just above, the horizon, but the horizon is blocked in Raphael's painting and, as pointed out by Bigelow and Leckey [32], the converging "parallel" lines actually converge on, and so draw attention to, a book in the hand of the left-most of the two central figures.That figure is meant to portray Plato, and the book he is holding is the Timaeus.
fresco at the Vatican, The School of Athens, painted in 1530 (Figure 1, below).In this lengths that are objectively equal, such as the edges of the square floor tiles, be smaller as they are portrayed as receding into the distance, and lines that are object parallel appear to converge towards a "vanishing point".Standardly, the vanishing had been depicted on, or just above, the horizon, but the horizon is blocked in Raph painting and, as pointed out by Bigelow and Leckey [32], the converging "parallel" actually converge on, and so draw attention to, a book in the hand of the left-most o two central figures.That figure is meant to portray Plato, and the book he is holding Timaeus.The figure next to Plato is Aristotle, and the foreground figures are clearly div into two groups.The group on Plato's side includes Pythagoras (the figure writing opened book in the bottom left-hand corner) and that on Aristotle's side, Euclid (the responding figure in the bottom right-hand corner, bending over a slate and hold compass).The implied association of Plato with Pythagoras (and the corresponding trast with Aristotle and Euclid) is reinforced by the references within the painting to opythagorean music theory.In addition to reference to the Timaeus, the work in w Plato had presented his musico-mathematical cosmology, the book in which Pythag is writing contains a reference to a sequence of four numbers, 6, 8, 9, and 12 [32] (pp.420), the so-called "harmonia" or "musical tetraktys" structured as a double-ratio in w 6:9 is taken as equal to 8:12 [33] (p.200).In the Epinomis, Plato, or one of his follow had described this structure as "granted to the human race by the blessed choir o Muses", adding that this gift had "bestowed upon us the use of concord and symme promote play in the form of rhythm and harmony" [34] (991b).Later Neoplatonists as Nicomachus of Gerasa [35] (pp.284-285), Iamblichus of Chalcis [36] (p.50), and Pr [37] (pp.143-145) would identify this structure with that "most beautiful bond" th maeus, the mythical Pythagorean astronomer of Plato's Timaeus (perhaps based on A tas), describes as being responsible for the unity of the parts of the living cosmos [38] 32a).These numbers, 6, 8, 9, and 12, had represented the spacings among the poin viding a vibrating string into the three fundamental harmonic intervals of Pythago music theory: the tonic, here given the value 6, the perfect fourth above it, the value the perfect fifth above it, 9, and the octave, 12.
In the Berlin Lectures on the History of Philosophy, Hegel would claim that Aristotl based his own formal syllogism on a simplified distortion of this bond supposedly ing the various parts of the living cosmos into a unity [39] (pp.209-210).Being fam Euclid) is reinforced by the references within the painting to Neopythagorean music theory.In addition to reference to the Timaeus, the work in which Plato had presented his musico-mathematical cosmology, the book in which Pythagoras is writing contains a reference to a sequence of four numbers, 6, 8, 9, and 12 [32] (pp.419-420), the so-called "harmonia" or "musical tetraktys" structured as a double-ratio in which 6:9 is taken as equal to 8:12 [33] (p.200).In the Epinomis, Plato, or one of his followers, 15 had described this structure as "granted to the human race by the blessed choir of the Muses", adding that this gift had "bestowed upon us the use of concord and symmetry to promote play in the form of rhythm and harmony" [34] (991b).Later Neoplatonists such as Nicomachus of Gerasa [35] (pp.284-285), Iamblichus of Chalcis [36] (p.50), and Proclus [37] (pp.143-145) would identify this structure with that "most beautiful bond" that Timaeus, the mythical Pythagorean astronomer of Plato's Timaeus (perhaps based on Archytas), describes as being responsible for the unity of the parts of the living cosmos [38] (31b-32a).These numbers, 6, 8, 9, and 12, had represented the spacings among the points dividing a vibrating string into the three fundamental harmonic intervals of Pythagorean music theory: the tonic, here given the value 6, the perfect fourth above it, the value of 8, the perfect fifth above it, 9, and the octave, 12.
In the Berlin Lectures on the History of Philosophy, Hegel would claim that Aristotle had based his own formal syllogism on a simplified distortion of this bond supposedly binding the various parts of the living cosmos into a unity [39] (pp.209-210).Being familiar with the relevant Neoplatonic interpreters of Plato, 16 Hegel had most likely followed Iamblichus and others in identifying this bond with the musical tetraktys.Further evidence for this association appears in Hegel's discussion of "ratio" or "proportion" (Verhältnis) in Book 1 of The Science of Logic, where its most developed form, the "power-proportion", has exactly the features of the inverse double-ratio structure of the harmonic cross-ratio [12] (pp.70-79).
It seems to have been Archytas who had calculated the musical means such that while the sequence of octaves had been determined as a geometric sequence in which each term doubles its predecessor, the two consonant intervals within the octave, the perfect fourth and perfect fifth, were determined by the harmonic and arithmetic means of the octave's extremes.We are to understand these different "means" in terms of two fundamentally different numerical sequences, the geometric and arithmetic, which are incommensurable.
In an equally spaced arithmetical sequence of numbers such as 1, 2, 3, 4, 5, . .., the middle of three consecutive terms is half the sum of the other two, their "arithmetical mean" (or average).By contrast, the "geometric" mean is the middle term of three consecutive terms of a geometric sequence, such as 1, 2, 4, 8, 16, . .., in which each successive term is a constant multiple of its predecessor.Here, the geometric mean will be calculated as the square root of the product of its extremes.As noted above, a sequence of octaves, the most harmonious intervals, is determined geometrically, but within the octave, the most consonant note is the perfect fifth, which is determined by the arithmetic mean of the octave's extremes.Archytas's third mean, the harmonic, is the inverse of the arithmetic mean in the context of the underlying geometric sequence, Proclus later summarizing the relation between the three means as such that "the geometric proportion includes the other two and they are reciprocal with one another" [37] (p.145). 17As its reciprocal or inverse, the harmonic mean, b, of a and c will be calculated as 1 divided by the arithmetic mean of 1 a and 1 c or ( ), which reduces to 2ac a+c . 18For the terms 1 and 2 of a geometric sequence, Archytas had calculated the harmonic mean as the ratio 4:3 and the arithmetic mean as 3:2.To obtain a sequence of integers, each term can be multiplied by 6, resulting in the musical tetraktys, 6, 8, 9, and 12.
Archytas is also attributed with having proved that a pair of "epimoric" (superparticular numbers, n and n + 1) such as 1 and 2, or any multiples of such pairs, could not be "divided" by the mean proportional, i.e., the geometric mean [16] (pp.71-72).That is, without irrational numbers, effectively the only means by which an octaval interval could be divided were the harmonic and arithmetic means. 19The two equal ratios, 6:9 and 8:12, of the four numbers of the musical tetraktys would provide an instance of the future harmonic cross-ratio, the principal invariant of this form of geometry.The harmonic cross-ratio can seem confusing, 20 but it is based upon a simple idea.
Consider a segment between points A and B on a line, with that segment divided at a variable point, X, that can move freely within that interval, as in Figure 2 below.The position of X can be said to determine the value of a "division ratio" between the segments AX and XB, i.e., AX:XB, or expressed as a fraction, AX XB . 21It happens that for each point at which X divides AB "internally", another unique point, Y, as displayed in 2b, exists on the line but outside the interval, dividing the interval "externally", such that the two division ratios are the same, that is AX:XB = AY:YB (or AX XB = AY YB ) [8] (pp.83-85).The ratio of these two equal division ratios, with the value of 1, 22 is the harmonic cross-ratio. 23 Logics 2024, 2, FOR PEER REVIEW In the Euclidean plane, a straight line through points A and B will be thought to ext to infinity in each of the opposed directions from A to B or from B to A, aligning with idea that the values of and − are opposed.However, in the projective plane, the p at infinity approached when travelling in the direction from A to B is considered the s point that is approached when travelling in the direction of B to A. Again, this was a p made by Kepler in his work on optics [42] (p.299).In fact, the point at infinity as portra This equality of the division ratios in the harmonic cross-ratio has peculiar consequences for relations within the projective plane.As X moves between A and B in a particular direction, for example, away from B and towards A as in Figure 2b, Y will move in the opposite direction away from B. Moreover, as X approaches the point mid-way between A and B, as in Figure 2c, Y will approach a point an infinite distance from the line segment [8] (p.85). 24If X continues to move past the mid-point in the direction of A, Y will reappear on the line but now on the opposite side of the segment, approaching A from the left as X approaches it from the right.
In the Euclidean plane, a straight line through points A and B will be thought to extend to infinity in each of the opposed directions from A to B or from B to A, aligning with the idea that the values of ∞ and −∞ are opposed.However, in the projective plane, the point at infinity approached when travelling in the direction from A to B is considered the same point that is approached when travelling in the direction of B to A. Again, this was a point made by Kepler in his work on optics [42] (p.299).In fact, the point at infinity as portrayed in Figure 2 above will be just one of an infinite number of such points, each being the point of intersecting parallels pointing in different directions.These points will form a single line at infinity which is closed, like a circle. 25
The Role of the Musical Ratios in Projective Geometry
For Plato and the Platonists, the significance of the three Pythagorean means had extended beyond their role in accounting for the consonant musical intervals of octave, perfect fifth, and perfect fourth because the "unity" holding among them was seen as addressing the global problem of the incommensurability of continuous and discrete magnitudes.An arithmetical sequence is such that both arithmetic and harmonic means can be expressed in ratios of natural numbers, but the calculation of the geometric mean, involving square roots, meant that it could not be properly calculated. 26However, facing the need to find approximations for such quantities in making actual calculations in the context of astronomy, the Pythagoreans are known to have employed algorithms inherited from earlier Babylonian mathematics, 27 and the musical tetraktys itself provides such an algorithm. 28 Taken together, the harmonic and arithmetic means of a pair of extremes provide upper and lower limits for approximate values for the geometric mean of those extremes. 29Moreover, taking the harmonic and arithmetic means of those two means provides an even narrower range of approximate values, and so, iterated in this way, the harmonic and arithmetic means provide a narrowing range of upper and lower bounds for approximations for √ 2. In this sense, for calculations, the geometric mean is "broken" into the other two.
It is such a harmonization of incommensurable opposites like this that Plato seems to refer to in the Timaeus in relation to the bonds required to unify the distinct parts of the cosmos.As Timaeus points out, were the cosmos planar rather than three-dimensional, there would be needed only a single middle term, but as the cosmos is three-dimensional, two are needed [38] (d32a-b). 30That incommensurables are being united in this double bond seems clear from the way Timaeus goes on to describe the ratios in terms of the division of a complex mixture said to combine "the Same" and "the Different", which is "hard to mix into conformity with the Same" [38] (35a-b).Almost a thousand years later, Proclus would describe this unity achieved in the cosmic soul as one in which "the geometric means binds the substantial totality of the soul, for the essence is a single logos [ratio] running through all things and connecting the first, middle and last" while the "harmonic proportion connects all the Samenesses that has been divided in the case of the soul, establishing a common ratio between the extreme terms and yoking together things that are naturally similar" and "the arithmetic binds together the various Differences in the soul's procession" [37] (pp.175-176).As we will see below, a differentiation between objects as grouped in terms of their samenesses, that is, their shared properties, and as distinguished in terms of their differences, will emerge in Hegel's account of logic in his distinction between the conceptual determinations of "particularity [Besonderheit]" and "singularity [Einzelheit]"-this signalling a major departure from Aristotle's syllogistic, in which there is no official place for "singular" as opposed to "particular" judgments [45] (p. 1).
Hegel had appealed to Plato's cosmology in his Dissertation at Jena in 1801, a move treated with derision by many of his contemporaries.However, I suggest that, on the basis of his reading of Plato's own application of the three musical means to the geometry of three-dimensional space, Hegel had been predominantly concerned with a feature of the mathematics underlying both a form of geometry and Pythagorean music theory.Such a feature in relation to projective geometry is not difficult to show, as can be seen by comparing the images in Figure 3 below.which there is no official place for "singular" as opposed to "particular" judgm (p. 1).
Hegel had appealed to Plato's cosmology in his Dissertation at Jena in 1801 treated with derision by many of his contemporaries.However, I suggest that, on of his reading of Plato's own application of the three musical means to the geo three-dimensional space, Hegel had been predominantly concerned with a featu mathematics underlying both a form of geometry and Pythagorean music theor feature in relation to projective geometry is not difficult to show, as can be seen paring the images in Figure 3 below.Introductory books in projective geometry typically include diagrams such a of train tracks receding into the distance as in Figure 3a.While the sleepers on w tracks rest are of equal length, they appear in the diagram to progressively shrin the parallel tracks resting on them appear to converge so as to meet at a vanishi But when the neck of a guitar is looked at from a certain angle from the bottom e Figure 3b, while the strings, which are (approximately) parallel, appear to conv similar way to the train tracks, the gaps between the frets do not shrink in the wa the railway sleepers.In fact, if the observer correctly sets the viewing angle, they appear equidistant.This is because objectively, the frets are not evenly spaced sleepers as seen in Figure 3c but grow further apart as one moves up the neck body of the guitar to the headstock (as in Figure 3d) and thus compensate for shortening.In both phenomena, there is a superimposition of an arithmetic s which advances like the sleepers on a railway track, and a geometric sequence, w vances like the sequence of frets of the neck of a guitar.Thus, in receding train t objective arithmetic sequence is projected onto an apparent geometric one, whi guitar neck, when viewed from the appropriate angle, an objective geometric seq fret spacings is projected onto an apparent arithmetic one.
This superimposition of geometric and arithmetic sequences is present in m pler form in the sequence of tonic, fourth, fifth, and octave in the Pythagorean sc sidered arithmetically, if one "adds" a perfect fourth to a perfect fifth, a full octave However, when considered as intervals in a geometric sequence, a complete octav from the multiplication of the two intra-octaval intervals, just as 32 We sh be surprised, then, that the "musical tetraktys" turns out to be an instance of the invariant in projective geometry.Introductory books in projective geometry typically include diagrams such as images of train tracks receding into the distance as in Figure 3a.While the sleepers on which the tracks rest are of equal length, they appear in the diagram to progressively shrink, while the parallel tracks resting on them appear to converge so as to meet at a vanishing point.But when the neck of a guitar is looked at from a certain angle from the bottom end, as in Figure 3b, while the strings, which are (approximately) parallel, appear to converge in a similar way to the train tracks, the gaps between the frets do not shrink in the way seen in the railway sleepers.In fact, if the observer correctly sets the viewing angle, they actually appear equidistant.This is because objectively, the frets are not evenly spaced like the sleepers as seen in Figure 3c but grow further apart as one moves up the neck from the body of the guitar to the headstock (as in Figure 3d) and thus compensate for the foreshortening.In both phenomena, there is a superimposition of an arithmetic sequence, which advances like the sleepers on a railway track, and a geometric sequence, which advances like the sequence of frets of the neck of a guitar.Thus, in receding train tracks, an objective arithmetic sequence is projected onto an apparent geometric one, while on the guitar neck, when viewed from the appropriate angle, an objective geometric sequence of fret spacings is projected onto an apparent arithmetic one.
This superimposition of geometric and arithmetic sequences is present in much simpler form in the sequence of tonic, fourth, fifth, and octave in the Pythagorean scale.Considered arithmetically, if one "adds" a perfect fourth to a perfect fifth, a full octave results. 31However, when considered as intervals in a geometric sequence, a complete octave results from the multiplication of the two intra-octaval intervals, just as 3 2 × 4 3 = 2. 32 We should not be surprised, then, that the "musical tetraktys" turns out to be an instance of the principal invariant in projective geometry.
The Role of the Harmonic Cross-Ratio within Perspectival Representations
In the last quarter of the seventeenth century, Gottfried Leibniz would attempt to develop the types of Renaissance studies of perspective on which Desargues had drawn into a "scientia perspectiva", an "art of showing the appearance of an object in the tabula" or "plane of appearance" [46] (p.48) conceivable as the picture plane on which a painter creates a perspectival representation of an array of objects laid out on some "objective plane". 33 Like Desargues, Leibniz had aimed to abstract from the three-dimensional relationships of points in space to a type of formal two-dimensional geometry of the "plane of appearance" itself-a type of abstraction manifested, for example, in his dropping reference to any "objective plane" in relation to the tabular or "plane of appearance" [46] (p.52).This relative disinterpretation to an essentially formal axiomatic geometry would now allow a group of lines intersecting at a point-what geometers call a "pencil of rays"-to receive a variety of interpretations when reapplied to a perspectival representation.For example, such a pencil could represent parallel lines converging on some point at infinity, or, alternatively, they could represent refracted parallel light rays converging at the eye of an observer represented within the picture. 34In projective geometry, when rays from a pencil intersect with a line, the relationships among the points of intersection can be regarded as a projection of the equivalent relationships among the angles between the rays of the pencil.This allows determinate structures-"projectivities" and "perspectivities", [47] (ch.1)-to be transmitted across the plane as in Figure 4.Among these ranges of points and pencils of lines, the figure of the harmonic cross-ratio is crucial.
develop the types of Renaissance studies of perspective on which Desargues had drawn into a "scientia perspectiva", an "art of showing the appearance of an object in the tabula" or "plane of appearance" [46] (p.48) conceivable as the picture plane on which a painter creates a perspectival representation of an array of objects laid out on some "objective plane". 33Like Desargues, Leibniz had aimed to abstract from the three-dimensional relationships of points in space to a type of formal two-dimensional geometry of the "plane of appearance" itself-a type of abstraction manifested, for example, in his dropping reference to any "objective plane" in relation to the tabular or "plane of appearance" [46] (p.52).This relative disinterpretation to an essentially formal axiomatic geometry would now allow a group of lines intersecting at a point-what geometers call a "pencil of rays"-to receive a variety of interpretations when reapplied to a perspectival representation.For example, such a pencil could represent parallel lines converging on some point at infinity, or, alternatively, they could represent refracted parallel light rays converging at the eye of an observer represented within the picture. 34In projective geometry, when rays from a pencil intersect with a line, the relationships among the points of intersection can be regarded as a projection of the equivalent relationships among the angles between the rays of the pencil.This allows determinate structures-"projectivities" and "perspectivities", [47] (ch.1)-to be transmitted across the plane as in Figure 4.Among these ranges of points and pencils of lines, the figure of the harmonic cross-ratio is crucial.A pencil of rays, p, q, r, s, passing through point O, is sectioned by the (red) line l to form a range of 4 points, A, B, C, D. This pencil projects, from point O, this range onto corresponding points (A′, B′, C′, D′) on a further sectioning line, l′.Should p, q, r, s form a harmonic pencil, A, B, C, D, will form a harmonic range, as will A′, B′, C′, D′, on the line l′, onto which this first range is projected.In turn, the harmonic range, A, B, C, D, will be projected onto the pencil of lines passing through O′, p′, q′, r′, s′, making it a harmonic pencil, which in turn is projected onto the range A″, B″, C″, D″, formed on a further sectioning line l″, making it a harmonic range.Thus, if The various projectively linked pencils of rays and ranges of colinear points could thus allow for the idea of correlations among the sightlines or viewpoints of differently located subjects, like those represented in Raphael's painting, as mediated by relations among points, lines, and planes on the common objects of their vision.Moreover, these Figure 4.A pencil of rays, p, q, r, s, passing through point O, is sectioned by the (red) line l to form a range of 4 points, A, B, C, D. This pencil projects, from point O, this range onto corresponding points (A ′ , B ′ , C ′ , D ′ ) on a further sectioning line, l ′ .Should p, q, r, s form a harmonic pencil, A, B, C, D, will form a harmonic range, as will A ′ , B ′ , C ′ , D ′ , on the line l ′ , onto which this first range is projected.In turn, the harmonic range, A, B, C, D, will be projected onto the pencil of lines passing through O ′ , p ′ , q ′ , r ′ , s ′ , making it a harmonic pencil, which in turn is projected onto the range A ′′ , B ′′ , C ′′ , D ′′ , formed on a further sectioning line l ′′ , making it a harmonic range.Thus, if AB BC = AD DC , then The various projectively linked pencils of rays and ranges of colinear points could thus allow for the idea of correlations among the sightlines or viewpoints of differently located subjects, like those represented in Raphael's painting, as mediated by relations among points, lines, and planes on the common objects of their vision.Moreover, these different interpretations could be superimposed, and the points at infinity could also be understood as representing the "viewpoint" of some transcendentally located "viewer"-in the seventeenth-century context, the famed "God's-eye viewpoint", 35 the objectivity of which could be contrasted with the subjective and partial perspectival viewpoints of finite subjects located within space and time. 36The relevance of such ideas for religious thought about the relations of humans to God was clearly not lost on the likes of Pascal and Leibniz.
Pascal had apparently thought that the relations between finite points within the projective plane and its points at infinity might provide an answer to the question of our knowledge of God [48].Considered in the context of the projective plane, from the perspectives of figures within space, points at infinity are no longer conceived as entirely unreachable or "transcendent" but rather as infinite points that can enter into determinable relations to finite ones in light of the determinacies of the harmonic crossratio. 37In his later writings, however, Pascal seems to have changed his mind, and opted for a fundamental incommensurability existing between God and humans, modelled on the incommensurability, or what he described as the "heterogeneity", between discrete and continuous magnitudes [48] (Section 5).
For Leibniz, clearly his science of perspective was intended to tie into the more general epistemological and metaphysical considerations of the idea of perspective as raised in his Discourse on Metaphysics of 1686 [50].Being a "rationalist" in theology as elsewhere, he suggested that rational mechanisms were available to a finite subject to lead them, as if climbing "Jacob's ladder", to an absolute point of view.In virtue of an individual's capacity to reflect upon the factors constraining his or her own perceptual knowledge, he or she might ascend rung by rung, moving progressively away from the contingencies shaping experience of a subject within the world. 38 Leibniz at least knew about Desargues's projective geometry and certainly was familiar with one of its major theorems-"Pascal's theorem"-and his proposed perspective science included the idea of points at infinity in the form of the vanishing points of apparently converging parallel lines and similarly to represent the sightlines of individuals portrayed within a perspectival representation.However, he seems not to have had the one essential element that had allowed Desargues to create a distinct and unified non-Euclidean systematic form of geometry, the harmonic cross-ratio that Pascal had taken as showing how we might understand our links to God.But without this, for Leibniz, there was no invariant to ensure that the unity of the space being articulated by variously compounded "projectivities" and "perspectivities"-nothing to rule out the "paradoxical" types of space familiar, for example, within pictures of the Dutch printmaker M. C. Escher with their closed but infinitely ascending staircases. 39Nor was he left with any mathematical means for incorporating "points at infinity" into determinate relations with ratios of finite magnitudes.
This type of thinking had formed part of Hegel's background.He was well aware, for example, of the efforts of the Swiss mathematician Johann Heinrich Lambert to develop Leibniz's science of perspective, referring to it and criticizing it in The Science of Logic [52] (p.544). 40The harmonic cross-ratio in the particular form of the musical tetractys from Plato's Timaeus might have suggested a way forward for the intended science of Leibniz and Lambert. 41To see how this might work, we need to extend this discussion from the geometric to the logical register.One possible way forward here is to consider Hegel's logic as standing to Aristotle's formal logic in such a way as to reflect features more characteristic of the projective geometry implicit in Plato's thought than the Euclidean features of Aristotle's.
Hegel's Logic as Understood as a Projective Equivalent to Aristotle's Euclidean Syllogistic
Geometry had been the most developed science in fourth-century Athens, the context in which Aristotle developed his logic, and, as noted above [14,15], many have commented on the apparent "geometric" features of Aristotle's logic, despite the fact that it was meant to apply more generally to forms of argument beyond those found in mathematics.It is in relation to these geometric features that I want to raise the significance for Hegel of Plato's more "projective" alternative to Aristotle's "Euclidean" assumptions and to explore the logical analogues of such differences.
As we have seen, Aristotle had himself appealed to the mathematical notion of division as a way of understanding the conceptual relations found in the linguistic act of definition: "For definition is a sort of number; for it is divisible, and into indivisible parts [. ..] and number is also of this nature" [27] (1044b34-35).But Aristotle was not reducing logical or conceptual relations to mathematical ones with this, as he aspired to establish an entirely conceptual sense of magnitude, a sense that was more general than either of the particular forms of magnitude studied in mathematics-the discrete quantities of arithmetic and the continuous magnitudes of geometry. 42And to abstract from these concrete forms of magnitude was to abstract from the problem of the incommensurability holding between them.
Clearly, Plato's "syllogism" in the Timaeus cannot be understood as having been entirely abstracted from the mathematical disciplines in this Aristotelian way [25] (p.207).For Plato, incommensurability was not simply abstracted away from, but was addressed in the Pythagorean way, involving the mediated unity of otherwise incommensurable magnitudes represented by the unity holding among the three musical means.Hegel's formal logic, I have been arguing, should be seen along Platonic rather than Aristotelian lines and should show logical features that reflect Plato's rather than Aristotle's approach.I will suggest that logical equivalents of two features of projective geometry can be recognized in Hegel's formal logic here.First, the idea of a mediated unity of incommensurables is reflected in Hegel's account of judgment in which duality of otherwise incommensurable judgment types expresses the type of mediated duality found between points and lines in projective geometry.Next, one of these judgment types will presuppose a conception of the infinite that shows similarities to projective geometry's idea of "points at infinity".
(1) Hegel's incommensurable judgment forms In contrasting Aristotle's syllogism with Plato's, Hegel had noted that Aristotle had employed only one "middle term" in his syllogistic [39] (p.211)-clearly the geometric-whereas in Plato, the middle term had been "broken" or "doubled" into a duality between arithmetic and harmonic means.In the narrowly musical context, this breaking of the geometric mean into the other two reflects the fact that the production of consonant intervals within the octave required the harmonic and arithmetic means.In fact, dividing the octave at the geometric mean produces the most dissonant interval, the "tritone".This need for the geometric mean to be broken into the complementary arithmetic and harmonic means carried over into other mathematical contexts, however.It had been a prerequisite for the application of numbers to the world in astronomy, given the need for finding numerical approximations for geometric means, such as that of the numbers 1 and 2-in modern terms, of finding approximate values for numbers that, like √ 2, are only characterized by general descriptions. 43 In the logical context, the need for harmonic and arithmetic means, I suggest, turns out to reflect a "semantic" need that is something like that of finding applicable values for "numbers" such as √ 2. That is, it is required for bridging the gap between purely logical concepts and worldly items to which they are meant to apply.Thus, Hegel will distinguish between the conceptual determinations of "particularity" (Besonderheit) and "singularity" (Einzelheit) [52] (pp.529-549), the former, as alluded to by Proclus [37] (pp.175-176), relating concrete elements in terms of their "samenesses"-that is, in terms of their common properties-the latter differentiating them in terms of their "differences".
In relation to these semantic issues, the logic master at the Tübingen seminary during Hegel's time there, Gottfried Ploucquet, would, from a generally Leibnizian perspective, make essentially the same distinction as Hegel's singular-particular distinction by reference to two varieties of "particularity": "exclusive" and "comprehensive" [12] (p.128).This effectively aligns with the modern modal distinction between proper names and definite descriptions, a distinction that had been collapsed in Russell's version of classical logic, but that was reintroduced in the second half of the twentieth century as modally relevant [54].
Hegel's distinction fits this modal model.Terms instantiating "particularity" link entities in terms of their common properties: "the particular has one and the same universality as the other particulars to which it is related. . .It has no other determinateness than that posited by the universal itself" [52] (p.534).By contrast, "singularity is the concept reflecting itself out of difference into absolute negativity", "self-referring determinateness is singularity" (pp.530, 540).That is, considered in its singularity, a thing is considered in the ways that it differentiates itself from other similar things-in Ploucquet's terms, excludes other instances of the universal it instantiates.An individual human, for example, might be comprehended as "a human", or "some human", their humanness being what unites them with others.But cognized as "this human", or, perhaps, as "Socrates", the person is grasped in terms of what distinguishes him or her from others such as "that person over there", or, perhaps, Aristotle.In modern modal terms, a particular description such as "the teacher of Alexander the great" picks out whoever fits this description in "all possible worlds", including ones in which this is someone other than Aristotle, while the proper name "Aristotle" picks out Aristotle in all possible worlds, including those in which he is not the teacher of Alexander. 44This is the same play of samenesses and differences that Proclus had linked to the harmonic and arithmetic means, respectively.Within an abstractly conceptual hierarchy ordered entirely on the principle of conceptual inclusion, in order for such concepts to be applied to the world, singular and particular terms must be inserted like the insertion of arithmetic and harmonic means in a hierarchy of octaves.
Hegel's singular-particular distinction among the subjects of predication is in turn linked to a similar distinction between the predicates predicated of those subjects, and this is expressed in the different ways each receives negation.Hegel thus distinguishes "qualitative" from "reflective" forms of judgment, or "judgments of inherence" from "judgments of subsumption", 45 and here, it is the predicates of such judgments that are differentiated as singular or particular.In the former [52] (p.557), [56] ( § 166), a predicate is affirmed of some perceptually given concrete singular subject, as when "red" is predicated of some specific observable rose, picked out with the singularly quantified demonstrative "this rose". 46And of course, the red exemplified by this rose will be a specific (i.e., singular) shade of red, opposable to the redness of that rose, over there.
Negation as described by Hegel in this type of judgment is what is usually discussed as "internal", as the negation applies within the judgment and only to the predicate: the rose is not red, but some other colour [52] (p.565), while that the rose is actually a rose is not brought into question and so is beyond the scope of the negation.However, negation necessarily involves generalization and sets cognition on a path to abstraction.In the original judgment, there had also been something singular about the predicate being affirmed, but this specificity is lost in the negative form in which the predicate attributes some non-redness to the rose.While there is a way in which a specific rose is red, there are many ways in which a rose might be not red for there are many non-red colours.For Hegel, negation provides a path for abstraction and takes the judgment from the form of singularity to particularity-from this A to some A or As-and then a second negation takes this abstraction one step further to a type of abstract universality of empirical laws about all As.
In the resulting fully developed "reflective" judgment, it is the whole proposition that becomes negated "externally".This is seen in the development of particular judgments, in the sense of particularly quantified judgments of the sort, "some As are B" to the universal form "all As are B". 47As Hegel points out (and reflecting the "problem of induction"), such judgments made on an empirical basis will by necessity be about "a mere plurality which is taken for allness".What such a universal judgment in effect claims is that "if no instance of the contrary can be adduced, a plurality of cases ought to count for an allness" [52] (p.573)-that is, "All As are B" becomes equivalent to "It is not the case that some As are not B".In short, a universal reflective judgment has the form of an externally negated particular (and hence itself negative) reflective judgment.
Such external negation is, in fact, the only type of negation operative in modern classical logic, as what is conceived as being affirmed in judgment is a complete proposition with a fixed and so eternal "truth-value".It is clear, however, from his criticisms of this type of judgment form as found in Leibniz's characteristica universalis, 48 that Hegel regards such a judgment type as not properly a judgment at all.Thus, stopping short at this degree of abstraction, we are left with a duality of mutually presupposing qualitative judgments on the one hand and reflective judgments shaping universal empirical laws on the other.
As noted above, in a way that might be thought to anticipate the modern generally "falsificationist" epistemology, Hegel has construed universal empirical laws as meaningful to the degree that they can be refuted, as when the universally quantified "All As are B" is refuted by a judgment asserting the existence of some A (or As) that is (or are) not B.But Hegel's singular-particular distinction makes the applicability of such a particular judgment about some A or As dependent upon judgments about some specific A, as in "this A".In short, qualitative judgments, that are clearly "perspectival" or "contextual", cannot be eliminated in the way they are in Russell's classical logic.Such "indexical" judgments are sometimes called "self-locating" [57] (p.128) because they locate the judging subject within the spatio-temporal world as the anchor point of the various indexicals such as "this", "now", and "here".At the same time, however, a singular claim about "this A" must itself coexist with those aperspectival, but clearly fallible, law-like claims about "all As".It is here that projective geometry's alternative notion of "infinity" promises a way forward.
(2) The logical analogue of "views" from points at infinity.In his "perspective science", Leibniz had clearly wanted to link the perspectivity of perceptually based judgments to the perspectival nature of vision in a way that allowed some form of orderly sequence of abstraction-a type of "Jacob's ladder" potentially leading to the "God's eye" point of view, a point of view onto the finite world from somewhere outside it, some point at infinity.This was meant to capture a type of judgment freed from the perspectival conditioning of those made about the world from somewhere within it.This idea of a judgment from a "God's-eye view" or "view from nowhere" sits neatly with the prototypical judgment of modern classical logic-a judgment whose meaning is spelt out truth-functionally and as free of any context-dependence.The difficulty is that such judgments seem more suited to gods than to we humans, finite creatures whose judgments are always made from somewhere within the world.
Projective geometry, however, had come up with a different conception of a point at infinity, because its points at infinity could be understood via the cross-ratio relation as standing in certain determinate relations with finite points in the way that Leibniz's God's-eye view did not.This, I suggest, is reflected in Hegel's approach to judgment in that the meaningfulness of any such aperspectival view "from infinity" becomes itself dependent on its relation to the limited views from the finite points of view located within space and time.Thus a "duality" of judgment forms is found at the heart of Hegel's Logic much like that existing between points and lines in projective geometry [58].Hegel deals with the ultimate mediation of these dual judgment forms in his treatment of syllogisms, which, I have suggested, he models on the role of the harmonic cross-ratio in the projective geometry, linking finite and infinite points within the projective plane.
The transition of judgment to syllogism is made by Hegel from a distinct form of judgment called the "judgement of the concept", a type of normative perceptual judgment in which, in its initial "assertoric" form, an evaluation is made about the goodness or otherwise of the way a singular object instantiates its universal: e.g., "this house is bad", "this act is good" [52] (p.583).In this context, there is no ambiguity about the "exclusive" reading of the subject term, and this is consistent with its typically comparative nature: this house is typically judged good in contrast to that one. 49But value judgments of this sort are "problematic" in that the subjective conditioning of these judgments easily induces disagreement, and so, in the face of some counter-assertion, a judge can resort to reason giving.This expands the judgment into an "apodictic" one in which the subject-predicate relation becomes mediated by a "middle term": "the house, as so and so constituted, is good" [52] (p.585).Here, the middle term is a particular allowing a general reason to be given for the judgment, as it implies that any house so characterized would be good.It is this expanded tripartite judgment with the structure singular-particular-universal (S-P-U) that is implicitly a syllogism: S-P (this house is so and so constituted); P-U (any house so and so constituted is good); therefore, S-U (this house is good).I have suggested that this is essentially a logical translation of the interrelated arithmetic, harmonic, and geometric means as understood by Proclus.Within Hegel's structure, the house in question is thus grasped simultaneously in its singularity and its particularity-he says, in its "being" and in its "ought"-so as to express the universal "good"."That this original division [Teilung], which is the omnipotence of the concept, is equally a turning back into the concept's unity and the absolute connection of "ought" and "being" to each other, is what makes the actual into a fact; the fact's inner connection, this concrete identity, constitutes its soul" [52] (p.586).
Given the structure of Hegel's logical presentation, his implicitly syllogistic "judgment of the concept" is meant to manifest something universal about all earlier forms of judging and cognition leading up to it.First, to self-consciously judge is to affirm a judgment such that is not only endorsed as true "for oneself" but that is true in some more general sense and so true for others as well, an assumption motivating reason giving.Thus, when I judge in the simpler mode that "this rose is red", other than affirming this specific content, I commit myself to the more indefinite statement "there is a rose that is red" or "some rose is red" that could be the object of the perception of others.Moreover, I commit myself to the counterfactual that this would be the case even were I not to have experienced this rose at all.But Hegel clearly does not want to simply reduce the former qualitative and perspectival judgment to the latter reflective aperspectival one, as in the mode of modern classical logic, because this would simply eliminate the underlying incommensurability of "being" and "ought" as instantiations of conceptuality.Both qualitative and reflective judgments must be retained as necessary "moments" of this form of reasoning.
These aspects of his attitude to judgment, I suggest, show similarities to that found in the modern intuitionist attitude to mathematics and its logic [59] in the intuitionists' opposition to modern classical logic.Thus, in the manner of the intuitionists, one might argue that, for every general aperspectival statement known to be true, there must exist some qualitative judgment about a specific "witness" that is also held to be true, just as the truth of "houses, as so and so constituted, are good" must presuppose a judgment of the form "this house (the witness) is good". 50For the intuitionist, there exists no independent way to access the truth of "aperspectival" contents-the contents of Hegel's reflective judgments. 51 This means that the status of all aperspectival judgments is something like that of being "existential generalizations" from some perspectival judgment.These aperspectival judgments, I suggest, are analogous to those viewpoints "at infinity" that exist for finite viewers in the projective plane despite the fact that they are locations they cannot themselves occupy.This means that singular witness judgments need their abstract equivalents just as much as the latter need the former.In both geometry and logic, these abstract, albeit unoccupiable "points of view" are required for the coherence of the "space" in question-three-dimensional physical space in the one case and the logical space of interconnected assertions about the world, in the other. 52
Conclusions
In the early years of the movement of analytic philosophy, Hegel's logic was thoroughly criticised by Bertrand Russell, the chief early proponent of modern classical logic.Russell's efforts here were largely successful, with Hegel being from then on, for the most part, eliminated from serious consideration within logic [60] (Intro.).However, the original form of the Frege-Russell logic used to denounce Hegel would itself need modification over the coming decades in ways requiring the incorporation of elements from rival approaches [5].This need was initially rooted in concerns with the lack of an adequate semantics within classical logic's original form, a concern motivating investigations into the relevance of projective geometry as noted in the introduction.
Among the resources of projective geometry deemed significant would be the principle of duality, and it is perhaps not surprising that, within the variety of non-classical approaches to logic that would return in the decades after the introduction of classicism, distinctly dual features are apparent, which echo some of the fundamental features of Hegel's "projective" transformation of Aristotle's syllogistic.For example, Kripke's rehabilitation of the distinction between proper names and definite descriptions [54] would overlap with Hegel's use of the categorial singular-particular distinction, and the same could be said of the duality of modal and nonmodal judgment forms, as found in the tense-logic of Arthur Prior, for example, which reflects Hegel's non-reductive duality of qualitative and reflective judgments [61].Elsewhere [12] (chs.9, 10), I have drawn attention to further ways in which logic over the last century has seemed to reinvent ideas that are easily detectable in Hegel's logic, but here, I want to conclude by drawing attention to a Hegelian analogue of a feature of projective geometry that has been invoked in contrast to the static universalism of Frege's logic, which is itself seen as underlying many of the semantic problems that faced its initial formulations.This concerns the idea developed by Gunther Edel [2], that logical systems must make possible the reinterpretation of the terms of their initial object languages.
Hegel was keenly aware of the central role of reinterpretation of the concept of number in the history of mathematics from the Greeks to the modern period [12].He was aware, for example, that the original Greek concept of number had come to be reinterpreted in modern times such that in the seventeenth century there existed numbers, negative numbers, irrationals, etc., which for the Greeks were not recognizable as numbers at all.He was also clearly aware that such conceptual extensions resulted from the dynamic of the development of the sciences themselves.For example, the extension of a numerical metric to Euclidean geometry by the development of analytic geometry would be bound up with the acceptance of negative numbers because, as continuous magnitudes, lines could be naturally understood as extending in two opposed directions. 53For Hegel, such essential reinterpretability applied to all scientific concepts, not just mathematical ones, and it is expressed in the methodological shape of the process of laying out the categories of his logic.This conceptual unfolding, as I have argued, follows a process in which some initially meaningful concept is at first disinterpreted because it is found to generate logical paradoxes.Reinterpretation allows that concept's application to a different but related range of phenomena.For Hegel, such reinterpretation enabled the resolution of those particular paradoxes, allowing thought to advance [12] (ch.9).
The variable historical relations between arithmetic and geometry in antiquity had provided a prime example of this for Hegel, and it is therefore not surprising that he would have been attracted to mathematical approaches that, like projective geometry, signalled a "new relation between algebra and geometry", which were then "linked in a dialectical process" [62] (p.237).Underlying all this, I have suggested, was a grasp of the relevance of an early precursor to projective geometry for logic, Plato's music-based account of the syllogism in the Timaeus. 54 In [12], I argue against the common assumption, found in both his supporters and critics, that Hegel's Science of Logic has nothing to do with logic as it is practiced now.6 The nature of these musico-mathematical "means" will be explained below.The "canon" was a measuring device attached to a monochord and so the title refers to the proportions in which the device's string was "sectioned" or divided in experiments.8 About two thousand years later, Leibniz would also assign numbers to concepts in order to portray inference as a type of transitivity of relations of "inclusion"."For example, since man is a rational animal, if the number of animal, a, is 2, and of rational, r is 3, then the number of man, h, will be the same as ar: in this example, 2 × 3 or 6" [28] (p.17). 9 In contrast to the Mesopotamian mathematicians, the Greeks had a poorly developed sense of algebra in the sense of an arithmetically based practice of solving equations.But it is commonly said that they had an equivalent geometric form of algebra.10 That is, ratios understood as numbers (fractions), rather than relations between numbers.11 Desargues's treatise, "Brouillon project d'une atteinte aux événements des rencontres d'un cône avec un plan" (Rough Draft of an Essay on the results of taking plane sections of a cone) [8], had appeared in 1639, just two years after Descartes's Geometrie.
12 It might be thought that projective geometry concerns relations among otherwise well-defined Euclidean objects, such as between circles and ellipses, but this is not the case.A circle and an ellipse are, as projectively equivalent, the same object from a projective point of view.Consistent with this, in the nineteenth century, Felix Klein argued on the basis of group theory that projective geometry was more fundamental than Euclidean geometry, with the implicit metric of Euclidean geometry able to be defined by projective methods.13 The idea of such invariants had also been introduced by Kepler in Astronomica Pars Optica, and, in the later nineteenth century, when a variety of non-Euclidean geometries had been proposed, they would be classified in terms of the invariants specific to each.The author of this work, traditionally attributed to Plato, is now thought to be Philip of Opus, a follower of Plato at the Academy.Significantly, Philip had himself authored two works on optics [21] (p.36).16 Hegel was familiar with and possessed key works of neo-Platonic authors such as Nicomachus of Gerasa, Iamblichus, and Proclus [9].17 The harmonic mean had, prior to Archytas, been called the "sub-contrary" [hypenantia] [40] (p.283), which, in the context of Greek geometry, referred to a triangle that was similar to another but inverted, in the sense of as if having been rotated 180 • through the third dimension.18 Archytas describes the harmonic mean as holding when "the part of the third by which the middle term exceeds the third is the same as the part of the first by which the first exceeds the second" [41] (p.42).In the musical tetraktys, the part of 12 by which it exceeds 8, i.e., 4 or 1/3 of 12, is the same as the part of 6 by which it is exceeded by 8, i.e., 2 or 1/3 of 6.
19
While traditionally the problem of incommensurability has been thought to have been a consequence of "Pythagoras's theorem" concerning squares built on the sides of a right-angle triangle, recent historians have argued that it was more likely to have emerged out of music theory itself [17] (Intro., ch.1); [40] (pp.291-292); [20].20 This had not been aided by Desargues, who had invented a decidedly non-intuitive "botanical" technical vocabulary with which to describe these relations.
21
In Greek style, Desargues talks of ratios and their compoundings, whereas for a modern reader, it is more intuitive to talk of fractions and their products.22 The ratio has the value of 1 when only the absolute value of the lengths is considered as in Desargues's presentation.In the nineteenth century, the relative directions of the line-segments would be taken into account, in which case the value of the harmonic cross-ratio would be given −1, because the direction of one of the segments will always be opposite to the directions of the other three.
23
A quick calculation shows that the harmonia or musical tetraktys instantiates the harmonic cross-ratio.24 Desargues had insisted on counting the four points as an instance of the involution despite the fact that the idea of a point at infinity is "incomprehensible" [30] (p.85).Remember that Desargues had not formulated the harmonic cross-ratio in terms of fractions but ratios, and so was not faced with the problem of "dividing" by infinity.Later in the nineteenth century, worries about calculating with "infinity" would be bypassed by the introduction of homogeneous coordinates.25 The Platonist heritage of such an idea of an infinite straight line as closed, as typical of a circle, is apparent in the fifteenth-century Platonist Nicholas of Cusa who had proposed the ultimate identity of a straight line and a circle, a "coincidence of opposites", suggestive of this feature of projective geometry [43].26 For the Greeks, multiplicative relationships were fundamentally geometrical, in that the multiplication of two numbers was essentially shorthand for the area of a rectangle with sides of lengths equal to those two numbers.27 For an account of the history of the spread of these algorithms, see [44].28 It is now known that the Pythagorean musical scale had originated in Mesopotamia.29 For the interval between 6 and 12, for example, the geometric mean ( √ 72 = ~8.48528. ..) falls between the harmonic mean, 8, and the arithmetic mean, 9.
30
This passage is commonly interpreted as if Plato is referring to the problem of finding two geometric means between extremes, as in finding b and c of the extended geometric proportion a:b::b:c::c:d, a well-known problem-the "Delian problem""-concerning the calculation cube roots [23] (bk 8, prop.12), for which Archytas had provided an elaborate geometrically based solution [16] (pp.66-70).Plato does refer to geometric sequences involving squares and cubes in this context [38] (35b), but he also explicitly alludes to the interpolation of harmonic and arithmetic means between the terms of such sequences (36a-b).31 This can be appreciated in relation to the fretboard of a modern guitar, where the perfect fourth is equivalent to five steps on the fretboard above the tonic, and the perfect fifth to seven.Adding to the interval of a fifth, say C to G (7 steps), a further fourth (5 steps), results in a full octave (12 steps).32 The modern diatonic musical scale effectively extends this feature evenly over 12 equal steps or "semitones".If the pitch of the root note is again given the value 1, that of the first semitone up the scale will have the value 1 × 12 √ 2, the next note 1 × 12 √ 2 × 12 √ 2, and so on.After twelve steps, 12 √ 2 has been multiplied by itself twelve times, giving the value of 2 to the octave above the root note.If one takes the distance between the twelfth and thirteenth frets as 1 unit, the distance between the eleventh and the twelfth will be 1 × 12 √ 2 unit and so on, along the neck until the distance between the "nut" and the first fret will have the value 2 units.33 Such studies can be traced back to the Greek study of optics, an early instance of which had been carried out by Archytas of Tarentum [21].34 Leibniz had, apparently, drawn schematic eyes as located at origins of such rays in his geometric diagrams.35 According to Acuña, something like this is implicit in Wittgenstein's Tractatus in as much as the "projection" involved in the picture theory is "performed in logical space by a transcendental subject" [6] (p. 14).
36
For example, the point O' in Figure 4 could simultaneously be regarded as a point at infinity and a viewpoint onto a scene like that portrayed in Raphael's painting, with point O representing the viewpoint of a figure in the painting.37 In fact, there seem to be parallels here to Georg Cantor's attitude to "transfinite" numbers towards the end of the nineteenth century, in that his "actual" infinites had determinable properties for humans, whereas traditionally, knowledge of the infinite had been the exclusive preserve of God.Thus, Cantor had attempted to quieten his Catholic critics by distinguishing between the traditional "absolute infinite" that was the preserve of God and the actual "transfinitum" that could be cognized by humans [49] (pp.144-145).38 Later, Kant would famously argue against even conceptual possibility here [51].Human rational thought, he believed, was ultimately tethered to empirical contents by the dependence on the contribution of empirical "intuitions" received by an individual subject located in the world.Thus, Kant's equivalent "ladder" would take the climber only as far as a view of the world as a totality of objectively justified appearances, the climber being metaphysically cut off from any view of reality "as it is in itself".This ladder analogy is clearly on view in Kant's treatment of "prosyllogistic" forms of inductive inference [51] (A307-8/B364-365; A331-332/B387-389).
39
Links between Escher's art and projective geometry were explored in the twentieth century by H. M. S. Coxeter. 40 In the mid-eighteenth century, Lambert had also been involved in a public dispute with Hegel's effective logic teacher while he was as student at the Tübingen seminary, Gottfried Ploucquet, over how to develop the diagrammatic, i.e., geometric, dimensions of Leibniz's logic.
Figure 1 .
Figure 1.The vanishing point of parallels in perspective (designated by red lines) in Raphael' School of Athens.14.
14
Figure 1.The vanishing point of parallels in perspective (designated by red lines) in Raphael' School of Athens.14.
Figure 1.The vanishing point of parallels in perspective (designated by red lines) in Raphael' School of Athens.14.
Figure 1 .
Figure 1.The vanishing point of parallels in perspective (designated by red lines) in Raphael's The School of Athens.14.The figure next to Plato is Aristotle, and the foreground figures are clearly divided into two groups.The group on Plato's side includes Pythagoras (the figure writing in an opened book in the bottom left-hand corner) and that on Aristotle's side, Euclid (the corresponding figure in the bottom right-hand corner, bending over a slate and holding a compass).The implied association of Plato with Pythagoras (and the corresponding contrast with Aristotle and Euclid) is reinforced by the references within the painting to Neopythagorean music theory.In addition to reference to the Timaeus, the work in which Plato had presented his musico-mathematical cosmology, the book in which Pythagoras is writing contains a reference to a sequence of four numbers, 6, 8, 9, and 12[32] (pp.419-420), the so-called "harmonia" or "musical tetraktys" structured as a double-ratio in which 6:9 is taken as equal to 8:12[33] (p.200).In the Epinomis, Plato, or one of his followers,15 had described this structure as "granted to the human race by the blessed choir of the Muses", adding that this gift had "bestowed upon us the use of concord and symmetry to promote play in the form of rhythm and harmony"[34] (991b).Later Neoplatonists such as Nicomachus of Gerasa[35] (pp.284-285), Iamblichus of Chalcis[36] (p.50), and Proclus[37] (pp.143-145) would identify this structure with that "most beautiful bond" that Timaeus, the mythical Pythagorean astronomer of Plato's Timaeus (perhaps based on Archytas), describes as being responsible for the unity of the parts of the living cosmos[38] (31b-32a).These numbers, 6, 8, 9, and 12, had represented the spacings among the points dividing a vibrating string into the three fundamental harmonic intervals of Pythagorean music theory: the tonic, here given the value 6, the perfect fourth above it, the value of 8, the perfect fifth above it, 9, and the octave, 12.In the Berlin Lectures on the History of Philosophy, Hegel would claim that Aristotle had based his own formal syllogism on a simplified distortion of this bond supposedly binding the various parts of the living cosmos into a unity[39] (pp.209-210).Being familiar with the relevant Neoplatonic interpreters of Plato,16 Hegel had most likely followed Iamblichus and others in identifying this bond with the musical tetraktys.Further evidence for this association appears in Hegel's discussion of "ratio" or "proportion" (Verhältnis) in Book 1 of The Science of Logic, where its most developed form, the "power-proportion", has exactly the features of the inverse double-ratio structure of the harmonic cross-ratio[12] (pp.70-79).
Figure 2 .
Figure 2. Dividing an interval internally and externally by variable points in the same proporti The successive diagrams (a-d) represents the rightward movement of Y (b) through the point a infinity (c) so as to return now from the left (d).
Figure 2 .
Figure 2. Dividing an interval internally and externally by variable points in the same proportion.The successive diagrams (a-d) represents the rightward movement of Y (b) through the point at infinity (c) so as to return now from the left (d).
Figure 3 .
Figure 3.The inverse foreshortening found in the neck of a guitar.Subfigure (a) shows tr in perspective; (b), a guitar neck in perspective; (c), train tracks seen from above; (d), a gu seen from above.
Figure 3 .
Figure 3.The inverse foreshortening found in the neck of a guitar.Subfigure (a) shows train tracks in perspective; (b), a guitar neck in perspective; (c), train tracks seen from above; (d), a guitar neck seen from above.
Figure 4 .
Figure 4.A pencil of rays, p, q, r, s, passing through point O, is sectioned by the (red) line l to form a range of 4 points, A, B, C, D. This pencil projects, from point O, this range onto corresponding points (A′, B′, C′, D′) on a further sectioning line, l′.Should p, q, r, s form a harmonic pencil, A, B, C, D, will form a harmonic range, as will A′, B′, C′, D′, on the line l′, onto which this first range is projected.In turn, the harmonic range, A, B, C, D, will be projected onto the pencil of lines passing through O′, p′, q′, r′, s′, making it a harmonic pencil, which in turn is projected onto the range A″, B″, C″, D″, ′ , and ′′ ′′ ′′ ′′ = ′′ ′′ ′′ ′′ . | 19,370.4 | 2024-01-22T00:00:00.000 | [
"Philosophy"
] |
INTRODUCING A NOVEL METHOD TO SOLVE SHORTEST PATH PROBLEMS BASED ON STRUCTURE OF NETWORK USING GENETIC ALGORITHM
: The shortest path problem is widely applied in transportation, communication and computer networks. It addresses the challenges of determining a path with minimum distance, time or cost from a source to the destination. Network analysis provides strong decision support for users in searching shortest path. A lot of algorithms are designed for solving shortest path but most of them did not considered the conditions of the networks. Genetic Algorithm is a kind of Algorithm that has a lot of efficiency and it can be used for solving many kinds of problems. It also can be used based on the condition of the problems. In this paper, Genetic Algorithm is used for solving shortest path in multistage process planning (MPP) problem. New mutations as well as crossover parameters are defined for each network based on the conditions of them. The results of our experimental demonstrate the effectiveness of the models.
INTRODUCTION 1.1 Introduction
Genetic algorithms (GAs) are one of the most powerful and optimization techniques based on principles from evolution theory (Li, Sun, Tseng, & Li, 2019;Zero, Bersani, Paolucci, & Sacile, 2019).Over the past few years, the GAs community has turned much of its attention toward the optimization of network design problem (Ergenç, Eksert, & Onur, 2019;Hanh, Binh, Hoai, & Palaniswami, 2019;Kaur, Singh, & Kaur, 2019).In this paper, we summarized recent research works on network design problem by using genetic algorithms (GAs), including multistage process planning (MPP) problem, a directed acyclic network problem a local network and a real network then some genetic algorithms are introduced for solving these types of optimization problems arising in the field of network planning these genetic algorithms are applied to find shortest path in network.
MULTISTAGE PROCESS PLANNING
Multistage process planning (MPP) problem is abundant in manufacturing system.It provides a detailed description of manufacturing capability and requirements for transforming a raw stock of materials into a completed product through multistage process.The MPP problem is to find the optimal process planning among all possible alternative given certain criteria such as minimum cost, minimum time, maximum quality or under multiple of these criteria (Ahmadi, Süer, & Al-Ogaili, 2018;Bäck, Fogel, & Michalewicz, 2018).This problem can be considered in transportation network as an exclusive network that an object is flowed in an optimal path in this network.Fig. 1 shows a simple example of the MPP problem by means of network flows.
Coding
In this case, there exists an ordered network which has some steps.Each step contains some nodes as the number of these nodes at each step is equal with each other and it is constant.Permutation encoding can be used in this problem.The position of a gene is used to indicate the stage and the value of the gene is used to indicate a state at that stage.Because the state for the first stage is always fixed, so we don't need to encode this state in a chromosome.It means that for a given problem with n stages, we have the length of an encoding n-1 (Tong, Wu, Jiang, Yu, & Rao, 2017;Xiao, Xie, Kulturel-Konak, & Konak, 2017).Each point in the map is given a unique integer value index like [MN], where, M is the number of stage in the map and N is the number of state in the stage (Behzadi, Alesheikh, & Poorazizi, 2008).
For example in instance given in Fig. 1, the final rout consists 22, 31, 43 and 51.The first section of these numbers corresponds to the stages of these numbers as shown [2 3 4] and the second section of them corresponds to the state of them in their stage that can be shown [2 1 3 1].
Initial Population
Initial population is generated randomly.As considered, the length of an encoding chromosome depends on the number of stages.Each gene of this chromosome is defined by choosing randomly one of the nodes in corresponding stage.
Fitness Function
The fitness function of each individual in this problem is the sum of the weights of the connected lines between two adjacency nodes (Deo, 2017;Dib, Manier, Moalic, & Caminada, 2017).For example is one individual that the length of it is 3 and it represents the connection between i,j and j,k as is its fitness function.
Fitness Scaling
Fitness scaling converts the raw fitness scores that are returned by the fitness function to values in a range that is suitable for the selection function.The selection function uses the scaled fitness values to select the parents of the next generation.The selection function assigns a higher probability of selection to individuals with higher scaled values (Bakirtzis & Kazarlis, 2016;Emary, Zawbaa, & Hassanien, 2016).After creating the initial population, fitness values for each individual are calculated.In this research, proportional scaling is used to make the scaled value of an individual proportional to its raw fitness score.
Genetic Operator
Genetic operators mimic the process of heredity of genes to generate new offspring at each generation and play a very important role in genetic algorithm (Askarzadeh, 2016).In this problem only mutation operator is used to produce offspring.It works with the following three major steps (Malawski, Juve, Deelman, & Nabrzyski, 2015;Yu, Li, Jia, Zhang, & Wang, 2015): 1. Determine the mutated gene randomly for a given chromosome.2. Create a set of neighbour by replacing the mutated gene with all its possible states.
3. Select the best one from the neighbours as offspring.
Fig. 2 shows as an example of the neighbourhood search-based mutation method.
Fig. 2. Illustration of mutation with neighbourhood search.
Experiments
To evaluate the performance of the outlined method, we performed experiments using an instance of the PMM problem with 17 stages and 98 nodes we generated.One attribute was defined on each arc.This attributes were considered as the weight of the arcs.The objective was to minimize the total weight of path.The result of this experiment is shown in Fig. 3.
CONCLUSIONS
The main advantage of genetic algorithms is their flexibility.One thing that is striking about genetic algorithms is the richness of this form of computation.What may seem like simple changes in the algorithm often result in surprising kinds of emergent behaviour.By changing the parameters and operators of this algorithm, we can find the best and fast solution for the problem.Recent theoretical advances have also improved the understand ability of genetic algorithms and have opened the door to using more advanced analytical methods.
The research is to propose some methods to solve the shortest path problem using genetic algorithms in multistage process planning (MPP).The solution aims to achieve an increased number of successful and valid convergence using evolutionary computing techniques.
Fig. 1 .
Fig. 1.Flow network for a simple MPP problem
Fig. 3 .
Fig. 3.The solution of the multiple objective MPP problem.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-4/W18, 2019GeoSpatial Conference 2019 -Joint Conferences of SMPR and GI Research, 12-14 October 2019, Karaj, Iran | 1,615.6 | 2019-10-18T00:00:00.000 | [
"Computer Science"
] |
An Alternative Bootstrap for Proxy Vector Autoregressions
We propose a new bootstrap algorithm for inference for impulse responses in structural vector autoregressive models identified with an external proxy variable. Simulations show that the new bootstrap algorithm provides confidence intervals for impulse responses which often have more precise coverage than and similar length to the competing moving-block bootstrap intervals. An empirical example shows how the new bootstrap algorithm can be applied in the context of identifying monetary policy shocks.
Introduction
In structural vector autoregressive (VAR) analysis one strand of the literature uses external instruments, also called proxies, to identify shocks of interest (e.g., Stock & Watson, 2012;Mertens & Ravn, 2013;Piffer & Podstawski, 2018;Kilian & Lütkepohl, 2017,Chapter 15).The related models and methods are often labelled proxy VARs.In this context, frequentist inference for impulse responses is typically based on bootstrap methods.In some of the literature, the wild bootstrap (WB) is used (e.g., Mertens & Ravn, 2013;Gertler & Karadi, 2015;Carriero et al., 2015).However, work by Brüggemann et al. (2016) and Jentsch andLunsford (2019, 2021) shows that wild bootstrap methods are not asymptotically valid in this context and they propose a moving-block bootstrap (MBB) which provides asymptotically correct confidence intervals for impulse responses under very general conditions.It can cope, for example, with conditionally heteroskedastic (GARCH) VAR errors which is an advantage in many applied studies where financial data are of interest.On the other hand, (Lütkepohl and Schlaak, 2019) demonstrate by simulations that the MBB can result in confidence intervals with low coverage rates in small samples.
In this study, we propose an alternative bootstrap method for proxy VARs which is based on resampling not only the VAR residuals but also the residuals of a model for the proxy and is therefore signified as PRBB (proxy residual-based bootstrap).We show by simulation that it leads to quite precise confidence intervals for impulse responses in small samples.This makes it attractive for macroeconomic analysis where often smaller samples with less than 200 observations are available.A major advantage of the MBB is that it remains asymptotically valid even if the data exhibit conditional heteroskedasticity.Although the PRBB does not explicitly account for GARCH, we show by simulation that in small samples it may even outperform the MBB if the VAR errors are driven by a GARCH process.
The remainder of the paper is structured as follows.The proxy VAR model is presented in the next section.Estimation of proxy VAR models is considered in Sect.3. The alternative bootstrap methods considered in this study are presented in Sect. 4 and a small sample Monte Carlo comparison of the bootstrap methods is discussed in Sect. 5.An illustrative example is presented in Sect.6, Sect.7 concludes and Sect.8 discusses possible extensions.
The Proxy VAR Model
A K-dimensional reduced-form VAR process, ð2:1Þ is considered.Here m is a ðK Â 1Þ constant term and the A i , i ¼ 1; . ..; p, are ðK Â KÞ slope coefficient matrices.The reduced-form error, u t , is a zero mean white noise process with covariance matrix R u , i.e., u t $ ð0; R u Þ.The vector of structural errors, w t ¼ ðw 1t ; . ..; w Kt Þ 0 , is such that u t ¼ Bw t , where B is the nonsinguar ðK Â KÞ matrix of impact effects of the shocks on the observed variables y t .Thus, , where R w is a diagonal matrix.
If the first column, say b, of B is known, the structural impulse responses of the first shock, h i ¼ ðh 11;i ; . ..; h K1;i Þ 0 , can be computed as where the ðK Â KÞ matrices U i ¼ P i j¼1 U iÀj A j can be obtained recursively from the VAR slope coefficients using U 0 ¼ I K (e.g., (Lütkepohl, 2005, Chapter 2).In the following, the ðK Â ðH þ 1ÞÞ matrix of impulse responses, HðHÞ ¼ ½h 0 ; h 1 ; . ..; h H ¼ ½b; U 1 b; . ..; U H b; ð2:2Þ is of interest.It is assumed that the first shock increases the first variable by one unit on impact.In other words, the first component of b ¼ h 0 is assumed to be 1.
If b, R u and the reduced-form errors are given, the first structural shock can be obtained as (see Stock & Watson, 2018, Footnote 6, p. 933) or (Bruns & Lütkepohl, 2021, Appendix A.1). Suppose there is an instrumental variable z t satisfying ð2:5Þ These conditions imply that In other words, the proxy z t identifies a multiple of b.
In line with some of the proxy VAR literature (e.g., Jentsch & Lunsford, 2019or Bruns & Lütkepohl, 2021), the proxy z t is assumed to be generated as ð2:6Þ where D t is a random 0-1 variable which determines the number of nonzero values of the proxy.It is assumed to have a Bernoulli distribution, B(d), with parameter d, 0\d 1, and captures the fact that many proxies are measured only at certain announcement days or when special events occur.The D t are assumed to be stochastically independent of w 1t and the error term g t which is thought of as representing measurement error.This error term is assumed to have mean zero and variance r 2 g , i.e., g t $ ð0; r 2 g Þ, and it is distributed independently of w 1t .The parameter /, the error g t and the distribution of the Bernoulli random variable D t (i.e., the parameter d) determine the strength of the correlation between z t and w 1t and, hence, the strength of the proxy as an instrument.
The variance of Moreover, the covariance between w 1t and z t is Thus, the correlation between the proxy and the first shock declines with declining d and increasing r 2 g .
Estimation
Suppose an effective sample y 1 ; . ..; y T of size T is available for the model variables, plus all required presample values, y Àpþ1 ; . ..; y 0 .Moreover, a corresponding sample z 1 ; . ..; z T is available for the proxy.
Then the VAR(p) is estimated by bias-adjusted least squares (LS) giving estimates m; Â1 ; . ..; Âp , residuals û1 ; . ..; ûT and an error covariance matrix estimator t based on mean-adjusted residuals.Kilian (1998) shows that employing bias-adjusted LS estimators improves inference for impulse responses.Therefore we use the biasadjustment based on Pope (1990), as proposed by Kilian (1998), throughout the paper.
The first column b of B is estimated using the proxy z Moreover, the first shock is estimated as and / is estimated by LS from The estimate of / is denoted by / and the residuals are ĝt for t 2 T D and ĝt ¼ 0 for t 6 2 T D .
Bootstraps
As mentioned in the introduction, the WB and the MBB are the bootstrap methods most frequently used in the proxy VAR literature for frequentist inference for impulse responses.The WB generates asymptotically invalid confidence intervals while the MBB yields confidence intervals with the correct coverage level asymptotically under quite general conditions (Jentsch & Lunsford, 2019, 2021).It may be imprecise in small samples, however, and therefore we propose the PRBB which turns out to have better properties in small samples.The N bootstrap estimates b HðHÞ ð1Þ ; . ..; b HðHÞ ðN Þ are used to construct pointwise confidence intervals based on the relevant quantiles of the bootstrap distributions.Alternatively, percentile-t or Hall intervals could be used (see (Kilian & Lütkepohl, 2017, Section 12.2).However, the intervals based on quantiles are quite common in practice and the relative performance of the alternative bootstrap versions is not expected to depend on the type of interval used.
The samples are generated by one of the three alternative bootstrap methods, WB, MBB and PRBB, as follows: WB: For t ¼ 1; . ..; T, independent standard normal variates w t , w t $ N ð0; 1Þ, are drawn and bootstrap residuals and proxy variables are generated as The u WB t are de-meaned and multiplied by ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ffi T =ðT À Kp À 1Þ p , as in (Davidson and MacKinnon (2004, p. 597), and they are used to generate : The bootstrap residuals and proxy are re-centered columnwise by constructing z iþr for i ¼ 1; 2; . ..; ' and j ¼ 0; 1; . ..; s À 1. Then s ¼ T =' ½ of the re-centered rows of the matrix are drawn with replacement, where ½Á denotes the smallest number greater than or equal to the argument such that 's !T .These randomly drawn blocks are joined end-to-end and the first T bootstrap residuals and proxies are retained, . ..; T , starting from y MBB Àpþ1 ; . ..; y MBB 0 , which are obtained as a random draw of p consecutive values from the original sample.PRBB: Samples . ..; T , are generated, starting from y PRBB Àpþ1 , . .., y PRBB 0 , which are obtained as a random draw of p consecutive values from the original sample.Samples of the proxy are generated as where Dt is a random 0-1 variable following a Bernoulli distribution, Bð dÞ, with d being the share of non-zero observations of the proxy in the original sample.
We emphasize again that the WB does not result in asymptotically valid confidence intervals but is presented here and included in the simulation comparison in Sect. 5 because it has been used in the proxy VAR literature.The WB and the MBB draw the proxies directly from observed values and, hence, do not make assumptions on the exact DGP of z t .In contrast, the PRBB samples from the residuals of the assumed DGP for z t in (2.6) and constructs new proxy values in each bootstrap replication.In addition, while the WB by construction sets the share of non-zero observations of the bootstrap proxy equal to the share in the original sample, this is not generally the case in the MBB and the PRBB design.Apart from that, all three bootstraps are recursivedesign residual based bootstraps for generating the y t samples.In all three bootstrap algorithms, the initial values y ðnÞ Àpþ1 ; . ..; y ðnÞ 0 are a random draw of p consecutive values from the original y t sample.Alternatively, the original initial values y Àpþ1 ; . ..; y 0 could have been used as initial values for each bootstrap sample.If the y t are mean-adjusted, one could even simply use zero initial values if stationary models are under consideration.For example, Jentsch & Lunsford (2019) used zero initial values, generated more than T sample values and then dropped some burn-in values.
For the MBB a decision on the block length ' is needed.To make the asymptotic theory work, it has to be chosen such that ' ! 1 and ' 3 =T !0 as T ! 1 (see Jentsch & Lunsford, 2019, 2021).The choice is less clear in small samples.Choosing ' too small, the blocks may not capture the data features well and may result in poor confidence intervals.A small block length may not be a big problem if the VAR errors and the proxy are iid (independently, identically distributed) and, hence, no higher order moments and dependencies have to be captured within the blocks but a small ' may be a problem if there are higher order features and dependencies.On the other hand, choosing ' large undermines inference precision because there are too few blocks to choose from.Note that the number of available blocks is T À ' þ 1 and, hence, depends on the block length.Jentsch and Lunsford (2019) mention a block length of ' ¼ 5:03T 1=4 as a rule of thumb and we use this rule of thumb in our simulations in Sect. 5 and in the empirical example in Sect.6.
Small Sample Comparison of Bootstraps
In this section a small sample simulation comparison of the three bootstrap methods is presented.The simulation design is considered first and then the simulation results are discussed.
DGP 1
The first data generating process (DGP1) is similar to a DGP that has been used frequently in related work on comparing inference methods for impulse responses (e. g., Kilian, 1998;Kilian & Kim, 2011;Lütkepohl et al., 2015a, b).It is a twodimensional VAR(1) of the form: where 0\a 11 \1.The process is stable with more persistence for a 11 closer to one.The structural errors, w t , are normally distributed with mean zero and variances 4 and 1 such that w t $ N ð0; diagð4; 1ÞÞ, and u t ¼ Bw t with such that b ¼ ð1; 0:5Þ 0 .These u t errors are used to generate the y t as in Eq. ( 5.1), starting from a standard normal y 0 , i.e., y 0 $ N ð0; I 2 Þ.In the simulations, we fit VAR models of order p ¼ 1 and p ¼ 12, without constant term, to de-meaned data.
In line with the related literature (e.g., Jentsch & Lunsford, 2019), the proxy z t is generated as in Eq. (2.6), i.e., z t ¼ D t ð/w 1t þ g t Þ, where D t , / and the error g t determine the strength of the correlation between z t and w 1t and, hence, the strength of the proxy which is important for how well the impact effects of the shock can be estimated and these estimates are of central importance for estimating the impulse responses.The error term g t is generated independently of w 1t as g t $ N ð0; r 2 g Þ, with different values of r 2 g .The random variable D t has a Bernoulli distribution with parameter d, B(d), which specifies the average proportion of nonzero z t variables.D t is stochastically independent of g t and w 1t .For d ¼ 1, the proxy variable is nonzero with probability one for all sample periods t ¼ 1; . ..; T .
The parameter values used in our simulations are summarized in Table 1.We use propagation horizons up to H ¼ 20 to capture not only the short-term effects of a shock but also the longer-term effects which may still be a bit away from zero for the more persistent processes.Sample sizes T ¼ 100; 250, and 500 are considered.The Because the MBB is constructed so as to account for GARCH errors while this is not the case for the PRBB, we also generate DGP1 with GARCH errors to explore the performance of the bootstraps under conditions which may be unfavorable for the PRBB but may still be present in practice.The way the GARCH errors are generated is described in detail in Appendix A.Here we just mention that we use a bivariate GARCH process with high persistence in each component as it is often observed for financial data.
Our criteria for evaluating the bootstrap methods are the coverage precision and the lengths of the confidence intervals obtained from the bootstraps.These criteria capture main features of interest in related empirical studies and they have also been used in related small sample comparisons of bootstrap inference (e.g., Kilian & Kim, 2011;Lütkepohl & Schlaak, 2019).
DGP 2
Our second DGP (DGP2) mimics a VAR model from a study of Gertler and Karadi (2015).It is based on parameters estimated from their dataset.One of the models used by Gertler and Karadi is a four-dimensional US monthly model.We use their data from 1990M1 to 2016M6 and fit a VAR(1) model with constant term to the data.Using bias-adjusted estimates, the reduced-form parameters of DGP2 are m ¼ 0, A 1 ¼ 0:97 0:00 0:00 À 0:13 0:01 1:00 0:00 À 0:09 À0:03 0:00 1:00 À 0:53 0:02 0:00 0:00 0:91 The maximum eigenvalue of A 1 has modulus 0.9997.Thus, DGP2 is stable but very persistent.These parameters are used to generate the y t based on u t $ N ð0; R u Þ and starting from y 0 ¼ 0, the unconditional mean of the y t .
We also use a proxy with similar properties as the proxy for monetary policy shocks constructed by Gertler and Karadi (2015).More precisely, we estimate the b vector of impact effects of the first shock, giving a vector b ¼ ð1; À0:14; 0:70; 0:24Þ 0 , and estimate the parameters / and r 2 g of the model (2.6) as described in Sect. 3 from the Gertler/Karadi data with nonzero z t values and the first shock obtained from Eq. (2.3).This yields values / ¼ 0:1019 and r 2 g ¼ 0:0020 that are used for generating z t as in Eq. (2.6) with D t having a B(0.82) distribution.The parameter, d ¼ 0:82, of the Bernoulli distribution is chosen because the Gertler-Karadi proxy has nonzero values for 82% of the sample periods.The implied correlation between proxy and shock is 0.36 and, hence, it is rather low.
Note that the generation mechanism for DGP2 differs from that of DGP1, where the structural shocks are generated directly and the reduced-form data as well as the proxy are computed from the generated structural shocks and the generated g t series.
In contrast, we generate the reduced-form errors for DGP2, construct the first structural shock from the structural parameters b and the error covariance matrix R u as in equation (3.2) and then generate z t as in Eq. (2.6) with an additionally generated g t $ N ð0; r 2 g Þ.The rational behind using DGP2 is that we will also use the Gertler/Karadi data for an illustrative example in Sect.6 and, hence, the simulation results for DGP2 may be indicative of what to expect in the example.Moreover, it is, of course, of interest to see whether the results for our small bivariate process underlying DGP1 carry over to a higher-dimensional DGP.
For DGP2, we fit VAR models of orders p ¼ 1 and p ¼ 12, including a constant term, to samples of size T ¼ 200 and 500.The smallest sample size considered is a bit larger than for DGP1 to account for the larger model dimension.It is not far from the sample size used in the example in Sect.6.The number of bootstrap replications is again N ¼ 2000 and the number of Monte Carlo repetitions is R ¼ 1000, as for DGP1.
Results for DGP1
Some key findings from simulating DGP1 are presented in Figs. 1 and 2. Specifically, in Fig. 1 the implications of changing the VAR order p, the persistence of the VAR process (a 11 ) and the strength of the proxy reflected in the Bernoulli parameter d on the coverage and average length of the bootstrap confidence intervals can be seen for relatively short samples of size T ¼ 100.In Fig. 2, the impact of increasing the sample size is presented.
A main observation from Fig. 1 is that, for some designs and propagation horizons, there are clear differences in the coverage of the confidence intervals of the three bootstrap variants.The PRBB yields overall the coverage results closest to the desired 90% while the MBB tends to yield coverage rates a bit smaller and, in particular for short propagation horizons, the WB yields conservative intervals with coverage rates often larger than 90% and greater length than MBB and PRBB.Interestingly, the average lengths of the intervals of all three bootstrap methods are often very similar despite differences in coverage.For example, the PRBB intervals for short propagation horizons are in most cases very similar to the MBB intervals although the latter have a coverage which is below the PRBB coverage and it is also lower than the nominal 90%.Thus, Fig. 1 clearly shows that the PRBB tends to be more precise in terms of coverage and often it does so without sacrificing much interval length.Thus, under these two criteria it is preferable to the MBB and the WB.The latter bootstrap is often conservative and also yields larger confidence intervals than the other two bootstrap methods.
There are also some more specific results related to the VAR order and the proxy strength that can be seen in Fig. 1.
•
The VAR order p has an important impact on both the coverage and average lengths of the confidence intervals.In particular, considering the order p ¼ 12 of a short-memory process with a 11 ¼ 0:5 results in substantial over-coverage, especially for longer propagation horizons, for all three bootstrap variants.The interval lengths tend to increase for all propagation horizons and substantially so for the longer propagation horizons for all three bootstraps if the VAR order increases from p ¼ 1 to p ¼ 12 (compare the second and fourth columns of Fig. 1).
•
Comparing panels (c) and (e) as well as (d) and (f) in Fig. 1, it is apparent that the proxy strength does not have much of an effect on the coverage but partly leads to larger intervals (see in particular the average lengths of h 21 intervals for short horizons).In panels (e) and (f) in Fig. 1, the proxy has a lower correlation with the structural shock of interest due to the reduced number of event dates, d, for which the proxy is constructed.A similar result is obtained, however, if d ¼ 1 is maintained but the correlation between proxy and shock is reduced due to a larger variance r 2 g of the error term in Eq. (2.6), as can be seen in Fig. 6 in the Appendix. • The impact of higher persistence (a larger a 11 parameter) of the process can be seen by comparing panels (a) and (c) as well as (b) and (d) in Fig. 1.Generally the coverage is reduced and the intervals become larger, especially for longer propagation horizons, if a 11 increases from 0.5 to 0.95.The reduction in coverage is most severe for the MBB, while the PRBB continues to have acceptable coverage for persistent processes.In Fig. 7 of the Appendix, additional results for a 11 ¼ 0:9 are presented and it can be seen that the results for a 11 ¼ 0:9 are similar to those of a 11 ¼ 0:95.
In Fig. 2, the impact of the sample size on the confidence intervals is exhibited for the case of a persistent process with a 11 ¼ 0:95 and a relatively strong proxy with correlation 0.9 with the shock and d ¼ 1.As we saw in Fig. 1 already, in this situation the MBB has a coverage clearly smaller than the nominal 90% for p ¼ 1 and all three bootstrap methods tend to yield under-coverage for T ¼ 100.In Fig. 2 it can be seen that the coverage clearly improves for T ¼ 250 already and the coverage deficiencies largely disappear for T ¼ 500.Also, the interval lengths for all three methods become very similar and are reduced for larger sample sizes, as one would expect.Only the WB intervals for some short propagation horizons remain wider and less precise for larger samples.This result may be a reflection of the asymptotic inadmissibility of the WB.As the PRBB does not explicitly account for GARCH in the VAR residuals, we have also applied the three bootstraps to processes with GARCH errors to see how such features impact on their properties.Results corresponding to Figs. 1 and 2 are presented in Figs. 8 and 9 in the Apprendix.It turns out that for small samples of T ¼ 100 the relative performance of the three bootstraps in terms of coverage and interval length is not much affected.Despite the fact that the MBB is the only asymptotically valid procedure for this case, comparing its confidence intervals to the PRBB intervals, they have lower coverage and similar length as in the case of iid residuals.In Fig. 9 it can be seen that even for larger samples with T ¼ 500, MBB is not clearly superior to the PRBB.Thus, at least for our DGP1 the MBB does not have an advantage over PRBB even if data features such as GARCH are present that are not accounted for explicitly by the PRBB.These results suggest that for macroeconomic studies where rarely samples larger than T ¼ 500 are available, the PRBB may lead to superior inference as compared to MBB and WB.
Results for DGP2
Coverage and average interval lengths for DGP2 are depicted in Fig. 3 for sample size T ¼ 200 and in Fig. 4 for T ¼ 500.Even for the smaller sample size T ¼ 200, all coverage rates of the nominal 90% confidence intervals of WB and PRBB are between 80% and 100%, except for the long-run response of the third variable.In other words, the two bootstrap methods yield rather precise confidence intervals for three out of four variables across our Monte Carlo designs.Given the asymptotic inadmissibility of the WB, this result may, of course, not be generalizable to other simulation designs.
Even the MBB has coverage rates above 80% for variables 1, 2 and 4 and propagation horizons up to 30 periods when T ¼ 200.Thus, even the MBB is relatively precise in terms of coverage for three out of four variables.There are, however, differences in interval lengths among the three bootstraps.Typically, the WB intervals are a bit longer than the MBB and PRBB intervals which are often close together on average.Overall the performance of the WB is inferior to MBB and PRBB.Thus, although there is often not much to choose between MBB and PRBB in terms of coverage and interval length, it is remarkable that the PRBB tends to have typically coverage rates closer to 90% than the MBB.Thus, even for the higherdimensional DGP2, PRBB performs well relative to its competitors, at least for three of the four variables.
For the third variable, VAR order p ¼ 1 and T ¼ 200 all three bootstrap methods yield coverage rates below 80% for a propagation horizon of 48 periods.For T ¼ 500 and p ¼ 1, only the MBB still has a coverage below 80% for long horizons (see Fig. 4).The coverage rates for long horizons are actually a bit closer to 90% for p ¼ 12, although one might expect lower precision for the larger VAR order as the larger VAR order implies a model with substantially more parameters.Even for p ¼ 12, the PRBB outperforms the MBB in terms of coverage and is almost as good in terms of interval length.
In summary, our simulations show that the WB is often conservative and yields more than the nominal coverage.In turn, its confidence intervals are often considerably larger than those of the MBB and PRBB.Thus, the WB is overall inferior to the MBB and the PRBB.Between the latter two, the PRBB is preferable because it yields typically coverage rates closer to the nominal rate than the MBB.Moreover, the PRBB confidence intervals are often about as long on average as those of the MBB.Hence, our simulations show that the PRBB has merit.In the next section, it will be applied to an illustrative example model from the literature.
Empirical Example
We consider an example based on the study of Gertler and Karadi (2015) mentioned earlier to illustrate the differences between the three bootstrap methods.One of the models used by Gertler and Karadi is a four-dimensional US monthly model for the variables (1) one-year government bond rate, (2) log consumer price index (CPI), (3) log industrial production (IP) and (4) excess bond premium.They employ the three months ahead federal funds rate future surprises as the baseline proxy to identify a monetary policy shock and they find their proxy to be a strong instrument.We reestimate their model, shortening the sample to include only periods for which all four variables and the proxy are available.This leaves us with a sample running from 1990M1 through 2016M6, i.e., the sample size is T ¼ 270.As the Gertler-Karadi proxy is autocorrelated and predictable, we pre-whiten it by regressing it on its own lags and lags of the endogenous variables and use the residuals as our proxy. 1The proxy is available in d ¼ 82% of the sample periods.Following the Gertler-Karadi baseline model specification we include a constant and 12 lags in the VAR.
Figure 5 shows the pointwise 90% confidence bands of the impulse responses to a monetary policy shock that increases the one-year-rate by 25 basis points on impact.Such a shock corresponds roughly to a one standard deviation shock in Gertler and Karadi (2015).The point estimates of the impulse responses are qualitatively in line with the findings by Gertler and Karadi (2015).2A monetary tightening induces declining point estimates of the response of industrial production and consumer prices and an increase in the excess bond premium by slightly more than 10 basis points on impact.However, the bootstrap confidence intervals indicate that the responses of industrial production and the CPI may not be significant.Clearly, the choice of the bootstrap procedure affects the widths of the confidence intervals, in line with the simulation results reported in Sect. 4.
From Fig. 5a, it is apparent that the bands estimated via PRBB and MBB tend to be either very similar or the PRBB intervals are slightly larger than the MBB intervals.This outcome is consistent with the simulation evidence, see for example Fig. 3,panel (b).Recall, however, that the shorter MBB intervals in the simulations come at the price of a lower coverage rate which may be below the nominal 90% rate.Although the interpretation of the impulse responses does not depend on the choice of bootstrap in this case, it is, of course, desirable to employ the most reliable inference procedure.Figure 5b compares the PRBB intervals to the WB intervals and shows that the intervals estimated via WB tend to be larger than the PRBB intervals which is again in line with our simulations.
Conclusions
In proxy VAR models, an external proxy variable that is correlated with a structural shock of interest and uncorrelated with all other shocks, is used for inference for the impulse responses.In this study, we have proposed a new bootstrap algorithm for such inference.So far frequentist inference in this context is typically based on the WB or the MBB.The former is not valid asymptotically and often yields rather wide PRBB versus WB Fig. 5 Pointwise bootstrap 90% confidence intervals for the empirical example confidence intervals, whereas the latter has poor coverage properties in small samples as they are often encountered in macroeconomic studies.We have proposed an alternative bootstrap method which assumes a specific model for the DGP of the proxy variable and samples from the estimated reduced-form errors and the residuals of the proxy model to generate bootstrap samples.
We have shown by simulation that our new PRBB method works well in relatively small samples.Specifically, it yields bootstrap confidence intervals for impulse responses with more accurate coverage than and similar length for comparable coverage to the WB and the MBB.Thus, it has merit for empirical studies for which only relatively small samples are available.
One advantage of the MBB is that it also works asymptotically for conditionally heteroskedastic model errors while the PRBB is not designed for such data features.In our simulations we have found, however, that the PRBB also outperforms the MBB in small samples if such data properties are present.The price paid for the additional generality of the MBB is its reduced accuracy in small samples.
Discussion and Extensions
We have proposed a new bootstrap algorithm for inference for proxy VAR models which differs from its main competitors by utilizing an explicit model for the DGP of the proxy.Thereby we achieve more accurate small sample properties of confidence intervals for impulse responses than the WB the MBB.Clearly, for using the bootstrap algorithm for applied research, the superior small sample accuracy is the key advantage of the new bootstrap algorithm, while the need for modelling the DGP of the proxy is a limitation.So far, the setup of the new bootstrap algorithm does not account for heteroskedastic or conditionally heteroskedastic VAR processes.As the MBB remains asymptotically valid even under changing volatility, it is more general than the new bootstrap algorithm with respect to asymptotic properties.As we show by simulation, the new bootstrap algorithm may still yield more precise confidence intervals than the other two bootstrap algorithms if the data exhibit changing volatility.Thus, our new bootstrap algorithm can be recommended, in particular, for macroeconometric studies where only small or medium sample sizes are available, for example, if only 20 years of quarterly or monthly data are available.In that situation, its superior accuracy makes the new bootstrap algorithm an attractive alternative to the WB and MBB.
To account for more general data features, it may be of interest to explore more general models for the proxy variable in future research.In particular, allowing for serially correlated proxies may be of interest.Moreover, accounting explicitly for changing volatility by modelling such features and designing a bootstrap algorithm accordingly may be an interesting topic for future research.Given the variety of potential deviations of real economic data from the standard model setup, it may also be a fruitful topic for future research to investigate the accuracy of the simple setup of the new bootstrap algorithm considered in this study if more elaborate data features are ignored.
:1Þ where ût are the residuals corresponding to bias-adjusted LS estimation and û1t is their first entry.The impulse response matrix HðHÞ is estimated as b HðHÞ ¼ ½ b; Û1 b; . ..; ÛH b; i ¼ 1; . ..; H: for each Monte Carlo design is R ¼ 1000 and we use N ¼ 2000 bootstrap repetitions within each replication.
Fig. 1
Fig. 1 Coverage and average lengths of alternative pointwise bootstrap 90% confidence intervals for DGP1 with iid errors and T ¼ 100
Fig. 4
Fig.4Coverage and average lengths of alternative pointwise bootstrap 90% confidence intervals for DGP2
Fig. 7
Fig. 7 Coverage and average lengths of alternative pointwise bootstrap 90% confidence intervals for DGP1 with iid errors and T ¼ 100
Fig. 8
Fig. 8 Coverage and average lengths of alternative pointwise bootstrap 90% confidence intervals for DGP1 with GARCH errors and T ¼ 100
Fig. 9
Fig. 9 Coverage and average lengths of alternative pointwise bootstrap 90% confidence intervals for DGP1 with GARCH errors and d ¼ 1, a 11 ¼ 0:95, corr ¼ 0:9 The three bootstrap versions differ in the way they generate bootstrap samples of y t and z t .Based on N bootstrap samples y H bðnÞ and stored.
: A block length '\T has to be chosen for the MBB.The blocks of length ' of the estimated residuals and proxies are arranged in the form of the matrix MBB
Table 1
Design Parameters for DGP1 g Corrðw 1t ; z t Þ H | 8,092 | 2020-11-01T00:00:00.000 | [
"Economics",
"Computer Science"
] |
Using semantic web technologies to annotate and align microarray designs.
In this paper, we annotate and align two different gene expression microarray designs using the Genomic ELement Ontology (GELO). GELO is a new ontology that leverages an existing community resource, Sequence Ontology (SO), to create views of genomically-aligned data in a semantic web environment. We start the process by mapping array probes to genomic coordinates. The coordinates represent an implicit link between the probes and multiple genomic elements, such as genes, transcripts, miRNA, and repetitive elements, which are represented using concepts in SO. We then use the RDF Query Language (SPARQL) to create explicit links between the probes and the elements. We show how the approach allows us to easily determine the element coverage and genomic overlap of the two array designs. We believe that the method will ultimately be useful for integration of cancer data across multiple omic studies. The ontology and other materials described in this paper are available at http://krauthammerlab.med.yale.edu/wiki/Gelo.
Introduction and Background
The sequencing of the human genome 1,2 and subsequent annotation initiatives 3,4 are creating a large body of information on genome accessibility (methylation and histone modifications), transcription (mRNA, ncRNA expression), and structural variations (such as inversion, duplication and translocation). [5][6][7] The task of organizing such large volumes of data becomes increasingly complex, as do the subsequent analyses of the information. [8][9][10][11] In an effort to catalog this and similar information, well over 1,000 different databases are currently actively maintained in the realm of molecular biology. 12 The problem is that many of them are neither connected nor integrated. [8][9][10]13 The area of data integration using semantic web technologies remains under active development. [13][14][15][16][17] Compared to more traditional relational database systems, the use of semantic web technologies simplifies data integration through W3C-supported knowledge representation standards such as Resource Description Framework Schema, 18 and Web Ontology Language. 19 A growing list of standardized vocabularies and data sources in RDFS and OWL, such as Gene Ontology (GO), 20 Sequence Ontology (SO), 21 and other projects within the realm of Open Biological Ontologies (OBO), 22 allow the scientific community to move away from a plethora of home-built data models towards a situation where numerous data and knowledge bases share the same or related upper level schemas. This standardization of data models is desirable to facilitate the sharing of cancer data across multiple genomic data stores. Also, in the area of human genomics, where new facts and types of facts are discovered on a regular basis, a traditional relational model of storing data becomes less than optimal. For example, to include a new type of fact, a rigidly defined relational-database table would need to be updated with additional columns to accommodate the new information. In contrast, triple stores can easily add new properties to existing information by means of subject-predicate-object triples. Finally, an additional benefit of using semantic web technologies, albeit currently underutilized, involves the possibility of implementing reasoners that can logically infer relationships among the entities in the store. Triple stores allow for queries that are not easily performed in traditional databases, such as queries across hierarchies, as in ontologies. Reasoner software can also help in performing consistency checks over complex knowledge bases using logical rules. 8 In this study we discuss the use of semantic web technology for array annotation and alignment. Most of our data is derived from cancer microarray experiments. A critical step in the microarray data integration process is the alignment of the different microarray designs to perform integrated analyses. Mapping of array probes to genomic coordinates is essential for this task. The coordinates represent an implicit link between the probes and multiple genomic elements such as genes, transcripts, miRNA, and repetitive elements which are annotated using concepts from Sequence Ontology (SO). 21 By creating explicit links between the loci of genomic elements, we are able to derive which probes and elements align. The mapping of probes to elements achieves two goals: fi rst, it links probes to gene transcripts (elements in SO), allowing for the re-annotation of the array design with the transcripts covered. Second, we can establish the overlap between the probes of two different array designs, establishing the degree of alignment.
Materials and Methods GELO
Our project provides a unique approach to linking various data in the area of molecular biology using semantic web technologies. Unlike other approaches, such as Bio2RDF 14 that rely on database identifiers, names, and synonyms to link information, we use the genomic coordinates as a biologically-meaningful scaffold to attach and align information. Creating synonyms to link disparate sources of data is a useful approach but it requires time-consuming manual curation. Our approach allows the system to automatically infer that any two elements are the same if they map to the same coordinates in a particular genome build. Consequently, any annotation pertinent to one element can be applied to the other as well. Additionally, each genomic element can be automatically represented in the context of other elements by means of relationships such as "a upstream_of b", "c on_the_same_strand_ as d ", or "e contains f ". This allows for complex queries such as for exons that are contained in a particular transcript.
Our Genomic ELement Ontology (GELO) ( Table 1 and Fig. 1) is loosely based on an Open Biological Ontology 22 project called Sequence Ontology (SO), 21 a standard for annotating regions of the human genome. "Region", an (incomplete) sub-branch of SO, is used as the basis of the GELO ontology. Our "GenomicElement", a superclass of SO's "region", subsumes all terms predefined in SO (e.g. "repetitive element", "ncRNA", etc.). The class "GenomicElement" is flexible enough, however, so that it could be used to conceptually represent any of the following: "the entire genome", "a single chromosome", "a band on a chromosome", "an n-megabase-long region", "a specific gene", "an exon", or "a unique 50-mer within the exon". A novel class "GenomicLocus" was created to provide a facility to link any "GenomicElement " to its sequence in a particular assembly of the human genome ( Table 1). The relationships "locus_of" and "has_locus" were defined to link "GenomicElement " with its biological coordinates stored in a "GenomicLocus". To describe the relative position of two instances of "GenomicLocus" in the genome, "contains" and "contained_by" (subclasses of "has_part " and "part_of ", already defined in SO) were defined in GELO. Several other properties will be defined to facilitate the relative positioning of the regions: transitive "upstream_of " and its inverse: "downstream_of ", symmetric "on_the_same_strand_as" with an analogous "on_the_opposite_strand_from", symmetric "overlaps_with", and so on. The proof of concept described in this manuscript relies only on the relationships "contains" and "contained_by". As our repository grows, other relationships not discussed in this manuscript will be added as well.
Knowledge base
We describe the process used to construct our knowledge base using GELO and a set of genomic sequences. Two sets of sequences were used, both being lists of probes from commercial Nimblegen microarrays. The two sets indicate an evolution of the microarray design as the first one was generated in 2005 (2005-04-20_Human_60mer_1in2 array) and the second in 2006 (2006-08-03_ HG18_60mer_expr). Both sets of probes are available within Nimblegen design files. The fi rst step was to take the sequences of the probes and strip them of any existing annotation. The next step involved mapping the sequences to the most current build of the human genome (hg18,) 23 using the BLAST-Like Alignment Tool (BLAT). 24 We set BLAT to find probe alignments with 50% and better similarity scores. A resulting PSL file was then parsed using a custom python script, which converted the tabular format into Subject-Predicate-Object triples, stored in the N-Triples format. 25 The N-Triples format was chosen because of its simplicity in comparison to other formats such as RDF/XML, 26 Turtle 27 or N3. 28 Within the N-Triples representation, all sequences have been represented as individuals of RDF: type "probe" (defined in SO as "SO_0000051") and all BLAT matches were represented as of RDF: type "GenomicLocus" annotated with genomic coordinates found by BLAT. All "GenomicLocus" individuals were linked to their appropriate "probe" individuals using "locus_of " relationship from GELO. To ease the retrieval of probe locations from one design file over others, the concept "ProbeSet" was defined in a separate (helper) ontology called Probe (Table 1). Two instances of "ProbeSet" were created, one aggregating probe sequences from the 2005 design file, the other aggregating probe sequences from the 2006 design file. The "part_of " relationship was used to link particular probes with their corresponding probe design.
At the time of publication our knowledge base contained information about genes, their transcripts, and the locations of introns and exons. The import of gene information was performed as follows. A list of FASTA files containing the sequences of all human transcripts was acquired from the Refseq database at NCBI (RefSeq release 33). 29 The information linking the genes with gene names, gene symbols, synonyms, etc, was acquired from two files: homo_sapiens.gene_info and gene2refseq (both available via FTP from NCBI's gene database). 30 The sequences were aligned with the latest build of the human genome (hg18) 23 using BLAT. 24 Subsequently the tabular output files of BLAT together with transcript and gene annotation from NCBI were converted to N-Triples representation using a custom python script. Within the N-Triples representation all known genes were defined as rdf: type "SO_0000704 " and all known transcripts using rdf: type "SO_0000673" (both defined in SO). All BLAT matches were represented using rdf: type "GenomicLocus" defined in GELO. Finally all gene individuals were linked to their respective transcripts using "has_transcript" relationship (using an ENTREZ helper ontology created by our group to augment GELO with gene-specific relationships, see Table 1), and all transcripts were linked to their appropriate "GenomicLocus" (i.e. BLAT mappings) using the "has_locus" relationship defined in GELO. Thus, an indirect link from genes to their respective locations was achieved. As every line of the psl file contains information about "block start" and "block end," indicative of the intron-exon structure of the transcript, this information was also included in the N-Triples file, whereas the introns were created as SO-defined "SO_0000188" instances and exons as instances of "SO_0000147". Each intron and exon were linked to their respective BLAT-determined "GenomicLocus" instances via the "has_locus" relationship.
We decided to use BigOWLIM as our storage system based on published and unpublished LUBM Benchmarks. 31 Additionally, BigOWLIM uses the Sesame API which was successfully used in other semantic web projects. [32][33][34] After loading all elements (probes, genes, introns, exons) and their respective loci on the chromosomes, we needed to determine which elements' loci overlap along the chromosome. We defined an "a contains b" semantic relationship as a relationship between any two individuals of class "GenomicLocus" such that an entire genomic sequence of an individual b can be found within the sequence of an individual a. As we were mainly interested in exploring the short 60-mer probe sequences in the context of their belonging to relatively long transcript sequences, we focused on the "contains" relationship only. Other relationships, such as "overlaps", although equally important, were assigned a lower priority and will be added to the repository in the future.
To potentially link two loci using the "contains" relationship, the knowledge base was queried using a SPARQL 35 expression (Fig. 2) to construct a new graph linking pairs of loci. The rule engine of OWLIM allows to create logic rules equivalent to the SPARQL expression listed in Figure 2. Currently, OWLIM rules do not support "bigger than" or "smaller than" constraints, but future versions will do so (Personal Communication). The idea is that the rule engine will infer the "contains" and other relationships of GELO automatically upon insertion of new "GenomicLocus" data into the knowledge base. At this time, we resorted to constructing sub-graphs using SPARQL queries.
A simple validation of the repository was done by probing the genomic vicinity of the NFκB1 gene. A query was issued to retrieve all probes and their locations for all known transcripts of the NFκB1 gene. Figure 3 shows the region with all probes retrieved plotted using the UCSC genome browser. 24 A comparison between 1) a list of probes that was originally associated with the gene during the array The ontologies described in this paper can be accessed at http://krauthammerlab.med.yale.edu/ wiki/Gelo.
Results and Discussion
The goal of our knowledge base is the alignment of genomic data from cancer high-throughput experiments. We currently work with melanoma gene expression data from two different array designs, and we are interested in aligning the results of both designs. Having constructed our knowledge base of genes and their genomic locations, we attempted to re-annotate the sequences of the two microarray designs, the 2005 design (2005-04-20_Human_60mer_1in2) with 383,468 probe sequences, and the 2006 design (2006-08-03_HG18_60mer_expr) with 381,002 probe sequences, and to determine the genomic overlap. Figure 4 shows the SPARQL query used to align the two design files and determine how many individual transcripts and genes are probed in each of the designs.
The Venn diagram in Figure 5 illustrates the query result. Not surprisingly, the 2006 design features 1947 new genes that were not included in the previous year's design (to produce the graph from the results of the SPARQL query, we used Python and R). A further examination of the genomic overlap among the two design files revealed 365 genes that were not included in the newer design. The differences among the design files could reflect the changes in the assembly of the human genome sequence as well as changes in the annotation of the sequences provided by Refseq. 29 The alignment of the two design files will enable user to determine which of the gene-specific probe sets can be compared between the two different designs.
Next, we investigated the number of probes that are contained within each of the genes. An overwhelming majority of the genes had 10 probe sequences assigned to them in the 2005 design and 8 for the 2006 design. This agreed with the prior knowledge about these microarray designs. The histograms in Figures 6A and 6B revealed a periodicity in probe count distribution. For example, there are several genes for which 20, 30, etc., probes were selected in the 2006 design. This is a reflection of the number of transcripts covered per gene. The question is whether some of these probes are duplicates. To address this question, we investigated one of the genes, NFκB1, which had a probe count corresponding to two transcripts. The illustration in Figure 3 shows the NFκB1 locus and the different array probes. It is evident that quite a few of the probes overlap: ∼7734 and ∼7720, ∼7735 and ∼7721, and so on. A further examination of the repository revealed that NFκB1 is linked to two (2005 design) and three (2006 design) uniquely identified, although completely identical, transcripts. The probe sequences were selected for each of the transcripts independently, possibly without acknowledging that they were, in fact, the same. As a result, in the 2005 design, we observe duplicates of certain probes, and in the 2006 design, triplicates of probes. Querying our knowledge base revealed that several different locations on the microarray surface store the same probe sequence. A researcher can use the information provided by the knowledge base and compare the microarray surface locations storing the same probe sequence to detect variability in the microarray data.
The alignment of the two design files can now be used to revise and supplement the incomplete annotation of the original design files. Specifically, a closer look at the original annotation included in the 2006 design revealed that probes were designed for 41,621 unique transcripts identified by either Refseq ids (20,590 transcript) or GenBank accession ids (21,031 transcripts). Unlike the GenBank accession ids, the Refseq ids correspond to well-curated consensus messenger RNA sequences. A query to our knowledge base showed that probes which were originally mapped to the 20,590 Refseq sequences are re-mapped to 24,079 Refseq ids. The 21,031 transcripts with GenBank accessions ids are re-mapped to 17,332 refseq transcripts. Overall, the 41,621 transcript ids were re-mapped to 24,644 unique Refseq ids (versus 20,590 in the original design) based on probe sequence alignment to the human genome. Figures 6C and 6D show two histograms depicting the uniqueness of probes with respect to genes. As expected, the majority of the probes in both 2005 and 2006 designs are unique, i.e. they report on expression level of just one sequence.
However, the skew of the distributions suggests the presence of many "noisy" probes whose sequences match more than one, and sometimes even more than two or three genes. The re-annotation of the 2005 and 2006 design files can report on how "promiscuous" any given probe is, which is useful for signal normalization and de-noising of the microarray data.
Another aspect of our knowledge base is the inclusion of information describing the polarity of probe and gene sequences. Figures 6E and 6F show the presence of probes in gene regions where the probes are on the opposite strand from the gene. The polarity of probe sequences with respect to gene sequences may be relevant for some experimental designs. Alternatively, in an experimental design where the relative position of the probe should not matter, the repository can be queried to find additional probes that, although anti-sense with respect to the gene, can be examined to further strengthen the evidence coming from the other, correct-sense probes.
We would also like to discuss the performance of the knowledge base, which currently stores over 39,000,000 explicit statements (triples). The query in Figure 4 technologies are evolving, and the time it takes to complete the queries will surely decrease in the future. For the purpose of our research, however, the response time was satisfactory. Overall, our knowledge base provides a biologically meaningful framework for the examination of genomic data. The potential of the semantic web to link virtually any piece of information in the context of its genomic location provides an attractive strategy for data integration and analysis in the 21st century. | 4,205.6 | 2009-01-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Suppressing The Ferroelectric Switching Barrier in Hybrid Improper Ferroelectrics
Integration of ferroelectric materials into novel technological applications requires low coercive field materials, and consequently, design strategies to reduce the ferroelectric switching barriers. In this first principles study, we show that biaxial strain, which has a strong effect on the ferroelectric ground states, can also be used to tune the switching barrier of hybrid improper ferroelectric Ruddlesden-Popper oxides. We identify the region of the strain -- tolerance factor phase diagram where this intrinsic barrier is suppressed, and show that it can be explained in relation to strain induced phase transitions to nonpolar phases.
I. INTRODUCTION
Since the discovery of ferroelectricity in BaTiO 3 , perovskite oxides have been heavily studied and utilized in applications as ferroelectric materials. Versatility of the perovskite structure allows a large number of complex oxides to be synthesized, but among those, only a small fraction are ferroelectrics [1]. A major breakthrough in perovskite-related ferroelectrics is the discovery of hybrid improper ferroelectricity (HIF) as a materials design route in 2011, which led to an explosion in the predictions of novel ferroelectric oxides [2]. Among those, the list of examples that are experimentally verified includes A 3 B 2 O 7 HIFs (Ca,Sr) 3 Ti 2 O 7 [3], (Sr,Ca) 3 Sn 2 O 7 , Sr 3 Zr 2 O 7 [4][5][6], as well as a weak ferromagnetic (Ca 0.69 Sr 0.46 Tb 1.85 Fe 2 O 7 ) [7].
Despite the prediction of ferroelectricity and observation of a polar crystal structure in many compounds, experimentally observing the switching of polarization is challenging. For example, the original HIF Ca 3 Ti 2 O 7 was reported to have a polar structure 20 years before the idea of HIFs was introduced [8], but the direct evidence of polarization switching was not observed until 2015 [3]. The reason behind the absence of switching in these materials was initially believed to be large intrinsic coercive fields, or defects in the materials, which typically increase the coercive field [9,10]. The high experimental coercive field is not surprising, because the energy scale that needs to be overcome for switching is considered to be determined by the octahedral rotations, which often have an energy scale significantly higher than that of the ferroelectric distortions in typical perovskite oxides. Switching was observed in other HIF materials with coercive fields ranging from 120 to 200 kV·cm −1 [3,4,6], and very recently, the smallest coercive field of 39 kV·cm −1 was observed in single crystals of Sr 3 Sn 2 O 7 [11]. Though these coercive fields are comparable to values suitable for integration to silicon chips (E c ≈ 50 kV·cm −1 ), applications such as high-power actuators and low-voltage logic and memory elements ask for ferroelectrics with robust polarizations *<EMAIL_ADDRESS>that can be switched by a lower coercive field [12][13][14][15]. Ultra-low coercive fields as low as 5 kV·cm −1 were observed in pulsed laser deposition grown Ca 3 Ti 2 O 7 thin films, but the reason behind this reduction (and whether it is an intrinsic or an extrinsic effect) is not clarified yet [16].
Understanding the intrinsic mechanisms that affect the coercive field of HIF materials, and finding new design strategies to reduce these fields are important for their applications. In this paper, we illustrate that strain can be an effective means to achieve this. Epitaxial strain, obtained by growing thin films on lattice mismatched substrates, has been used extensively as a way to tune the ferroelectric and dielectric properties of perovskites [17,18]. Both the octahedral rotations, and the proper ferroelectric order parameter are strongly coupled with the biaxial strain in most materials, and strain is shown to change the switching energy barrier of ferroelectrics as well. [19] HIFs are shown to undergo interesting structural phase transitions under strain as well [20], but there is no detailed study of the switching behavior of HIFs under biaxial strain. The original study on HIFs [2] showed that the lowest energy switching path and energy (which is correlated with the coercive field) is strain dependent, but the recent work that illustrate the richness of possible switching paths makes it necessary to re-evaluate the polarization switching behavior of strained HIFs [21,22].
In this study, we perform density functional theory (DFT) calculations on 13 different A 3 B 2 O 7 Ruddlesden-Popper compounds to map out the strain-tolerance factor phase diagram, and show that the strain induced nonpolar or anti-polar phases emerge in compounds with a finite range of tolerance factors. We then show, by performing nudged elastic band (NEB) calculations, that the intrinsic coherent polarization switching energy barrier decreases as the compounds get closer to phase boundaries by biaxial strain. This suppression of switching barrier is not always accompanied with a decrease in the polarization, which makes strain tuning of HIF Ruddlesden-Poppers a viable tool to obtain low coercive field ferroelectrics with a robust polarization. We also show that the tensile and compressive strains favor different switching pathways, which can be intuitively understood in terms of which octahedral rotations or tilts are favored The n = 2 Ruddlesden-Popper Structure (a) The high symmetry body-centered-tetragonal phase (I4/mmm) of A3B2O7 RP-phase perovskites. (b) Compounds with tolerance factor less than one develop octahedral rotation/tilt distortions, which are usually associated with normal modes at the X point of the Brillouin zone. (The figure shows the X + 2 mode.) These distortions double the original unit cell and symmetry becomes orthorhombic. (c) Orientations of the crystal axes in the orthorhombic cell are different from those in the high symmetry tetragonal cell. Throughout this paper, we use the axes of a pseudo-tetragonal cell (shown in black) that can be defined within the orthorhombic cell (shown in light blue). by strain.
This paper is organized as follows: We start by explaining the crystal structures and important normal modes in Subsection II A. We then present and discuss the strain -tolerance factor phase diagram of HIF RP's in Subsection II B. In Subsection II C, we present the trends of the intrinsic switching barrier as a function of strain. We conclude with a brief summary and discussions in Section III.
A. Review of Crystal Structures
The A 3 B 2 O 7 compounds considered in this study are the n = 2 members of the Ruddlesden-Popper series [23,24]. They can be considered as layered perovskites with an extra AO layer inserted after every 2 perovskite bi-layers (i.e. 4 atomic layers) along the [001] direction (Fig. 1a). The extra AO layers cause a shift by (a/2, a/2, 0) on the ab plane, and hence the structure becomes body centered tetragonal with space group I4/mmm (#139). This shift also breaks the connectivity of the oxygen octahedra, and the AO double layer is held together by mostly ionic bonds between the A-site cations and O anions. The resulting dimensional reduction has important consequences on the electronic structure and lattice response (For example, Ref.'s [25][26][27][28]). Apart from the dimensional effects, the different periodicity of the Ruddlesden-Popper phases along the layering direction (c axis, or the [001] direction) leads to a smaller Brillouin zone than ABO 3 perovskites. The equivalents of various structural instabilities that are at different points of the Brillouin zone in the ABO 3 perovskites can fold back onto the same point in A 3 B 2 O 7 Ruddlesden-Poppers, which leads to interesting couplings between them as discussed below. (This point can be qualitatively understood in analogy to a subduction problem, where a zone boundary mode of the parent group corresponds to a zone center mode of the subgroup. For example, when the unitcell of a cubic perovskite is doubled along the [001] axis as a result of cation order, the spacegroup becomes P 4/mmm and the zone boundary While there is no direct group-subgroup relationship between the Ruddlesden-Popper and perovskite structures, the n = 2 Ruddlesden-Poppers have 2 perovskite blocks in their unitcells, and it is thus possible to recognize some phonon modes folded onto the k z = 0 plane.) By far the most common structural distortions that decrease the symmetry of oxide perovskites is the oxygen octahedral rotations: About 90% of all oxide perovskites have this type of distortion in their crystal structures, which reduces the symmetry of the parent P m3m phase [29]. These distortions can be described in terms of symmetry-adapted-modes, which can be classified by irreducible representations (irreps) of the parent spacegroup P m3m [30]. The phonon modes that correspond to these distortions are the M point mode M + 2 , which is an in-phase rotation of octahedra around one axis, and the R point mode R − 5 , which is an out-of-phase rotation of octahedra around one axis. The former is denoted by a '+' superscript in the Glazer notation, such as a 0 a 0 c + , and the latter is denoted by a '−' superscript, such as a − a − a − . The most common rotation pattern that more than half of all oxide perovskites have is a − a − c + , which leads to the space group P nma (#62) [31]. Another distortion that is often significant in the P nma structure is the X − 5 out-of-phase A-site displacement. Unlike the M + 2 and R − 5 , the X − 5 often does not show up as an unstable phonon mode in the high symmetry (P m3m) phase. Rather, it is an improper order parameter, which attains a nonzero magnitude only because of a trilinear coupling in the Landau free energy The presence of F trilinear in the free energy expansion, which is imposed by group theory, guarantees a nonzero X − 5 distortion whenever the octahedral rotations M + 2 and R − 5 are present, no matter the sign of the coupling γ. Instabilities in the A 3 B 2 O 7 Ruddlesden-Poppers that are similar to the M + 2 and R − 5 normal modes in the ABO 3 perovskites give rise to a wider range of different combinations and resultant symmetries. (For simplicity, we follow the convention to refer octahedral rotations around the out-of-plane (c) axis as 'rotations' (OOR), and the rotations around the in-plane axes as 'tilts' (OOT).) One reason for this is that there is a new degree of freedom, since the body-centered primitive cell now contains two oxygen octahedra. Also, the double AO layers break the connectivity of oxygen octahedra, and hence the relative phase of neighboring octahedra on either side of the double layer is not fixed. As an example, we consider the modes relevant to the A2 1 am phase observed in Ca 3 Ti 2 O 7 and many other HIF Ruddlesden-Popper compounds in Fig. 2. In ABO 3 perovskites, there are two possible rotation patterns around, for example, the c axis: , on the other hand, there are four possibilities: The X + 2 mode corresponds to an in-phase rotation of the two octahedra in one perovskite slab that consists of 5 atomic layers, and is the primitive unit cell. However, X + 2 is a two dimensional irrep, and depending on its direction a particular pair of octahedra on either side of a double AO layer can have either in-phase or out-of-phase rotations, as shown on the left two panels of Fig. 2c. Similarly, the rotations that are out-of-phase within one perovskite slab transform as the two dimensional irrep X − 1 , as shown in the right panels of Fig. 2c. The most relevant octahedral rotation modes in A 3 B 2 O 7 all have the same wavevector: they correspond to X point normal modes. This leads to a richer set of possibilities for the modes induced by trilinear couplings compared to ABO 3 perovskites. In the trilinear coupling terms in ABO 3 perovskites, an M and an R mode has to couple with an X mode due to the translational symmetry. In A 3 B 2 O 7 compounds, on the other hand, the trilinear couplings that contain two separate X modes can contain either an M mode or a Γ mode as the third mode. (M point is denoted as the Z point in the convention of Ref. [32].) The reason is that there are two separate X points on the Brillouin zone that are related to each other via a four-fold rotation, and depending on which pair of X wavevectors are chosen, their sum can either give the Γ or the M wavevector. In Table I, we list the possible trilinear couplings between two X modes and a third mode in the A 3 B 2 O 7 structure, and in Fig. 3, we display the polarization patterns of some of these structures.
Hybrid improper ferroelectricity in the A 3 B 2 O 7 compounds emerges due to the trilinear coupling between X + 2 and X − 3 modes, which induces a polar displacement Γ − 5 . In the HIF structure with space group A2 1 am (#36), each AO layer has a polarization, which are in alternating directions within each perovskite slab, and hence cancel each other -but only partially. As a result, every per-ovskite slab between the double AO layers have a net dipole moment. These moments order in parallel and give rise to a macroscopic polarization (Fig. 3a). A different combination of the same X modes can couple to the M − 5 mode, leading to anti-parallel slab dipoles, and hence to an anti-polar phase shown in Fig. 3b. (We refer to phases with nonzero dipole moments of each perovskite slab as either polar or antipolar.) Other combinations of the X modes couple with different M modes, such as M + 5 or M + 2 , and give rise to nonpolar phases, where the dipole moments of each atomic layer cancel each other within each perovskite slab between to double AO layers (Fig. 3c-d). (We refer to phases where dipole moments of each slab are zero as 'nonpolar'.) Many of these phases are observed to emerge in various A 3 B 2 O 7 oxides under biaxial strain or equivalent doping, and are also shown to be important as intermediate states in the coherent switching of polarization [5,16,[20][21][22]35]. This is in addition to single-tilt systems observed, for example, at finite temperature [7]. In the next subsection, we draw the strain-tolerance factor phase diagram of these compounds to identify regions where these antipolar and nonpolar multi-tilt phases emerge.
B. Strain Phase Diagram
Most -more than half-of oxide perovskites have a tolerance factor of τ < 1, and attain the space group P nma at low temperatures [29]. The corresponding octahedral rotation pattern a − a − c + is also common in A 3 B 2 O 7 Ruddlesden-Poppers, and gives rise to the polar space group A2 1 am observed in HIFs. In addition to the polar phase, strain phase diagrams of these compounds often abound with transitions to nonpolar phases introduced in the preceding subsection. As an example, in Fig. 4a, we present the energy of three lowest energy structures for Sr 3 Sn 2 O 7 as a function of biaxial strain [36]. The zero temperature DFT calculations reproduce the experimentally observed room temperature phase A2 1 am in the unstrained compound. Both tensile and compressive strain decrease the energy difference between this phase and the next lowest energy state, and there are phase transitions to nonpolar phases for strain 2.5% on either direction. Similar strain driven transitions have been predicted for Sr 3 Zr 2 O 7 and Ca 3 Ti 2 O 7 HIF compounds previously, and the pattern of octahedral rotations often change under strain in the ABO 3 compounds as well. A common trend in A 2+ B 4+ O 3 perovskites is that tensile biaxial strain suppresses OOR around the out-of-plane axis, whereas compressive strain enhances it. Sr 3 Sn 2 O 7 follows a similar trend: The transition under tensile strain is to the P 4 2 /mnm phase, which has only X − 3 tilts, whereas the transition under compressive strain is to the Aeaa phase, which has only the X − 1 rotations around the c axis. The transition to these nonpolar phases is not a result of a continuous suppression of polarization by strain: the magnitudes of polarization in the A2 1 am phase on both phase boundaries are sizable, and is even enhanced under tensile strain, as shown in Fig. 4(b).
In order to elucidate the behavior of different HIF compounds under strain, in Fig. 5 we map out the strain -tolerance factor phase diagram by considering 11 different A 3 B 2 O 7 compounds. (We do not include 2 compounds with larger tolerance factors, since they do not display any OOR or OOT. Most of these compounds have been studied from first principles before, but to the best of our knowledge, this is the first time that this information is compiled to display all compounds together. We consider a strain range of ±4%, which covers the experimentally feasible range. For most of the compounds with τ < 1 that we consider, the lowest energy unstrained structure is A2 1 am, which corresponds to the HIF phase.
For 0.92 τ 1, nonpolar structures emerge under both tensile and compressive strain. We observe three different nonpolar structures: P nab and P 4 2 /mnm under tensile strain, and P nab and Aeaa under compressive strain. They correspond to the following changes in the octahedral rotations and tilts: • Compressive strain induced OOT suppression (leads to Aeaa): This is observed in Sr 3 Sn 2 O 7 and Ca 3 Ge 2 O 7 . The OOT mode amplitude drops to zero and OOR mode phase changes under compressive strain as shown in Fig. 6(c),(d).
• Tensile strain induced OOR suppression (leads to P 4 2 /mnm): This is observed in Sr 3 Zr 2 O 7 and Sr 3 Sn 2 O 7 . Similar to the first situation, but the OOR mode drops to zero under Table I. tensile strain instead of OOT mode, as shown in Fig. 6(b-c).
• Tensile/compressive strain induced OOR phase change (leads to P nab): This is observed in Ca 3 Ti 2 O 7 under both tensile and compressive strain, in Sr 3 Zr 2 O 7 under compressive strain, or in Cd 3 Ti 2 O 7 under small tensile as well as compressive strains. (Fig. 6(a-b)). Amplitudes of both the OOR and OOT mode retain non-zero, but the inphase OOR mode changes into out-of-phase manner. This structure is shown in figure 3(b). The A-site cations around two interfaces move in the opposite direction, which cancels the polarization in bulk.
Some of these transitions are explained by local measures such as the global instability index (GII), which is known to predict the octahedral rotation patterns and angles in ABO 3 perovskites successfully [29,37]. For example, the transition to P 4 2 /mnm in Sr 3 Sn 2 O 7 is coincident with the strain value above which the GII of this phase is the smallest [38]. However, the GII by itself does not explain why the polar A2 1 am structure is preferred over the Aeaa one, for these two phases have very similar GII values under compressive strain. It is possible that the interplay Ca3Zr2O7
Tolerance Factor
Strain / % FIG. 5. Phase diagram of HIF A3B2O7 compounds under biaxial strain. Red color represents ferroelectric (HIF) phase, the others are all non-polar structures. Results for Ba3Ti2O7 (t=1.06) and Ba3Ge2O7 (t=1.10), which don't display any rotation or tilting, are not shown here. Proper ferroelectric phases of large tolerance factor compounds, such as the one in Sr3Ti2O7 under large tensile strain [25,40], are not displayed either.
of GII with the long-range Coulomb interaction (which is an important factor in stabilizing the polarization in proper ferroelectrics such as BaTiO 3 [39]) is responsible of the transition to the Aeaa phase.
The transition to a single-tilt system can be explained phenomenologically by the cross term between OOR and OOT -a large OOR might suppress OOT and vice versa. All compounds in the A2 1 am follow the same aforementioned trend as many ABO 3 perovskites that compressive strain enhances OOR, whereas tensile strain enhances OOT (Fig. 6a- [20,41,42].) This trend is likely the result of the strain reducing particular B-O bond lengths, which can be increased by the OOT or OOR distortions. The lowest order cross term between the OOR and OOT in the free energy is F ∼ βR 2 T 2 (where we denote the amplitudes of rotations and tilts by R and T respectively). For fixed value of R, this term renormalizes the coefficient of the T 2 term ∼ αT 2 as ∼ (α + βR 2 ), and hence for large OOR R 2 > −α/β, the tilting instability is suppressed, and it becomes energetically favorable to have no tilts, as is the case in compressively strained Sr 3 Sn 2 O 7 in the Aeaa phase.
A phenomenological explanation of the strain induced transition to nonpolar P nab structure requires not only the biquadratic terms between the OOR and the OOT modes, but also various trilinear terms that couple these modes to other antiferrodistortive displacements [20]. It is particularly interesting that in Ca 3 Ti 2 O 7 , this transition is re-entrant in the sense that it happens under both tensile and compressive strains. The GII does not have an obvious trend that explains this transition [38], and the electrostatic interaction between the O ions on different layers is possibly important [38]. We leave the microscopic explanation of this transition to a future study.
C. Strain tuning of the ferroelectric switching barrier
Enhanced susceptibilities near second order phase transitions can be exploited to design materials with large responses, for example, magnetic permeability or dielectric constants. While no such enhancement of linear susceptibility is mandated near first order transitions, it is nevertheless possible to obtain large response near a first order phase boundary if the external field can induce the transition. Examples of demonstrations of this approach include Terfenol, Pb(Zr,Ti)O 3 , and BiFeO 3 [43][44][45]. The phase boundaries of structural transitions depend on strain very sensitively, and as a result, this approach is a promising means to enhance the response of materials via strain.
The question we focus on in this subsection is whether the ferroelectric polarization switching barrier is affected when strain is used to tune the materials to the vicinity of the polar-nonpolar phase transitions. In order to answer this question, we use the minimum energy barrier for coherent polarization switching as a proxy to the coercive electric field. While in an actual experiment defects, domain structure, as well as size and shape effects significantly alter the coercive field, trends of coherent switching barrier can be used as a first principles proxy to the trends of the coercive field [46] as explicitly shown in HfO 2 [19]. (Finite element methods which take into account the domain structure provide much lower switching barriers [47].) In practice, the coherent switching field calculated from the first principles energy barrier by assuming that the dipole moment in every unit cell of an infinite crystal switches at the same time is a gross overestimate. As a result, we don't report the electric field required for switching, but instead report only the energy barriers.
Since in the hybrid improper ferroelectric A 3 B 2 O 7 compounds the polarization emerges as an improper order parameter through a trilinear coupling with rotation and tilting modes, switching one of these two modes is necessary to switch the polarization. It was recognized as early on as in the first HIF paper that this makes different switching pathways possible, and that the corresponding energy barriers can be tuned by strain [2]. Later, the work of Nowadnick and Fennie [21] analyzed the possible roles of different switching mechanisms, and Munro et al. used the idea of distortion symmetry groups to identify other switching pathways [22,48]. Since then, the energetics of switching in various HIF compounds have been studied, for example in Ref. [49]. However, to the best of our knowledge, a comparison of different compounds and their strain dependence have not been performed yet. In Fig. 7(a-d), we show four possible polarization switching pathways. We follow the convention of the distortion symmetry groups to name these pathways [48]. This process involves identifying not only the symmetry operations shared by all images on the pathway, but also those operations that reverse the distortion, which is the polarization in this case. The latter are referred to as distortion reversal symmetries, and are denoted by a '*' superscript. For example, P n * ab means that each image along the switching path has two glide planes with translations along a and b axes; and the glide plane n * reverses the distortion. Three of the switching paths we consider (P n * ab, P b * nm and P n * am) have a similar name as their intermediate phase (up to the asterisks), because the spatial symmetry elements of the intermediate phase either remain unchanged or become reversal symmetry operation for other images. But this is not the case for P n * 2 * 1 m. All of the four are so-called 2-step switching pathways, where there exists a local minimum of energy on the switching path, as seen from Fig. 8(a), and they are the lowest ones among such paths for the 3 compounds we considered. They each have distinct intermediate states, but the same initial and final states. Since the Ruddlesden-Popper structure consists of weakly bound perovskite blocks separated by an in-terface between two rock-salt AO layers, it is possible to consider supercells extended along the [001] direction, and polarization being switched in one perovskite block at a time. This, in principle, gives rise to an infinite number of different switching pathways, the barrier energy per formula unit can be arbitrarily small (since only one block out of arbitrarily many switches at each step.) This has been observed in Ref's [22,49], where typically the 4-step switching paths have lower (but comparable) barriers than the 2-step ones, which in turn have lower barriers than the single-step paths. (The path with a very large number of steps can be considered to be a simple model of domain wall in motion along the [001] direction.) However, this does not necessarily imply that the pathway with the highest number of steps determines the coercive field, because what is more important for the switching under an electric field is the slope of the energy vs. polarization curve [46]. For simplicity, as well as computational manageability, we focus only on 2 step switching pathways.
Each of the four pathways can be reproduced within the same doubled conventional cell as the polar structure. The P n * ab and P n * am pathways (Fig. 7(a-b) P b * nm pathway involves switching the direction of the OOT mode (X − 3 ), whereas in the P n * 2 * 1 m both OOR and OOT change directions, as shown in Fig. 7(c-d). Mode decompositions of these switching pathways are given in the supplementary information [38].
In Fig. 8(a), we plot the energy as a function of the reaction coordinate for these four switching pathways in unstrained Sr 3 Sn 2 O 7 . The energy barriers are comparable and the lowest one is for the P n * am pathway. Results presented in Fig. 8(b) show how the energetics of this path behaves under tensile strain: Tensile strain monotonically decreases the P n * am switching barrier, thus lowering the expected coercive field required for switching. This is not a surprising result, since the OOR's weaken under tensile strain, as shown in Fig. 6(c) and the P n * am pathway involves a change in the OOR character. What is interesting, and important for applications, is that this reduction in the switching barrier is not accompanied with a lower polarization under tensile strain (Fig. 4). Thus, strain can be used as a means to lower the coercive field of hybrid improper ferroelectrics.
The strong strain dependence of the switching barrier is not specific only to Sr 3 Sn 2 O 7 , or the P n * am pathway. In Fig. 9(a-b), we show the barrier for different switching paths of Sr 3 Sn 2 O 7 and Ca 3 Ti 2 O 7 as a function of strain throughout the strain range that the HIF phase is stable. While the error bars in the energy barriers from the NEB calculations cause the curves to be rather rugged, two trends are evident: (i) under tensile strain, the barriers for pathways that involve changing the direction of the OOR mode (P n * am and P n * ab) are lowered, and (ii) under compressive strain, the barrier for the pathway that only involve changing the direction of the OOT mode (P b * nm) is lowered. These are consistent with the tendencies towards OOR and OOT distortions becoming weaker under tensile and compressive strain as discussed earlier. Near 0% strain, the lowest barrier pathway switches from P b * nm or P n * 2 * 1 m to either P n * ab or P n * am, and either strain direction leads to a lower coherent switching energy barrier. The lowest barriers are obtained near the phase boundaries between the polar and nonpolar phases, and the maximum suppression is about 50% in both compounds. and Ca 3 Ti 2 O 7 , and it does not display a strain induced phase transition in the strain range we considered. It does not show a strain induced change in the switching pathway, or a significant decrease in the switching barrier either (Fig. 9(c)). This is likely because this compound is very far from the phase boundaries, and with its small tolerance factor, it has such large OOR and OOT that the strain induced changes in the instabilities are inconsequential.
III. DISCUSSION
Since its discovery about a decade ago, hybrid improper ferroelectricity have provided fertile ground for first principles materials by design approaches. Experiments have also been been catching up rapidly, verifying theoretical predictions. Multiple hybrid improper ferroelectric Ruddlesden-Popper phases have already been synthesized using bulk methods (for example [4][5][6]11]). Although thin film growth of Ruddlesden-Popper phases, especially for thermodynamically unstable compositions and at large strain values, is usually challenging because of the required stoichiometry control, there has been successful demonstration of switchable HIF in PLD grown films [16], and both hybrid and conventional oxide molecular beam epitaxy have been used to synthesize phases that are not thermodynamically stable [40,50]. Current efforts focus on understanding more than the emergence of ferroelectricity, and to find ways to optimize properties such as the coercive field required for polarization switching.
In this study, we used first principles calculations to shed light on the strain-tolerance factor phase diagram of n = 2 Ruddlesden-Popper HIF's, and to come up with a design strategy for obtaining lower coherent switching energy barriers. This quantity, which we used as a proxy for the coercive field, decreases significantly when strain is used to tune the HIF's to the nonpolar phase boundaries, because of the weakening of one of the rotation or tilt modes. We further showed that this weakening, and the resulting decrease in the switching barrier, is not always accompanied with a decrease in the polarization magnitude, for example in Sr 3 Sn 2 O 7 , verifying the point made early on in Ref. [9] that a lower barrier does not necessarily mean a lower polarization. Our results thus show that biaxial strain, which has historically been used to induce ferroelectricity in many oxides, can also be used as a means to tune the coercive field of hybrid improper ferroelectrics.
A. First Principles and Other Calculations
Density functional theory calculations are performed using the projector augmented wave approach [51] as implemented in the Vienna Ab-initio Simulation Package (VASP) [52,53], and using the PBEsol generalized gradient approximation [54]. All calculations are done in a 48-atom (4 formula unit) supercell, which can be viewed as a √ 2 × √ 2 × 2 multiple of the primitive cell of the reference I4/mmm structure. A Γ-centered 6 × 6 × 2 grid of k-points is used for the Brillouin zone integrals.
We consider all A 3 B 2 O 7 compounds with A = Ca, Sr, Ba and B = Ti, Zr, Sn, Ge, as well as Cd 3 Ti 2 O 7 [4-6, 35, 55-58]. These compounds are all band insulators with sizable gaps, so using the PBEsol generalized gradient approximation is expected to reproduce the crystal structures with reasonable accuracy. Biaxial strain boundary conditions are simulated by fixing the in-plane lattice constants, and allowing the out of plane component, as well as internal atomic positions, to relax with an force threshold of 2 meV /Å. The zero strain is defined for each compound by the a lattice constant obtained by completely relaxing the structure in the reference high symmetry structure I4/mmm.
The Goldschmidt tolerance factor [59], which is used as a simple measure to predict tendency towards octahedral rotations, and is originally defined in terms of the ionic radii r using is instead calculated using the bond lengths for 12 coordinated A-site (d AO ) and 6 coordinated B-site (d BO ) ions from the bond valence model as (This approach is following Ref. [29].) In order to calculate the minimum energy barrier for polarization switching, climbing-image nudged elastic band (CI-NEB) method was used to further relax linearly interpolated switching paths to the minimum energy path [60]. The spring constant was set to 5 eV/Å 2 , and a convergence criterion of 1 meV per supercell was used. Distortion symmetry groups [48,61] are used to enumerate and name the possible initial pathways following Ref. [22] with the help of the DiSPy package [62]. All the switching pathways reported in the text retain their symmetry for all values of the reaction coordinate under NEB calculation.
As various points in this paper, symmetry and group theory related arguments are built using the Isotropy Software Package [63] and the Bilbao Crystallographic Server [64][65][66]. VESTA software was used for visualization of crystal structures. [67] V. DATA AVAILABILITY Data for the phase diagram and the switching paths are available at the Data Repository for University of Minnesota at https://doi.org/10.13020/hvr3-bg02 . | 8,066.8 | 2020-09-26T00:00:00.000 | [
"Physics"
] |
Superfluid transition and specific heat of the 2D x-y model: Monte Carlo simulation
We present results of large-scale Monte Carlo simulations of the 2D classical x-y model on the square lattice. We obtain high accuracy results for the superfluid fraction and for the specific heat as a function of temperature, for systems of linear size L up to 4096. Our estimate for the superfluid transition temperature is consistent with those furnished in all previous studies. The specific heat displays a well-defined peak, whose shape and position are independent of the size of the lattice for L>256, within the statistical uncertainties of our calculations. The implications of these results on the interpretation of experiments on adsorbed thin films of He-4 are discussed.
Introduction
The two-dimensional classical x-y model is the simplest model to exhibit a Kosterlitz-Thouless (KT) transition [1][2][3][4]. The KT universality class includes the superfluid phase transition in two dimensions (2D) which is a subject of ongoing experimental and theoretical investigation, chiefly in the context of thin films of 4 He adsorbed on a wide variety of substrates [5][6][7][8][9][10]. Theoretical results obtained by studying the x-y model, typically by computer simulations, are utilized both to ascertain whether a particular physical system experimentally investigated, believed to be in the same universality class, conforms with the KT paradigm, as well as to predict the behavior of systems yet unexplored [11][12][13][14]. Decades of computer simulation studies of the 2D x-y model, carried out on square lattices of size as large as L = 2 16 [15], have yielded very precise estimates of the superfluid transition temperature T c and of the critical exponents associated with the transition [15][16][17][18][19][20][21][22][23].
Less extensively investigated is the behavior of the specific heat, which displays an anomaly at a temperature ∼ 17% above T c , in numerical simulations of the x-y model on square lattices of size L = 2 8 [19]. The position of the peak appears to depend weakly on the size of the simulated lattice, but to our knowledge no systematic study has yet been carried out, aimed at establishing whether such an anomaly occurs in the thermodynamic limit, and its actual location. There have been also speculations that the width of the peak may narrow in the thermodynamic limit, and the peak itself may evolve into a cusp [17]. There is presently no consensus regarding the physical interpretation of such an anomaly, which does not appear to signal the occurrence of any phase transition. Interestingly, experiments on 4 He monolayers [24], as well as computer simulations [25] (including of 2D 4 He [26]) have also yielded evidence of a peak in the specific heat at temperature above the superfluid transition temperature.
To our knowledge no further studies have been carried out of the specific heat, beyond that of Ref. [19], aimed at establishing any possible shift in temperature of the peak, as the lattice size is increased, as well as the general shape of the curve. One reason for this state of affairs is that the calculation of the specific heat in direct numerical (Monte Carlo) simulations is affected by relative large statistical uncertainties, due to the inherent "noisiness" of the presently known specific heat estimators. However, the (almost) three decades elapsed since the publication of Ref. [19] have witnessed both an impressive increase in the available computing power, as well as the development of more efficient and sophisticated simulation methods. In light of that, it seems worthwhile to revisit this issue, which is of potential experimental relevance, as the x-y model is sometimes invoked in the interpretation of measurements of the specific heat of thin 4 He films, as well as utilized predictively, in the same context [13,14].
In this article, we report results of large scale computer simulations of the 2D x-y model on the square lattice, performed using the Worm Algorithm in the lattice flow representation [27]. We carried out simulations on square lattices of size up to L = 2 12 . Our main aim is to study the specific heat, and provide robust, reliable information about its behavior in the thermodynamic limit; in order to validate our study, we also computed the superfluid transition temperature and spin correlations, comparing them to the most recent theoretical estimates. We obtain a value of T c consistent with that of Ref. [15], which is presently the most accurate published result. We present strong numerical evidence confirming the presence of the specific heat anomaly in the thermodynamic limit, its shape remaining essentially unchanged with respect to that on a lattice of size L = 2 8 . We estimate the position of the peak of the specific heat in the thermodynamic limit to be at temperature 1.043(4) (in units of the coupling constant).
This paper is organized as follows. In Sec. 2, we briefly describe our computational methodology, in Sec. 3, we analyze the MC data and present the results. We outline our conclusions in Sec. 4.
Model and Methodology
The Hamiltonian of the classical x-y model is given by where the sum runs over all pair of nearest-neighboring sites, and s i ≡ s(cos θ i , sin θ i ) is a classical spin variable associated with site i. We assume a square lattice of N = L × L sites, with periodic boundary conditions. Henceforth, we shall take our energy (and temperature) unit to be Js 2 . We investigate the low-temperature physics of this model by carrying out classical Monte Carlo simulations, based on the methodology mentioned above, which is extensively described in the original reference [27] and will therefore not be reviewed here. Details of our calculations are standard. As mentioned above, an important part of this work consists of studying the superfluid transition, in order to compare our results with those of existing studies and gauge therefore the accuracy of our methodology. We determine the superfluid transition temperature T c in two different ways. The first consists of computing the superfluid fraction ρ s (L, T) on a lattice of size L, as a function of temperature, using the well-known winding number estimator [28]. We then determine a size-dependent transition temperature T c (L) based on the universal jump condition [29] ρ s (L, where f r = 1 − 16πe −4π [30]. Eq. 2 can be used to obtain an estimate of the transition temperature (T c (L)) on a lattice of finite size. In order to extrapolate the value of T c (L) to the thermodynamic (L → ∞) limit (referred to as T c ), we fit the results for T c (L), obtained for different system sizes to the expression [31]: where a, b are constant. It should be noted that other expressions have been proposed, aimed at extracting T c [23]; we come back to this point when discussing our results. The superfluid transition temperature can also be inferred from the behavior of the spin correlation function [32], specifically from the divergence of the correlation length ξ as T → T c , namely [1][2][3]: where A, c are constant (c ≈ 1.5 [32]), and t = (T−T c ) T c is the reduced temperature. Above T c , the correlation length ξ(T) can be obtained from the computed correlation function by means of a simple fitting procedure, illustrated in Ref. [16]. Using the best fit to Eq. (4), an estimate of T c is obtained; the accuracy of the estimated T c increases with the size of the system studied. The estimates of T c obtained in the two ways illustrated above are consistent within their statistical uncertainties; however, we find that the first procedure, based on the universal jump of the superfluid fraction, affords a considerably more accurate determination of T c .
Moreover, we calculate the specific heat (i.e., the heat capacity per site) through the direct estimator of the heat capacity [33], based on the mean-squared fluctuations of the total energy E: where β = 1/T is the inverse temperature. This estimator is numerically "noisy", and for this reason the numerical differentiation of the computed energy values with respect to the temperature has often been preferred [16]. In our case, however, we found it possible to obtain reasonably accurate estimates of the specific heat using Eq. (5), thanks to the available computing facilities and the methodology adopted.
Results
We begin the presentation of our results by illustrating our estimates for the superfluid fraction as a function of temperature for the various lattice sizes considered, and by discussing the determination of the transition temperature, which we compare to those provided in other works. Fig. 1 shows the computed value of ρ s (L, T); the critical temperature T c (L) for a given system size is determined by the universal jump condition, namely the intersection of the ρ s (L, T) curve with the straight line given by the right hand side of Eq. 2. The intersection point is estimated by drawing a straight line between the two adjacent values of ρ s (L, T) within which the intersection can be established to take place, within the precision of our calculation.
As expected, both T c (L) and ρ s (L, T c (L)) display a slow decrease as a function of L. In order to extrapolate the value of T c in the thermodynamic (L → ∞) limit, we fit the computed T c (L) to Eq. (3), as proposed in Ref. [15]. This procedure is illustrated in Fig. 2. Our estimate is T c = 0.8935 (5), which is consistent with that of Ref. [15], namely 0.89289(5), even though their quoted uncertainty is a factor ten smaller than ours, a fact to be ascribed to the significantly (sixteen times) greater system sizes adopted therein. Our estimate of T c is also in perfect agreement with the more recent result of Ref. [23], in which the same computational methodology utilized in this work was adopted, on the same system sizes. Their results are of precision comparable to ours; they make use of a different, more elaborate fitting form for T c (L), but their resulting estimate for T c is entirely consistent with ours, and has the same uncertainty.
As mentioned in Sec. 2, as a further check of our results we estimated the critical temperature T c independently through the spin correlation length. In this case, the computed spin correlation function for a given system size yields an estimate of T c , obtained first by extracting the correlation length ξ(T) as a function of temperature, and then fitting the results to Eq. 4. An example of this procedure is shown in Fig. 3, for the largest system size considered here, which is the one that yields the estimate of T c with the smallest uncertainty. Such an estimate, namely 0.893 (3), is consistent with that obtained from the superfluid fraction, but it is considerably less accurate. Our result for T c gives us sufficient confidence on the reliability of our data and simulation. Therefore, we now discuss the most important part of this work, namely the behavior of the specific heat C(T). It is worth restating that previous numerical studies of the 2D x-y model [16,19] have yielded results for this quantity only for square lattices of size up to L = 256. Such studies yielded evidence of a peak in the specific heat at a temperature above T c ; the position of this peak depends fairly strongly on system size for L ≤ 128. On the other hand, the shape of C(T) appears to change little going from L = 128 to L = 256, suggesting that the anomaly may indeed be a genuine physical feature of the model and not an artifact of numerical simulations carried out on finite systems of small size. Fig. 4 shows our results for the specific heat for the various system sizes, showing that the curve indeed appears to stabilize for L > 256. The inset of Fig. 4 shows the position of the peak, which, within the statistical uncertainties of our calculation, is independent on system size. Our best estimate of the peak position is T P = 1.043(4) = 1.167(1) T c . The height of the peak is approximately 1.45. Altogether, therefore, our simulations aimed at characterizing quantitatively the specific heat anomaly, carried out on lattices of significantly greater size than those of all previous studies (of the specific heat), revise the position of the peak to a slightly higher temperature, and reduce its height by a few percent. However, the presence of the anomaly, its overall shape, the fact that it remains broad (i.e., it does not approach a cusp in the thermodynamic limit) and that it occurs at a different temperature than the superfluid transition, can in our view be regarded at this point as well-established. It is worth reminding that the presence of such an anomaly has been theoretically predicted using different approaches, furnishing results in quantitative agreement with those of Monte Carlo simulations [34].
Conclusions
Summarizing, we have carried out extensive Monte Carlo simulations of the 2D x-y model, making use of the Worm Algorithm. The twofold purpose of our study was that of assessing the effectiveness of the methodology, which to our knowledge has not yet been applied this particular model (we became aware of Ref. [23] while this work was in progress), as well as consolidating existing theoretical results for the specific heat. We have simulated the model on lattices of linear size up to L = 4096, obtaining for the superfluid transition temperature results of accuracy comparable to that yielded by the most recent numerical simulations, making use of standard computational resources. For the specific heat, the largest system size for which we report results is sixteen times greater than that for which Monte Carlo estimates have been published. Our results confirm the existence of a specific heat anomaly, namely a peak, occurring at a temperature ∼ 17% higher than that at which the superfluid transition temperature takes place. It is interesting to compare this to 2D 4 He, for which the peak in the specific heat observed in computer simulations [26] is located at T ∼ 1.6 T c .
It has been suggested [16,35,36] that the temperature dependence of the specific heat correlates with that of the vortex density above the critical temperature. In this case, one could expect a similar specific heat anomaly, which is not indicative of a phase transition, to occur in physical systems such as atomically thin 4 He films, which approach the 2D limit and display superfluid transitions that conform with the KT paradigm. Indeed, this may help in the interpretation of specific heat data for 4 He films adsorbed on graphite, where Figure 3. The correlation length ξ as a function of the temperature, for a system of size L = 4096. The solid line is a fit to the data using expression (4). Inset shows the computed spin correlation function G(r) for a temperature T = 0.96. similar features (peaks) are often interpreted as signalling phase transitions (e.g., melting of commensurate solid phases, see for instance Ref. [25]).
Author Contributions:
The authors contributed equally to this work.
Funding: This research was funded by the Natural Science and Engineering Research Council of Canada. Computing support from ComputeCanada is gratefully acknowledged.
Data Availability Statement:
The computer codes utilized to obtain the results can be obtained by contacting the authors.
Acknowledgments:
The authors wish to thank Prof. Y. Deng for sharing the results of his independent investigation. | 3,635.8 | 2021-05-28T00:00:00.000 | [
"Physics"
] |
Nanocoral-like Polyaniline-Modified Graphene-Based Electrochemical Paper-Based Analytical Device for a Portable Electrochemical Sensor for Xylazine Detection
A portable electrochemical device for xylazine detection is presented for the first time. An electrochemical paper-based analytical device (ePAD) was integrated with a smartphone. The fabrication of the ePAD involved wax printing, low-tack transfer tape, and cutting and screen-printing techniques. Graphene ink was coated on the substrate and modified with nanocoral-like polyaniline, providing an electron transfer medium with a larger effective surface area that promoted charge transfer. The conductive ink on the ePAD presented a thickness of 25.0 ± 0.9 μm for an effective surface area of 0.374 cm2. This sensor was then tested directly on xylazine using differential pulse voltammetry. Two linear responses were obtained: from 0.2 to 5 μg mL–1 and from 5 to 100 μg mL–1. The detection limit was 0.06 μg mL–1. Reproducibility was tested on 10 preparations. The relative standard deviation was less than 5%. The applicability of the sensor was evaluated with beverage samples spiked with trace xylazine. Recoveries ranged from 84 ± 4 to 105 ± 2%. The developed sensor demonstrated excellent accuracy in the detection of trace xylazine. It would be possible to develop the portable system to detect various illicit drugs to aid forensic investigations.
■ INTRODUCTION
Drug abuse is a common public health problem, threatening body, life, and property. The misuse of veterinary drugs has been recently reported, and one specific, non-opioid, sedative drug conventionally used for analgesia, hypnosis, and muscle relaxation 1 has been highlighted, namely, xylazine. This veterinary drug has been used by criminals in robbery and rape cases due to its colorless, odorless, and tasteless nature. A victim might not be able to detect the drug in a spiked drink.
A strong depressant on the human central nervous system, xylazine [N-(2,6-dimethyl phenyl)-5,6-dihydro-4H-1,3-thiazin-2-amine] was initially synthesized for use in the treatment of hypertension. 2,3 It induced bradycardia, hypotension, and transient hyperglycemia. 4 Due to its effects and implications, the Food and Drug Administration restricted xylazine for human use. 3,5 Currently, xylazine can only be used for analgesic, anesthetic, and sedative purposes in cattle, sheep, goats, horses, cats, and primates. However, it has been illegally traded and used in crime. 6,7 In humans, xylazine causes drowsiness, diarrhea, muscle relaxation, and pain relief. 3,8 It primarily affects the central nervous system, and depending on the dosage, it causes exhaustion, sleepiness, muscle weakness, and a reduction in the respiratory rate. 8−10 Generally, xylazine is metabolized, absorbed, and excreted rapidly. 11 Symptoms due to the administration of xylazine appear within minutes and can last up to 4 h. 3,12 Xylazine has been reported to cause initial hypertension, which then decreases, stabilizes, and leads to arrhythmia. 9 The effects of xylazine could be attenuated, blocked, and reversed with the α2-adrenergic antagonist yohimbine. 13 In the past decade, xylazine became a popular recreational drug worldwide 3,14 and has been widely used to adulterate illicit drugs such as cocaine, heroin, and speedball (a mixture of cocaine and heroin). 15 However, the toxic effects of xylazine in combination with heroin and/or cocaine or other drugs in humans have remained unexplored due to the restriction of its administration to humans. 15,16 The analytical techniques to determine xylazine proposed in the literature have included high-performance liquid chromatography with ultraviolet absorbance detection, 17 gas chromatography coupled with mass spectrometry, 6 and liquid chromatography with mass spectrometry. 11,18 Although these techniques are very sensitive and selective, they are timeconsuming and require costly instrumentation, sophisticated analyses, and specialized operators. For that reason, electrochemical methods have gained increasing attention for their simplicity, rapidity, sensitivity, cost-effectiveness, and suitability for field analysis. However, current research on the electrochemical detection of xylazine is rare. A glassy carbon electrode (GCE) 19 and a modified carbon paste electrode 20 were developed for the electrochemical determination of xylazine. However, the peak oxidation of xylazine reported in these works was observed at high potentials (0.85 and 1.00 V, respectively), and it also had limited practicality for on-site analysis. Motivated by the above limitations, our group previously developed a screen-printed carbon electrode (SPCE) modified with graphene nanoplatelets for on-site analysis that could detect xylazine oxidation at a potential of 0.73 V. 21 However, the instruments were still quite large and the electrode was expensive. Therefore, we developed and designed a smaller, portable electrochemical device that is more convenient and practical to use.
Electrochemical paper-based analytical devices (ePADs) have great potential for on-site analysis and cost-effective. These devices utilize paper as a substrate for analytical measurements. Their low cost, light weight, flexibility, portability, and suitability for large-scale production make them useful in forensic applications, particularly in resourceconstrained countries. 22,23 Also, the graphene ink used to create the three-electrode system on the paper substrate exhibits excellent electrical conductivity, a high specific surface area, thermal stability, and interesting mechanical properties. 24 Our previous study highlighted the benefits of graphene, where π···π interactions between aromatic molecules of xylazine and graphene greatly increased adsorption in the pre-concentration step of electrochemical measurement. 21 Nanostructures of very highly conductive polymers such as polyaniline (PANI), 25−27 poly pyrrole, 28 poly(3,4-ethylenedioxythiophene), 29 and their composites have already been used as electrode surface modifiers. In this work, PANI was chosen to improve the performance of the electrochemical sensor. The synthesis procedure of PANI was easy, and it was highly conductive as well as electrochemically and environmentally stable. 25,26 The aim of the present work is to establish a novel strategy to determine xylazine using an ePAD based on graphene ink modified with PANI. A small, convenient, and practical portable electrochemical sensor is proposed, as illustrated in Scheme 1. The device resembles a USB drive and connects to a smartphone to control the analytical procedure and display the results. It can support a wide variety of users and does not require a lot of analytical skill. It is hoped that this easy-to-use portable device could enable on-site analysis and direct detection of xylazine.
■ EXPERIMENTAL SECTION
Reagents and Apparatus. Xylazine hydrochloride standard was purchased from U.S. Pharmacopeia (Rockville, MD). Aniline monomer (ANI, 99%, Sigma-Aldrich, USA), hydrochloric acid (HCl, 37%, Merck, Germany), N,N-dimethylformamide (DMF, Ajax, Australia), acetic acid (100%, Merck, Germany), boric acid (Ajax, Australia), and phosphoric acid (85.8%, JT Baker, USA) were used as received. Chemicals for interference testing and other substances were obtained from Sigma-Aldrich (St. Louis, USA). Britton−Robinson (BR) buffer was prepared based on a previously reported procedure. 30 All chemicals used were prepared with deionized (DI) water with a resistivity of 18.2 MΩ cm (Barnstead EasyPure II water purification system, Thermo Scientific, USA). Chromatography paper (CHR paper, Whatman grade 1 CHR, Cat no. 3001-917) was used to construct the ePAD. Low-tack transfer tape (LTT, Fushun Sticker) was purchased from a local market store. Graphene ink (C2131121D3) and Ag/AgCl ink (C2090225P7) were purchased from Gwent Electronic Materials Co., Ltd. (United Kingdom). A wax printer (Xerox ColorQube 8570, Xerox, USA) was used to create wax barriers. A printer/cutter (Silhouette Cameo, Silhouette, Brazil) was used to create the electrode pattern drawn via Silhouette Studio v. 4.3 software. The ePAD electrode morphology and structure were investigated by scanning electron microscopy (SEM, Quanta 400 and FE-SEM, Apreo, FEI, USA) operating at 20 and 50 kV. A Fouriertransform infrared (FTIR) spectrometer (VERTEX 70, Bruker, Germany) was used with a KBr pellet, and absorbance was captured at wavenumbers between 400 and 4000 cm −1 at a resolution of 4 cm −1 . All electrochemical determinations were performed with the lab-built portable device for xylazine analysis (Scheme 1). ePAD Fabrication Process. The ePAD was fabricated following a previously reported method 22 (Figure 1). In brief, rows of hydrophobic barriers were created by printing wax on a CHR paper that was then heated with a hot air dryer. LTT was placed on top of the CHR paper to cover the hydrophobic barriers, and using a printer cutter, negative masks of the ePAD electrode [three-electrode system consisting of a working electrode (WE), a pseudo-reference electrode (RE), and a counter electrode (CE)] were cut out of the LTT to expose the CHR paper. The LTT was then coated with graphene ink using a squeegee. The ink was forced through the mask onto the CHR paper, creating a series of graphene ink electrode patterns 1.25 cm wide and 3.75 cm long. After curing the ink for 30 min at 70°C, Ag/AgCl ink was applied with a paintbrush to the RE areas to create the RE on each electrode pattern. The paper was then heated in an oven at 70°C for 30 min. The LTT mask was carefully peeled off, and the electrode patterns were cut out of the CHR paper. Finally, the individual electrode patterns were modified with coral-like PANI to create a modified ePAD ready for use in a portable electrochemical sensor (Scheme 1) for on-site xylazine detection and analysis.
The portable electrochemical sensor used in this work was adapted from our work. 22,31 The sensor housing was designed in a pill bottle box case (Scheme 1) to accommodate the disposable ePAD sensor and a USB connector that can plug into a mobile phone loaded with the appropriate software application. The developed device was made up of three parts ( Figure S1). The sensing device was equipped with an Emstat Pico Module potentiostat to provide the potential to the PANI/ePAD sensor and to measure the generated current. The xylazine sensor software application, developed from Software Development Kits (SDKs) for.NET (www.palmsens. com/oem/sdkdotnet/), was installed on the portable monitoring device and controlled the operation of the sensing device. The third part was the PANI-modified ePAD sensor that detected the presence of xylazine and generated the electrochemical signal.
Synthesis of Coral-like PANI. A coral-like PANI composite was synthesized by the polymerization of the aniline monomer in the presence of a 25% NaCl solution. To 100 mL of the NaCl solution were added 18 mL of concentrated hydrochloric acid (HCl) and 1.82 mL of aniline monomer. The precipitate of NaCl was then dissolved in a few drops of DI water. In the next step, 4.56 g of ammonium persulfate (APS) was added dropwise to 100 mL of the NaCl solution for about 15 min, and the mixture was stirred for 12 h. The product was filtered, washed first with 500 mL of DI water and then with 250 mL of ethanol, and dried in an oven at 60°C for 12 h. Finally, coral-like PANI was suspended in DMF to a concentration of 2.0 mg mL −1 , dropped onto the WE area of the ePAD, and allowed to dry at 70°C for 5 min.
Electrochemical Measurements. The electrochemical measurements were performed by dropping 30 μL of BR buffer (pH 7.00) containing various concentrations of xylazine covering three electrodes in the detection zone of the ePAD. Cyclic voltammetry (CV) was carried out by scanning a potential from +0.30 to +1.00 V at a scan rate of 0.05 V s −1 . The analysis of xylazine in beverage samples was performed using differential pulse voltammetry (DPV) under the following conditions: E pulse 0.20 V, t pulse 250 ms, E step 0.02 V, and scanning between +0.20 V and + 0.90 V at a scan rate of 0.03 V s −1 . Electrochemical impedance spectroscopy (EIS) was also performed with a frequency range from 0.05 to 50,000 Hz, a frequency number of 50, an E dc of +0.25 V, and an E ac of +0.01 V.
Sample Analysis. Xylazine was spiked at 5, 10, 20, 30, and 40 μg mL −1 into separate beverage samples that comprised non-alcoholic and alcoholic products available in supermarkets. The products included Calpis Lacto (pH 4.76), OISHI (pH 6.67), Pepsi Max (pH 3.35), Yanhee Vitamin water (pH 7.25), Soda Rock Mountain (pH 7.24), Smirnoff Gold (4% alcohol, pH 3.43), and Jinro Chamisul Soju (17% alcohol, pH 7.85). A 2 mL aliquot of the spiked sample was added to 2.0 mL of BR buffer at pH 7.00 and manually shaken. A 30 μL aliquot was transferred onto the detection zone of the ePAD, and the detection and quantification of xylazine were carried out via the portable electrochemical sensor.
■ RESULTS AND DISCUSSION ePAD Fabrication and Characterization. The ePAD was fabricated by a simple procedure using an inexpensive craft printer/cutter and LTT to create a mask template for the screen-printing process. Figure 2 shows digital and SEM images of the ePAD. The three graphene ink electrodes of the ePAD showed a well-defined geometry (the individual ePADs, which are approximately 1.25 cm wide and 3.75 cm long), indicating the suitability of LTT as a template mask. Figure 2a shows a digital image of the fabricated ePAD. The successful construction of simple and flexible electrodes can be seen. The WE (diameter = 3 mm; geometric surface area = 0.071 cm 2 ), RE (diameter = 0.75 mm; geometric surface area = 0.015 cm 2 ), and AE (diameter = 0.75 mm; geometric surface area = 0.058 cm 2 ) were well defined, as shown in Figure 2b, where the blue region is the wax barrier. In Figure 2c, the detection zone can be seen completely filled with water inside the wax barrier. Figure 2d displays an SEM image showing the morphology of the screen-printed graphene ink WE on an ePAD. The rough surface provided a large, active, and electrically conductive surface area for electrochemical analysis. The cross-sectional image in Figure 2e reveals the thickness of the graphene ink layer compared to the thickness of the CHR paper. Figure 2f shows the average thickness of the graphene ink layer, measured at 25.0 ± 0.9 μm.
Nanocoral-like PANI/ePAD Morphology and Electrochemical Characterization. The surface morphology of the coral-like PANI was characterized using FE-SEM. Figure 3a shows a coral-like structure with a highly porous and interconnected network produced by the polymerization of PANI in the NaCl solution. The average diameter and length of the coral-like PANI structures (inset Figure 3a), measured with an electronic digital caliper on an enlarged FE-SEM micrograph, were 271 ± 56 and 649 ± 115 nm, respectively. Drop-casting PANI onto the WE surface of the ePAD successfully incorporated the coral-like structure into the rough surface of the graphene ink, as shown in Figure 3b. Figure 3c displays the FTIR spectrum of PANI. The peaks at 1564 and 1482 cm −1 were attributed to the CC stretching of quinoid and benzenoid rings, respectively. The peaks at 1300 and 1245 cm −1 were produced by C−N stretching vibrations, and the peaks at 1143 and 815 cm −1 were, respectively, due to C−C stretching and C−H out-of-plane bending in the chemical structure.
Because the amount of coral PANI on the electrode surface could influence the adsorption capacity, sensitivity, and the limit of detection (LOD) of the sensor, the PANI loading was optimized by measuring the electrochemical signal toward xylazine at electrodes loaded with 0.0, 0.5. 1.0, 1.5, and 2.0 μL of PANI. The current signal increased with coral PANI loading from 0.0 to 1.0 μL and decreased at higher loadings ( Figure 3d). An increase in the volume of PANI resulted in more adsorption sites. The adsorption of xylazine on PANI mainly occurred on the amino groups of the chemical structure by hydrogen bonding and on the benzene ring by π−π stacking. 32 Higher loadings resulted in lower current generation, probably because the increased thickness of the modified electrode inhibited the electron transfer. Therefore, the optimum dropcast solution volume of PANI was determined at 1.0 μL.
Additionally, the electrochemical properties of the PANImodified ePAD were studied using CV to compare the electrochemical activities of an SPCE and a bare ePAD and a PANI/ePAD in 0.1 M KCl containing 10 mM [Fe-(CN) 6 ] 3−/4− . As shown in Figure 3e, the redox peak current of PANI-modified ePAD (red line) showed significantly higher redox peak currents (I p = 303 μA) than the bare ePAD (I p = 269 μA), indicating that the PANI modified on the ePAD significantly increases the electrochemical sensitivity of the system. 33 Interestingly, the peak-to peak potential separation (ΔE p ) of [Fe(CN) 6 6 ] 4− was 6.67 × 10 −6 cm 2 s −1 , 34 based on the plot of anodic peak current versus the square root of scan rate. Therefore, the active surface areas of the bare SPCE, bare ePAD, and PANI/ePAD were, respectively, 0.321, 0.352, and 0.390 cm 2 . These results confirmed that the PANI/ePAD had a larger effective surface than the other electrodes and should perform well in xylazine sensing.
EIS is an effective technique to monitor the electrochemical properties of electrode surfaces. Figure 3f displays the impedance plots (Nyquist plots) of an SPCE, a bare ePAD, and a PANI/ePAD recorded in 0.10 M KCl containing 10 mM [Fe(CN) 6 ] 3−/4− . The obtained Nyquist plots (imaginary impedance -Z″ vs real impedance Z′) were analyzed by the Randles equivalent circuit, as shown in the inset of Figure 3f. The equivalent circuit compatible with the EIS data consists of R S , R CT , W, and CPE dl , symbolizing the resistivity of the solution, charge transfer resistance, Warburg impedance, and constant phase element corresponding to the capacitance of the electric double layer, respectively. Using such an equivalent circuit, R CT values are determined. In the case of ePAD, the R CT value is 84.9 Ω, indicating a lower resistance than SPCE (R CT = 160.6 Ω), which means that it also displayed the high electrical conductivity of ePAD. Moreover, after the modification ePAD with PANI, the diameter of the semicircle is found to decrease by exhibiting a massive reduction in the R CT value of 11.6 KΩ. The results confirmed the considerably higher conductivity of the PANI/ePAD, where electron transfer at the electrode had been improved.
Electrochemical Oxidation of Xylazine at PANI/ePAD. The electrochemical behavior of xylazine was evaluated at SPCE, bare ePAD, and PANI/ePAD. CV was applied using potentials from +0.20 to +0.90 V at a scan rate of 0.05 V s −1 . The voltammograms of 10 μg mL −1 xylazine produced at all three electrodes indicated that the oxidation of xylazine was an irreversible electrode reaction mechanism (Figure 4a). The peak potential of xylazine at SPCE was +0.74 V, at bare ePAD was +0.64 V, and at PANI/ePAD was +0.60 V, as presented in Figure 4b. A significant increase in the peak current of xylazine was correlated with the surface area and conductivity of the electrodes. In addition, with regard to xylazine oxidation, PANI/ePAD produced a greater anodic peak current than the bare ePAD and SPCE, showing that modification with PANI augmented the electrode function toward the oxidation of xylazine.
Effect of pH. The electrochemical behavior of xylazine in BR buffer at different pH values is shown in Figure 4c. The influence of pH on the current response of xylazine in BR buffer at pH 4.00−7.00 was established using DPV at the PANI/ePAD (Figure 4di). The peak current of xylazine decreased with reductions in pH from 7.00 to 4.00, which was presumably due to the partial protonation of secondary amines in the xylazine structure at a lower pH (pKa of xylazine: 6.94). In contrast, at pH higher than 7.00, the solution tended to turn turbid, which was perhaps related to the hydrolysis or degradation of the compound. 19 Thus, we chose BR buffer at pH 7.00 for xylazine electro-oxidation at the PANI/ePAD surface. The change in the anodic peak potential (E p ) for the oxidation of xylazine as a function of pH is presented in Figure 4dii. A negative shift was observed in the oxidation peak potential with the increase in pH, which suggests that protons participate in the electrode reaction process.
Kinetic Mechanism of Xylazine on PANI/ePAD. We applied CV at scan rates from 20 to 200 mV s −1 to evaluate the electrochemical kinetic behavior of xylazine on the PANI/ ePAD by studying the influence of the scan rate on the peak current and peak potential for 10 μg mL −1 xylazine in BR buffer at pH 7.00 (Figure 5a). The relationship between the log peak current and log scan rate (log I p vs log υ) was used to evaluate the kinetic behavior of xylazine at the PANI/ePAD interface. The linear relationship of log I p versus log υ, shown in Figure 5b, was log I p = (0.64 ± 0.02) log υ − (0.48 ± 0.02); r = 0.998. The obtained slope value was between 0.5 (purely a diffusion-controlled process) and 1.0 (purely an adsorption- controlled process). This result indicated a combination of diffusive and adsorptive behaviors. The good linearity of both I p versus υ (adsorption-controlled process) and I p versus υ 1/2 (diffusion-controlled process), shown in Figure S2a,b, corresponds to the results obtained from the Randles−Sevcik equation. The slight difference between the linear relationship of I p versus υ 1/2 (r = 0.993) and the linear relation of I p versus υ (r = 0.995) was probably due to the combination of diffusion and adsorption processes that typified the electrochemical behavior of xylazine upon oxidation at the surface of the PANI/ePAD. Table 1 From the Tafel slopes of the totally irreversible process (Figure S2c), the value of b for the PANI/ePAD was 0.427 V dec −1 . The Tafel value at the PANI/ePAD was higher than the theoretical value of 0.118 V dec −1 for a one-electron process involved in the rate-determining step. 35 Therefore, the high Tafel value suggested the adsorption of xylazine or its reaction intermediate at the electrode surface. In the literature, high Tafel values have been attributed to the adsorption of reactants or intermediates on electrode surfaces and/or reactions within an electrode structure. 26,35 In addition, from the linear relationship of I p versus υ, the slope could be used to estimate the surface concentration of electroactive species (Γ) by using eq 1. 36 The value of Γ on the surface of the PANI/ePAD was found to be 2.04 × 10 −7 mol cm −2 .
The number of electrons involved in the oxidation reaction of xylazine on the surface of the PANI/ePAD was calculated from Laviron's eq 2, 37 based on the slope of the plot of E p versus log υ, as shown in Figure 5c.
where F, R, T, α, and n are the Faraday constant, the gas constant, the temperature, the charge transfer coefficient, and the number of electrons, respectively. The slope value of the oxidation peak was 0.115 ( Figure 5c). Thus, the n value was calculated to be ≈1, indicating that one electron was involved in the oxidation of xylazine on the PANI/ePAD. This result was in agreement with previous reports. 20,21 To evaluate the diffusion coefficient of xylazine at the PANI/ ePAD, xylazine at 10 μg mL −1 was measured in BR buffer at pH 7.00 by chronoamperometry at +700 mV. Cottrell's eq 3 was applied to calculate the diffusion coefficient (D) 35 where D is the diffusion coefficient of the analyte (cm 2 s −1 ), C b is the analyte bulk concentration (mol cm −3 ), F is the Faraday constant, n is the number of electrons, and A is the electrode geometric area. Plotted from the raw chronoamperometric data (Figure 5d), Figure 5e shows the linear curves of I versus t −1/2 . The diffusion coefficient of xylazine on the PANI/ePAD was calculated to be 7.74 × 10 −6 cm 2 s −1 .
The electrocatalytic performance of the PANI/ePAD for the electrochemical oxidation of xylazine was evaluated from the catalytic rate constant (k cat ), calculated using Galus's eq 4 38 where I cat is the catalytic current for xylazine, I L is the limited current in the absence of xylazine, t is the time elapsed, and C 0 is the bulk concentration of xylazine. From the slope of I cat /I L versus t 1/2 (Figure 5f), the k cat value of the PANI/ePAD for xylazine oxidation was determined to be 1.48 × 10 5 M −1 s −1 .
The relatively high values for the diffusion coefficient and catalytic rate constant, which indicated greater electrocatalytic efficiency for xylazine detection, could be attributed to the coral structure of PANI on the porous graphene ink of the ePAD. Thus, the use of the PANI/ePAD in the developed electrochemical sensor had enhanced sensitivity toward xylazine.
Optimization of the Electrochemical Parameters. Electrochemical parameters of the developed xylazine sensor were optimized to improve the performance and efficiency of the system. Optimizations were carried out by changing one parameter while keeping the other parameters constant. Parameters tested included the DPV conditions, namely, the pulse potential, pulse time, applied scan rate and step potential, and the accumulation step covering both the potential and the time. The highest current signal obtained from the measurement of 10 μg mL −1 xylazine at each setting was considered to indicate the optimal condition.
Effect of Differential Pulse Parameters. In this study, DPV was applied for its intrinsic high current sensitivity and low charging toward the formation of background current. Pulse time (t pulse ), pulse potential (E pulse ), step potential (E step ), and applied scan rate were investigated with the aim of increasing the current response.
The effect of t pulse (50−250 ms) was studied by measuring the current response of 10 μg mL −1 of xylazine at the PANI/ ePAD using a constant E pulse (40 mV), E step (20 mV s −1 ), and applied scan rate (40 mV s −1 ), as shown in Figure 6a. The current signal was observed starting at t pulse of 100 ms and increased from 100 to 250 ms. At longer pulse times, the background current was found to decrease, forming a sharper anodic current peak. At a t pulse of 250 ms, the effect of E pulse was evaluated in the range from 20 to 200 mV (Figure 6b).
The current signal was found to increase with increments in the pulse potential within the tested range. However, the peak width was also found to increase along with a negative shift in the anodic peak potential. Evaluation of the effect of E step (20− 40 mV) and applied scan rate (10−50 mV s −1 ) showed that the analysis time was shorter with a larger step potential and scan rate, but the decaying charge current was also low, causing higher background current in the DPV. Therefore, an E step of 20 mV, an applied scan rate of 30 mV s −1 , a t pulse of 250 ms, and an E pulse of 200 mV were the parameters used in the next experiment.
Effect of Accumulation Potential and Accumulation Time. The sensitivity and LOD of xylazine were greatly improved by using adsorptive stripping voltammetry (AdSV). The effects of the accumulation step, comprising accumulation potential and time, were investigated. The accumulation potential was investigated between −0.20 and +0.20 V over an accumulation time of 180 s. Figure 6c shows the voltammograms of xylazine oxidation at different accumulation potentials, where the background current increased when the accumulation potential was increased to a negative potential (+0.20 to −0.20 V). This behavior could be caused by the increase in the charging current on the electrode surface when the accumulation potential was more negative. It also led to a significant increase in the background current, mainly due to the catalytic decomposition of the electrolyte. 39 The highest current was recorded at 0.00 V (vs Ag/AgCl) (inset Figure 6c), and this potential was used in the subsequent analysis of accumulation time from 60 to 360 s. Figure 6d shows the voltammograms of xylazine oxidation at different accumulation times, where a continuous increase in the xylazine current with the accumulation time was evident from 60 to 240 s and no significant change was observed beyond 240 s (inset Figure 6d). This result was probably due to the saturation of xylazine on the PANI/ePAD surface. Another explanation could be the increase in the background current that occurred with the increase in the accumulation time. At extended accumulation times, adsorption on the electrode surface was no longer limited to xylazine but also included the charging ion, producing a large increase in the background current and making the determination of xylazine by electro-oxidation more difficult. 39 Therefore, an accumulation potential of 0.00 V and an accumulation time of 240 s were chosen as the optimal conditions. Analytical Performances. Analytical performances of the PANI/ePAD for xylazine detection were investigated using AdSV based on the optimized conditions. Figure 7a shows the anodic peak current of xylazine at concentrations from 0.2 to 100 μg mL −1 . The anodic peak current of xylazine was observed at a potential of +0.52 V. The current increased linearly with increments in xylazine concentration, and two linear ranges of xylazine detection were presented at concentration ranges of 0.2−5 and 5−100 μg mL −1 . The occurrence of two linear ranges was due to the adsorption behavior of xylazine. At lower concentrations, the target substance was adsorbed as a monolayer on the PANI/ePAD surface, and at higher concentrations, xylazine was adsorbed as a double layer or multilayer. 40 The LOD and limit of quantitation (LOQ) of the developed method were calculated from the equation LOD = 3 (S.D blank /slope) and LOQ = 10 (S.D blank /slope), respectively. Here, the LOD and LOQ were found to be 0.06 and 0.21 μg mL −1 , respectively. In comparison to other techniques for determining xylazine reported in the literature (Table 1), our PANI/ePAD provided a wide linearity and low LOD. In addition, the developed sensor was simpler to fabricate and use, easily portable and can be used forensically to determine xylazine in beverages.
The reproducibility of the PANI/ePAD was assessed through the evaluation of 10 electrode preparations. When comparing the peak current from 10 repetitions of the electrode, good reproducibility was reported with relative standard deviations (RSD) from 1.52 to 4.79% (Figure 7b). The reported RSDs were within an acceptable range according to the guidelines of the Association of Analytical Communities (AOAC). 41 The effects of interferences on xylazine determination with the developed electrochemical sensor were evaluated by measuring various interfering compounds that might be present in beverage samples (citric acid, fructose, sucrose, glucose, glycine, ethanol, Na + , K + , and Cl − ) in the presence of 10 μg mL −1 xylazine ( Figure S3a). The results (Figure 7c) showed no interference in the presence of 1200-fold of citric acid, 1000-fold of glucose, 100-fold of glycine, 200-fold of ethanol and Na + , and 500-fold of fructose, sucrose, K + , and Cl − . The results of this study indicated the good antiinterference property of the proposed device. Furthermore, the selectivity of the PANI/ePAD was investigated under the optimal conditions by comparing it to other similar compounds such as benzodiazepine class (i.e., alprazolam, diazepam, and clonazepam), pseudoephedrine, and methamphetamine. Figure S3b shows that there is no significant current signal using the PANI/ePAD sensor. This finding indicates that the PANI/ePAD electrode is highly selective for xylazine.
Xylazine Detection in Samples. The practicability of the proposed portable sensor was demonstrated by measuring the levels of xylazine in selected alcoholic and non-alcoholic beverage samples spiked with standard xylazine. The matrix effect of each beverage sample was studied in the optimized condition by comparing the slope of a standard curve of each beverage compared with a standard xylazine calibration curve. The data were analyzed by a two-way ANOVA. The results showed no significant difference at a confidence level of 95%, which indicated that there was no matrix effect. Consequently, the amount of xylazine in each beverage sample could be deduced using the linear regression equation of the standard curve, and the percentage recovery values were then calculated.
The recovery values among all the tested samples ranged from 84 ± 4 to 105 ± 2% (n = 3), as shown in Table 2. The good recovery results suggested that the proposed portable electrochemical sensor had the potential to be applied to determine xylazine in beverage samples.
■ CONCLUSIONS
We introduced a portable electrochemical xylazine sensor for on-site analysis that integrated a smartphone and a PANImodified ePAD. The ePAD was successfully fabricated using craft printer/cutter and low-tack transfer tape to create the template mask for a screen-printing process. A uniform electrode pattern was coated on a chromatography paper with graphene ink. A large conductive surface area was produced, which was modified with coral-like PANI. A large number of adsorption sites were produced, facilitating the interaction between xylazine and the electrode surface. In the optimized condition, this portable sensor was used to directly detect xylazine by DPV. The sensor also exhibited good performances in terms of its linearity, detection limit, and reproducibility. Moreover, we successfully applied the easy-touse, portable sensor to determine xylazine spiked in beverage samples. The sensor demonstrated its potential and suitability for use in real-case forensic scenarios: particularly where xylazine-spiked beverages are involved.
Components of the portable electrochemical sensor; body of the portable electrochemical device, monitor/ software, and sensing part; linear relationships of current (I p ) versus scan rate (υ), current (I p ) versus square root of scan rate (υ 1/2 ), and potential versus log of current; CV conditions; scan rates from 20 to 200 mV s −1 at PANI/ePAD in the BR buffer at pH 7.00 containing 10 μg mL −1 xylazine; and DPV response of possibly interfering species and other drugs on the peak current of 10 μg mL −1 xylazine (PDF) | 7,791 | 2022-04-12T00:00:00.000 | [
"Materials Science"
] |
Games between stakeholders and the payment for ecological services: evidence from the Wuxijiang River reservoir area in China
A gambling or “game” phenomenon can be observed in the complex relationship between sources and receptors of ecological compensation among multiple stakeholders. This paper investigates the problem of gambling to determine payment amounts, and details a method to estimate the ecological compensation amount related to water resources in the Wuxijiang River reservoir area in China. Public statistics and first-hand data obtained from a field investigation were used as data sources. Estimation of the source and receptor amount of ecological compensation relevant to the water resource being investigated was achieved using the contingent valuation method (CVM). The ecological compensation object and its benefit and gambling for the Wuxijiang River water source area are also analyzed in this paper. According to the results of a CVM survey, the ecological compensation standard for the Wuxijiang River was determined by the CVM, and the amount of compensation was estimated. Fifteen blocks downstream of the Wuxijiang River and 12 blocks in the water source area were used as samples to administer a survey that estimated the willingness to pay (WTP) and the willingness to accept (WTA) the ecological compensation of Wuxijiang River for both nonparametric and parametric estimation. Finally, the theoretical value of the ecological compensation amount was estimated. Without taking other factors into account, the WTP of residents in the Wuxi River water source was 297.48 yuan per year, while the WTAs were 3864.48 yuan per year. The theoretical standard of ecological compensation is 2294.39–2993.81 yuan per year. Under the parameter estimation of other factors, the WTP of residents in the Wuxi River water source area was 528.72 yuan per year, while the WTA was 1514.04 yuan per year. The theoretical standard of ecological compensation is 4076.25–5434.99 yuan per year. The main factors influencing the WTP ecological compensation in the Wuxi River basin are annual income and age. The main factors affecting WTA are gender and attention to the environment, age, marital status, local birth, and location in the main village.
INTRODUCTION
Environmental services such as natural purification of water, erosion control, and habitat for wildlife are public goods that have value to society but are difficult to assign a market value to. In the relevant market, benefits provided by natural resources can be expressed as values to human well-being (Arrow et al., 1995;Costanza et al., 1997;Wackernagel et al., 1999;Ouyang, Wang & Miao, 1999;Daily et al., 2000;de Groot, Wilson & Boumans, 2002;Xu, Liu & Chang, 2013). Ecological compensation is the institutional arrangement for regulating and protecting the interests of stakeholders based on the protection and sustainable utilization of environmental services (Jin, Li & Zuo, 2007). Global drinking water resources face enormous challenges (Turner et al., 2003;World Health Organization, 2005), and ecological compensation is a vital mechanism by which water resources and land are equitably protected (Wunder, 2005;Shen & Gao, 2009;Lai, Wu & Yin, 2015). Compared with most countries in the world, the disparate systems and mechanisms in China make the relationship between the stakeholders of the ecological compensation of water resources more complex, with intertwined relationships. Source and receptor gambling is defined as a conflict of interest with compromise between sources and receptors, which is the key problem to be solved (Wang, Su & Cui, 2011;Zhang, Ming & Niu, 2017). Ecological compensation is divided into government compensation and market compensation. The majority of government compensation is relatively simple, while the market compensation is relatively diverse (Pretty & Ward, 2001;Zbinden & Lee, 2004;Ferraro, 2008). Due to the complexity regarding providing ecological compensation of water sources, one cannot simply implement ecological compensation according to the general principle of "whoever pollutes will pay." Based on the theory of externality (Shen & He, 2002), compensation should start from an analysis of the beneficiaries of the watershed to identify who will compensate whom (Shen & Yang, 2004;Shen & Gao, 2009).
We can judge the subject and object of compensation through the division of powers. If the beneficiary object is determined, the beneficiary will be required to make compensation. If the social benefits or the beneficiary object cannot be determined, the government will make compensation (Qiao, Yang & Yang, 2012). Stakeholder analysis rules in ecological compensation state that based on the importance of initiative, decisiveness, and interest in each decision, the government, farmers, and enterprises can be defined as core stakeholders (Brown et al., 2014;Chen, 2014). Many scholars think that the government and residents in the upstream area of the water source are the compensable subjects (He, 2012). From the perspective of fairness and stability, residents should be compensated prior to the government receiving compensation. In terms of the goal of maximizing social wealth, the government is the ecological compensation object for the water source, not the residents. However, some scholars hold the opposite view that government is neither a beneficiary of ecological compensation nor a loser, so they should not be included in the scope (Wang et al., 2010).
In the region of study, the Wuxijiang River is the main water resource, which aids in regional sustainable development. In recent years, the rapid development of China's economy has led to more and more serious environmental problems in Wuxijiang River, as the basin has the dual attributes of being both a water source and an economic development zone (Landell-Mills & Porras, 2002;Pagiola, Arcenas & Platais, 2004;Shen & Yang, 2004;Gao & Wen, 2004;Shen & Lu, 2004;Thapa, 2016) The issues of upstream environmental impacts and downstream effects may negatively affect environmental protections. To a large extent, these impacts are caused by the contradictory goals of economic development and ecological optimization. An issue that requires urgent attention is determining how to create ecological benefits both upstream and downstream of a water source while promoting stable development. At present, researchers mainly analyze how to make use of ecological compensation mechanisms to realize the comprehensive management of the ecological environment of the water source. One of the most important problems concerning the mechanism of ecological compensation is the standard of compensation acquisition, that is, the conflict between willingness to pay (WTP) and the willingness to accept (WTA) (Yang, Wang & Sun, 2014;Zhao, Li & Peng, 2016). There are many methods that attempt to account for the amount of ecological compensation that is appropriate for a given water source area (Ouyang, Wang & Miao, 1999;Research Group on China's Ecological Compensation Mechanism and Policy, 2007;Ruan, Xu & Zhang, 2008;Fen, Wang & Yang, 2009;Lu & Ding, 2009;Sagona et al., 2016;Zhou, Sun & Cui, 2017). Through the investigation of WTP and WTA, lack of attention to the direct contributors and protectors of the ecosystem, or the inadequacy of compensation can be avoided, which are in turned caused by source and receptor gambling (Wunder, 2005).
The Wuxijiang River is the first water source to have protections legislated by the Zhejiang Provincial People's Congress (Xie, 2003). Ecological protection and compensation involve many stakeholders. There is an administrative/subordinate relationship among the stakeholders, which can be regarded as a multi-level principal agent relationship. Under this multi-level principal agent relationship, the conflicts of interests between the main stakeholders-such as the government, industry, and the residents-are constantly escalating. The central and local governments have a distribution conflict of income rights for the development of resources, and hold different priorities concerning the promotion of development of the local economy. It is valuable to analyze the interests of all core stakeholders concerning regional water conservation. Using the Wuxijiang River in China as an example, this study investigates the residents' WTP and the WTA as analyzed using the contingent valuation method (CVM) (Zhang et al., 2002;Zhang & Zhao, 2007). This method allowed an estimation of the amount of ecological compensation from water resources, as well as insight into how to better establish and improve equitable compensation mechanisms in the future.
Research region
The Wuxijiang River reservoir was chosen as the research region. Its administrative region mainly involves the four townships of Hunan, Jucun, Lingyang, and Huangtankou, in the Qujiang district of Quzhou, as shown in Fig. 1. Its position in China is shown in Fig. 2. The watershed system of Wuxijiang River is shown in Fig. 3.
The Wuxijiang River watershed has reservoirs in Hunan and Huangtankou Townships, a primary tributary in Qujiang district, and a water diversion project in Wuxijiang River. The Wuxijiang River watershed has high ecological value and is a good source of quality fresh water. It not only has high forest coverage and abundant biological diversity, but also has a state-level wetland park. However, the Wuxijiang River watershed is facing ecological and environmental problems that are associated with a relatively fragile ecosystem, impacts and pollution from livestock and poultry breeding, negative environmental impacts from tourism and aquaculture, industrial pollution, and farm run-off.
Data sources
Data were derived from public statistics and data collection through a household survey of 12 villages in four townships and 15 blocks on the Wuxijiang River in December 2015 by the project group. At least 30 households were randomly selected from each village for survey. The project group issued 385 surveys, of which 383 were returned and three were invalid. In total, there were 380 valid surveys. The contents of the survey included information about the economic conditions of farmers and householders, suggestions by the local government for ways to improve the ecological protection of the water source, and historical data. The urban community survey on respondents' WTP for water received a total of 552 valid surveys. The contents included respondents' personal information, family water use, their understanding of ecological protection of water sources, WTP for water, and the mode of payment.
STAKEHOLDER ECOLOGICAL COMPENSATION AND THEIR BENEFITS VIA GAMBLING ABOUT THE WUXIJIANG RIVER WATER SOURCE
Analysis of stakeholders of ecological compensation in the water source area of the Wuxijiang River Generally, the impacted research subjects analyzed are the government, industry, and the residents (Lami, Masetti & Neri, 2016). As the first government protected water source in Zhejiang province, Zhejiang provincial government and local government have implemented nearly 20 years of ecological protection policy. The Wuxijiang River power plant is the only remaining large industry. Most other industries have been closed or moved. To quantify the ecological compensation of the water source, we solve for the amount of gambling of the enterprises, government, and residents are willing to engage in. The Wuxijiang River power plant pays the water resources fees and the reservoir funds. There are no direct relationships between industry, local governments, and water sources. Therefore, the relationship between industry, government and residents is relatively simple. The only conflict of interest between the government and industry which needs to be coordinated is that of water use and supply. Therefore, the ecological compensation entities for Wuxijiang River are mainly the local governments and the residents.
The primary impacts on residents in the water source area
There are three levels of water source protection; the core area, the secondary core area, and the water's edge. Because water resources development occupies a large amount of land, regional environmental protection can lead to limitations in further developing industry, thus reducing the livelihood opportunities for local residents, which negatively affects economic development. The interests of residents in water areas need to be further determined, and require government departments to create effective coordination programs.
The primary impacts on governments at all levels in Quzhou city, Qujiang district and township governments
There is a question as to which level of government is most impacted by claims of compensation throughout the watershed. This study argues that ecological compensation is mainly to internalize the relevant production costs so as to optimize ecosystem service function. For the ecological compensation of Wuxijiang River, the external costs are the investment cost of ecological protection and construction. Therefore, the government level most relevant to the target of ecological compensation is related to the compensation degree of the opportunity cost of development. Wuxijiang River is a tributary on the upper reaches of the source of the Qiantangjiang River, and development costs can be compensated in Quzhou city. With Quzhou city's focus on ecological protection and construction, its GDP value will not have a great impact on Zhejiang Province. The loss of opportunity for the development of the Wuxijiang River's water source protection area is fully shared across the area under the jurisdiction of the Quzhou city government (Yang, Cai & Zhang, 2017).
Payment for gambling on environmental services in the Wuxijiang River source area
The main stakeholders' benefit for playing the game with Wuxijiang River water resources includes the benefit games between the local government and the central government, local governments at all levels, and residents and the government. The benefit game between the local government and the central government is reflected in the mismatch of the financial and administrative power controlling the water resource, which shows the trend of financial right above power. The status of central versus local governments differs greatly. The administrative power of the local government is passively increased. The interest's game between the local governments at all levels is mainly reflected in that the financial and administrative rights do not match between the superiority and inferiority within the administrative hierarchy. If the county government manages directly, while giving the corresponding financial power, the system will be smoother (Song & Liu, 2005). The interest game between the residents and the government in the water source area is mainly caused by the misunderstanding of the compensation methods by residents in water sources. At present, the water source ecological compensation in China is basically government compensation. The mode of compensation is divided into two types: transfusion and hematopoiesis. The former means to give the compensation materials to the residents to sustain their basic lives, and the latter means the compensation materials given can further help them increase income and improve their living standards. The residents tend to prefer the transfusion type, hoping to direct compensation with money. They have low recognition of the hematopoietic compensation type such as policy compensation and industrial compensation (Gen et al., 2010), which leads to the lack of resources, insufficient compensation, and psychological satisfaction, resulting in an objective benefit game.
Estimating compensation amount using CVM-based ecological methods
According to the results of the CVM survey, the ecological compensation standard of Wuxijiang River was determined using conditional value evaluation. The sample area included 15 blocks in the lower reaches of the Wuxijiang River Reservoir Area and the 12 villages in four towns in Wuxijiang River. In the two cases of nonparametric and parametric estimation, we estimated the WTP and WTA for ecological compensation in Wuxijiang River in order to obtain the ecological compensation theory value.
Design of CVM survey
In the design of the survey, WTP and WTA were investigated. The survey on WTP covered 15 blocks in the Wuxijiang River water source. The detailed sample distribution is described in Table 1.
The survey covered three areas. The first area included detailed respondent information, to describe and understand the basic social characteristics of the interviewees and provide the basis for the further analysis of the data. The detailed socio-economic characteristics of the WTP sample is described in Table 2.
The second area was analyzed to better understand the use of water resources, including the price of water in Quzhou, the monthly water consumption of interviewees, and the level of concern about water-related environmental problem of the interviewees. This data was analyzed to understand if the local water quantity and water quality meet the overall demand, as well as to improve the overall understanding of environmental protection in the Wuxijiang River watershed.
A survey of WTA was conducted to investigate the 12 administrative villages with the shortest distance between Wuxijiang River reservoir in the four townships of Lingyang and Jucun townships in the upper reservoir, while Hunan Township and Huangtankou Township in the lower reservoir. The detailed socio-economic characteristics of WTA interviewees is described in Table 3.
EMPIRICAL RESEARCH AND RESULTS
Nonparametric estimation of the average WTP Table 4 shows WTP for water source ecological compensation of the Wuxijiang River.
The survey results of WTP were analyzed, and the expectation of the average WTP was computed on the basis of the value and frequency of WTP. The mathematical expectation model of discrete-time variable WTP was used in the computation of WTP (Kong, Xiong & Zhang, 2014;Guan et al., 2016), expressed as: where B i is the amount of tender, P i is the probability of the amount chosen by the interviewee, and m is the number of tenders that can be selected, which is set to 11 in this paper.
According to the investigation, there were 46.2% zero payment wishes in the sample. For this reason, the calculation model of the WTP was corrected. The formula is described as follows: where, E 0 WTP represents the expected value of nonnegative WTP; E þ WTP represents the expected value of positive WTP; and m WTP 0 represents the ratio of zero payment intention. It can be calculated that: E 0 WTP is 24.79 yuan/(month Á household), that is, 297.48 yuan/(year Á household).
If one household unit operates with three individuals, the E 0 WTP is 8.29 yuan/ (month Á capita); if one household unit operates with four individuals, the E 0 WTP is 6.20 yuan/(month Á capita). The mathematical estimation model was used to obtain the per capita ecological compensation standard for local residents (Yu & Cai, 2015). Q denotes the amount of ecological compensation payment, W WTP was used to describe the maximum willingness of payment and N to describe the number of people who use running water in the city. Among them, 831,060 people are living in the downtown area, while 126,170 people who use the same water supplies from the Wuxijiang River are living in urban areas. M was used to stand for the population in the four towns of Wuxijiang River water source: Lingyang Town (5,467), Jucun Town (4,063), Huangtankou Town (9,652) and Hunan Town (11,868). All population data came from public statistics released in December of 2015. Then, the upper limit of the ecological compensation standard was given by: The lower limit of ecological compensation standard is given by: Nonparametric estimation of the average WTA Table 5 shows the WTA of water source ecological compensation of Wuxijiang River. The survey results of the WTA are analyzed. The expectation of the average WTA was calculated on the basis of the value of the affordable WTA, and the frequency of the WTA was given by: where B i was the amount of tender, P i was the probability of the amount chosen by the interviewee, and m is set to 48 in this paper. Through the survey, it was found that there 47.1% of the surveyed population have a zero WTA. Therefore, according to the Spike model (Du et al., 2013) of econometrics, the computation model of WTA is corrected by: where E 0 WTP is the expectation of non-negative WTA, E þ WTA is the expectation of positive WTA, and m WTA 0 is the rate of zero WTA. By computation, the E 0 WTA is 3864.48 yuan/(year Á household). Most of the residents in this area are farmers, and in this area, if one household includes four persons, E 0 WTA is 3864.48 yuan/(year Á household). Then, the average WTA of the local residents in Wuxijiang River was determined.
Parametric estimation of the average WTP STATA software (Yan, Zhang & Jiang, 2016) was used to analyze the economic factors that affect WTP and WTA in the survey (Xu, Yu & Li, 2015). Regression processing was implemented with stepwise regression analysis and the least squares method (Pan, 2014;Liu & Wang, 2017) to obtain variables with the greatest impact on WTP and WTA. Table 6 shows WTP of the interviewee and the regression results of the related variables. Table 7 shows the WTA of the interviewee and the regression results of the related variables.
By analyzing the regression results, it was found that annual income, age, gender and the amount of environmental concern were the main factors affecting WTP for ecological compensation. There was a positive correlation between the annual income of residents and WTP under consideration for only one influencing factor. The regression coefficient of the annual income was greater than zero, which represented the higher annual income and the greater amount of WTP. The age of the resident was inversely proportional to WTP. The regression coefficient of the annual income was less than zero, which represents older interviewees and a correspondingly smaller WTP. The greater the regression coefficient of the gender, the greater the amount of WTP. WTP of women was assigned to zero, while men were assigned to one. Thus, men had greater WTP. The "concern degree" of the environment was directly proportional to the descriptive statistics. The regression coefficient of the concern degree for the environment was less than zero, which indicated that the greater the value of the descriptive statistics, the greater the WTP. According to the obtained regression results, the average WTP was obtained by: For the 46.2% of the population in which there was zero payment intention, the above formula needs to be adjusted as: The mathematical model was used to obtain the per capita ecological compensation standard for local residents. In this model, x was used to describe the main factors affecting the willingness of the respondents to pay, annual income, age, sex and ecological environmental concern for the respondents, ε was used to describe the regression coefficient for each factor, and d was used to describe random perturbation term with a normal distribution of [0, d/2]. According to the above, lnW WTP+1 was also a function that follows the normal distribution (Brown et al., 2014), s represents the standard For example, in a family of three, E WTP is 176.24 yuan/(year Á capita), while in a family of four, E WTP is 132.18 yuan/(year Á capita).
The upper limit of the ecological compensation standard was given by: The lower limit of the ecological compensation standard was given by: Parametric estimation of the average WTA Based on the analysis of relevant variables, regression analysis and the least squares method were used to regress the number of WTA with the factors related to the WTA. Based on the results, it can be seen that five variables (age, gender, marital status, whether a respondent was born locally and lived in the village) had the most influence on the respondent's WTA compensation. Among the above factors, the regression coefficient of the age of the interviewees was greater than zero, indicating that the older the subject, the greater the value of the willingness to be paid. Putting aside the other factors, the older the respondents were, the higher their WTP for the protection of the environment. The respondents' age had a greater impact on their WTA. Whether the family lived in the main village had a significant impact on their willingness to receive compensation. The regression coefficient was below zero, indicating that the higher the statistical value, the lower the value of the willingness to be paid. The gender regression coefficient of the interviewees was higher than zero. The descriptive statistics for female and male are zero and one, respectively, indicating that men have a stronger WTP than women. The regression coefficients of marital status and whether the respondent was born locally were less than zero, which was positively related to their willingness to receive compensation and had a negative correlation, and had less influence on their willingness to be paid. Whether the married pair was affected by the willingness to receive compensation was very small, and therefore was not tested by a level of significance test.
With analysis of the related variables, regression processing was implemented with stepwise regression analysis and the least square method for WTA values and the related factors. Regression results were used to calculate WTA: As there was 47.1% of the surveyed population with zero WTA, the above equation was corrected to: where w are the main factors affecting WTA, which are age, gender, marital status, if they were born locally and live in the main village, ε′ is the regression coefficient, and d′ is a random perturbation with normal distribution of [0,s′/2] (Rao, Lin & Kong, 2014;Li & Li, 2017). lnW WTA+1 was also a function with the normal distribution. s ′ is the standard deviation, and s ′ =4.454884. Then: If one household includes four persons, E WTA is 378.51 yuan/(yuan/capita Á year), which was the annual per capita compensation WTA of local residents in Wuxijiang River.
DISCUSSION
It is necessary to try to improve the accuracy of the data, due to the lack of available data relevant to the social and economic development characteristics of the Wuxi River Basin and the ecological compensation of water sources. While obtaining primary data from on-the-spot research, several problems crop up, such as; respondents' refusal to cooperate, respondent's poor perception of water ecological compensation, and the failure to conduct the study in part of the basin area, thus undermining the scientific nature and authoritativeness of the survey data. In a follow-up study, organization of the investigation needs to be strengthened and more supplementary investigations need to be carried out. However, according to the results conducted by (Zheng & Zhang, 2006;Xu, Liu & Chang, 2013) based on the research data from 2015, the WTP value calculated in this paper was 74.4-176.24 yuan/(year Á capita). The WTA value was 378.51-966.12 yuan/(year Á capita). Income level was the most important factor affecting WTP and WTA. In 2015, the per capita disposable income of Zhejiang residents was 35,537 yuan, compared with 18,265 yuan in 2006. The ratio of the two is 1.95. Considering the increase of the per capita disposable income of Zhejiang residents, the conclusion of this paper is in good agreement with the two articles that were conducted using robust methods. Comparatively, the data of this paper is also reliable and robust.
The determination of ecological compensation standards for water sources calls for further comparison and selection. Through field investigation, we used the CVM to estimate ecological compensation standards for the Wuxi River Basin. In addition to the CVM, the ecosystem service value assessment method and the protection of the cost method were used to determine the compensation standard, without using other ways to estimate the compensation standard at the same time. More importantly, these estimation methods have many defects. It is hard to tell if finding the standard for ecological compensation estimate using the condition value assessment method is the most scientific. The effect of this ecological criterion to guide the practical compensation remains unknown. In a follow-up study, the optimization and improvement of the method for estimating the compensation standard also needs to be strengthened.
At present, it's relatively difficult to implement the market payment method for environmental services in the water source areas in China. Wuxijiang River, a tributary of Qiantang River in Zhejiang, is an important ecological barrier upstream of Qiantang River. The Zhejiang government holds the majority of the responsibility to protect this important ecological barrier. The Wuxjiang River Dam and the Wuxijiang River Diversion Project are two strategic projects led by the government that consider both local economic and social developments, which involves a large population of residents, making the payment issue much more complex. Ecological compensation in China is typically complex, with several hierarchies of stakeholders. Since the government is playing the major role with government compensation as the major compensation method, the market compensation method accounts for a low percent. As government compensation features both limited capital support, low efficiency, and unsuitability, this leads to low and unstable benefit payments to the ecological protectors, which has become the major cause of the property in the water source area.
CONCLUSION
We used the CVM to estimate the amount of compensation for subjects of ecological water supply impacts for the Wuxijiang River watershed. After in-depth analyses of the socio-economic characteristics of both subjects and the receptors, and calculation of WTP and WTA for each group, we found that the freshwater ecological compensation for the Wuxijiang River area was with the local governments at all levels and the residents of the water source area. The main loss for the residents in the water source area lies in the large amount of land that should be occupied for the development of water resources, and the limited industrial development that has resulted from environmental protection of the region. The main loss for the governments in Quzhou and the reservoir areas are the high external costs.
Despite the loss for both the residents and the local governments, these findings also suggest that there is a disagreement between the local and central government which is mainly reflected in the mismatch between the two parties in the water resources-related financial rights and powers. The disagreement among local governments is mainly reflected in the mismatch of financial power and administrative power between the upper and lower levels, produced by the administrative class. The disagreement between the people and the government in the water source area is mainly due to misunderstandings related to methods of compensation for the residents.
The resulting estimates of water source ecological compensation are as follows; using nonparametric estimation that ignored other factors, the WTP of residents in the Wuxijiang River water source was 297.48 yuan/(household Á year), and the WTA was 3864.48 yuan/(household Á year). The theoretical ecological compensation standard was 2294.39-2993.81 yuan/(yuan/capita Á year). With parametric estimation of the other factors, WTP was 528.72 yuan/(household Á year) and WTA was 1514.04 yuan/ (household Á year). The theoretical ecological compensation standard is 4076.25-5434.99 yuan/(yuan/capita Á year). Regression analysis of socioeconomic variables of the compensation willingness and interviewees showed that annual income, age, gender, and environmental concern are important factors determining WTP. Age, gender, marital status, being locally born, and living in the main village are important factors determining WTA. | 6,972 | 2018-03-08T00:00:00.000 | [
"Economics",
"Environmental Science"
] |
Quantifying Change in Buildings in a Future Climate and Their Effect on Energy Systems
Projected climate change is likely to have a significant impact on a range of energy systems. When a building is the centre of that system, a changing climate will affect the energy system in several ways. Firstly, the energy demand of the building will be altered. Taken across the entire building stock, and placed in context of technological and behavioural changes over the same timescale, this can have implications for important parameters such as peak demand and load factors of energy requirement. The performance of demand-side, distribution/transmission and supply-side technologies can also alter as a result of changing temperatures. With such uncertainty, a flexible approach is required for ensuring that this whole energy system is robust for a wide range of future scenarios. Therefore, building design must have a standardised and systematic approach for integrating climate change into the overall energy assessment of a building (or buildings), understanding the implications for the larger energy network. Based on the work of the Low Carbon Futures (LCF) and Adaptation and Resilience In Energy Systems (ARIES) projects, this paper overviews some of the risks that might be linked to a changing climate in relation to provision and use of energy in buildings. The UK is used as a case-study but the outputs are demonstrated to be of relevance, and the tools applicable, to other countries.
Introduction
Future climate models (such as the General Circulation Model-based Hadley Model) are now demonstrating the scale of change that might be expected in local weather conditions [1] in the coming decades.Simultaneously, with persuasive arguments around the embodied energy and carbon involved in building construction projects, there is now an onus on industry to ensure buildings last for several decades; a timescale within which we might expect to see significant climate change for a given location.Quantifying this change in a meaningful way, whilst portraying the true nature of these climate models, is quite a challenge.Climate models are not designed, or capable, of giving precise and accurate predictions of a deterministic future.Rather, as demonstrated by the UK Climate Projections 2009 (UKCP'09) [2], climate models can give a spectrum of possibilities that relate to different probabilities of occurrence.Furthermore, these projections will change with specific time period, location, and greenhouse gas emission scenario (as discussed in Section 3).
Any user of building models, whether detailed dynamic simulation or energy compliance modelling, will understand the importance of weather (and by association climate) data.The balance between the heating and cooling requirements of a building, and risk of overheating, could be substantially different near the end of a building's life than when compared to the assessment of that building during the design stage [3,4].However, this should not just be a concern for building designers.Those responsible for designing infrastructure for delivering energy to the built environment, and understanding the relationship between energy generation and energy demand, also need to have some understanding of how climate-driven energy use might change.
Designers, both in areas of building design and energy provision, therefore need adequate tools to help inform decisions that, although made now, will have an impact on energy use later within the life cycle of that building.To propose methods of dealing with this dilemma, this paper will collate the findings of two research projects that, respectively, look at incorporating probabilistic climate projections into building modelling and translating such outputs for use in the energy sector.Such tools should aim to improve the robustness and understanding of how energy could be delivered to buildings in a changing climate.The work also intimates the importance that decisions made in areas of the built environment will have on wider energy issues in the future, and how communication between the different actors involved (and the use of the same information/assumptions) is vital.For example, if a prediction is made for (or policy is used to encourage/subsidise) a growth in electric heat pumps and home-charged electric vehicles, assessments of future domestic energy use must account for this in the same way as for assessments of required electricity grid infrastructure.In that way, there is more likely to be a consistency between decisions relating to energy demand (e.g., building design) and decisions made for generating and supplying that energy (e.g., infrastructure required to supply that energy, reflecting the portfolio of generation technologies that might be expected at that time).This paper will describe the use of probabilistic climate projections in building analyses, and therefore will be centred on a country (i.e., UK) that has this information currently available.However, the methodology is directly transferrable to other countries with similar multi-climate descriptions available.The existing approaches of the Low Carbon Futures (LCF) tool and the Adaptation and Resilience In Energy Systems (ARIES) method are reviewed to provide context, but the added functionality achieved from combining the two approaches is explored by describing the overlap between these two projects.Finally, an application is produced that applies both these methodologies (from the two respective projects) in a new way with a specific end goal of multi-climate, multi-building energy assessments.
Climate Change and Buildings
The risks posed to the built environment by climate change are diverse.In some parts of the UK, rising flooding risks are of prime concern [5], whereas other locations are witnessing changes in land management that impact how that land should be used [6].The impact on the thermal performance of buildings, in terms of overheating and/or changes in energy use, have also been investigated by several research projects, for example within the Adaptation and Resilience in a Changing Climate (ARCC) network [7][8][9][10].Recognising this particular risk, and making building users aware of its existence, requires a long-term view as, for example, persistent overheating is something that will become apparent over several years of occupancy in a building, rather than a one-off extreme event (and therefore is visible and experienced in a different way than flooding).
There are various challenges to quantifying whether a building will not function as designed in a future climate.Firstly, the designer must assess what constitutes a failure.For a thermal assessment, this is likely to be associated with either overheating or excess energy use linked to a change in building cooling load.This latter factor can, in theory, cause a cooling plant to be undersized, or might merely change the energy targets of that building-though the point at which the plant is using "too much" energy might become subjective in this case.If overheating is highlighted as the issue, quantifying this is non-trivial.As discussed elsewhere [11,12], although standardised definitions of overheating do exist (such as the threshold of 1% of occupied hours exceeding 28 °C [13], as used in this paper), the exact nature of overheating is more complicated than this, bringing together issues of adaptive comfort, location, and building services that are used to provide that comfort.Building modellers tend to use the more simplified definitions of overheating that can, at least, be quantified and compared between different cases, though adaptive comfort algorithms can be applied to building simulation [14] in an attempt to account for a more nuanced understanding of thermal comfort.
Having defined a failure criteria (whether based on internal temperature or energy consumption), the designer may choose to apply adaptations to the design of the building.This will be aimed at increasing the probability that the building will still function adequately in the future.As discussed later, the climate projections of UKCP'09 enable this probabilistic approach, where the probability of an external (i.e., weather) condition being met can be related to the probability of an internal building metric (e.g., temperature threshold) occurring.This is also likely to have an effect on the energy consumption of the building (both in heating and cooling seasons), and therefore the adaptation or change to that building will have a secondary effect on the infrastructure used to provide that energy.
This therefore leads to a desire to recognise climate-associated risks that relate to the whole energy system of building, energy infrastructure and energy generation.Is there a "perfect storm" of future climate, energy and technology scenarios that a wide range of practitioners (from those working in large-scale energy production to those designing building) should be aware of?For example, it is likely that increased frequency of extreme summer temperatures will lead to higher summertime electrical loads (due to increased cooling), that could be exacerbated by a continued growth in IT equipment (and higher resulting internal heat gains).At the same time, higher temperatures can have a negative effect on the efficiency and performance of power transmission lines in the national grid [15], and it may be that this type of coincidence of different, but related, problems magnifies the overall risk that an existing approach will not be suitable in the future.In this case, the future scenario being proposed will produce a higher electrical demand from a supply structure that is not functioning as efficiently as before.
Of course, predicting such future scenarios is highly subjective.Even if climate model outputs are deemed suitable, making similar assumptions about future energy (generation) and technology (on the demand-side from the built environment) scenarios is open to interpretation.A more suitable approach is to recognise that many different future scenarios are possible (combining future climate, energy and technology options) and any tools and methods used to test these different projections should be flexible enough to account for this potential variation.Therefore, while a final "answer" corresponding to a future scenario can be subjective, the method used to achieve that answer should be robust and justifiable.In terms of application of method, this should also be replicable and sensitive to concerns of the practitioners likely to require the outputs and advice from such methods.
Using Climate Change Projections in Building and Systems Design
The use of future climate projections for building energy assessment is not particularly novel, though its use is rare in more standardised assessments (e.g., relating to energy performance certificates).Previous forms of climate projection, such as that provided by the UK Climate Impact Programme (UKCIP'02) [16], provided deterministic scenarios for assumed future conditions.This enabled weather information that had been formatted for building-based assessments (such as Test Reference Years and Design Summer Years [17]) to be morphed for some future climate scenario.Although this gave, perhaps, a misleading picture as to the nature of climate model outputs, it did allow for relatively simple future climate assessments to be carried out for building performance.
With the introduction of probabilistic climate projections, like UKCP'09, a slightly different approach is required to accommodate this new form of information.Rather than providing a deterministic description of future climate, a spectrum of probabilities is generated for any possible future climate change that is projected by the climate model.As discussed elsewhere [18], in its most basic form, this is not immediately amenable to building modelling and simulation.Design guides for building services engineers [13] have introduced this in a way that can be used for, amongst other applications, heating, cooling and ventilation system sizing; however, this is usually more appropriate for "steady-state" assessments of buildings and, even then, are more suitable for those with some previous experience in the application of future climate projections.
For some assessments of buildings, more detailed modelling is required that requires (for example) an hourly description of weather across an entire year that is in some way indicative of the conditions that a given location might experience, for a given climate scenario (NB This paper will use future greenhouse gas emission scenarios of "Low", "Medium" and "High", respectively corresponding to the B1, A1B and A1F1 scenarios from the Intergovernmental Panel on Climate Change, as introduced elsewhere [1]).This is the case for overheating assessments, where the number of hours above a certain threshold of overheating might be required.It is also necessary for understanding transient energy demand profiles of a building (or buildings), where the peak demand (e.g., gas usage or electricity) and variation in this demand can be crucial for identifying future issues that energy suppliers need to be aware of.
Therefore, with these quite different assessments requiring similar forms of information, there is the potential to manage and translate this probabilistic climate information into a format that is suitable for different applications.Previous work by the authors [19] on the Low Carbon Futures project demonstrated a tool that could emulate dynamic building performance simulations such that the equivalent of thousands of building simulations, using thousands of different weather files generated by the UKCP'09 "Weather Generator" [2], could be used towards decision-making related to the future performance of an assessed building (NB Data underpinning this Weather Generator has been updated since this work was completed, but the suitability of the methodology remains unchanged).Specifically, the tool provided a quantification of risk regarding the probability that a modelled building might "fail" in the future.The definition of failure was centred around overheating, or exceeding a cooling load should mechanical cooling be present, with examples of the output of the tool shown in Figures 1 and 2. For this example, the probability curve (Figure 1) is showing the risk of a dwelling becoming overheated, specifically exceeding an overheating threshold of 1% of hours above 28 °C [13].This dwelling (a two storey, three bedroom detached house) is just used to provide an example of the model output, but is described in previous work [8].A simplified output (Figure 2) shows the probability of just this particular exceedance occurring, essentially returning the values found on the dotted vertical line in Figure 1. Figure 2 also shows equivalent values for an altered version of the building, using adaptations.
This information can therefore, firstly, be used to identify any current/baseline risk of a problem occurring.The tool then suggests how this problem might be exacerbated (or a new problem developed) for different future climate scenarios, where each climate scenario is based on (at least) 100 separate weather files generated from UKCP'09 projections-it is these multiple, equally probable weather files that give the tool the ability to assign a probability for a particular scenario.A designer can then investigate the effect of adaptations (i.e., changes to the building to reduce a projected risk) on those various future scenarios.This particular paper is focussed on the application of this method, but the description and validation of the tool is discussed at length elsewhere [19,20].Establishing an appropriate form of output has also been investigated [21].Caution should still be applied when using such a tool; this is the product of both theoretical building modelling and climate modelling, both of which have quite high levels of uncertainty.It is, however, suggested that such tools can still perform a role in suggesting likely areas of risk, and suitable actions to deal with such risks, within a timescale that is commensurate for most building design projects.The design team, for a non-domestic building project, will already be carrying out a detailed building simulation as part of their analysis.The LCF tool allows the user to convert this single simulation, with just one set of building performance outputs (e.g., temperature or energy profiles), into the multiple-climate simulations necessary for outputs such as Figure 1.The tool is equally applicable to the domestic sector, though such analyses tend to be less common in industry for such buildings.While the LCF project focussed on the modelled (usually overheating) performance of a single building in a future climate, the tool and wider approach can be applied to energy assessments as well.For example, Figure 3 shows a distribution curve of cooling energy consumption for a building, based on probabilistic future climate descriptions.This is analogous to Figure 1, using similar climate information, but rather than "failure" being based around a probability that a number of hours exceeds a threshold internal temperature, failure is now based around the probability that an annual cooling usage (in kWh/year) might be exceeded.Alternatively, the tool can estimate the probability that a peak cooling demand (in kW) might be exceeded, perhaps linked to the size of an installed cooling system.
The mechanism for achieving these projections is identical to the overheating example, except simulation outputs are used that refer to cooling loads, rather than internal temperatures.Taken a step further, and with the ability of the tool to also treat heating loads in the same way, the projected total energy consumption of multiple simulations could similarly be modelled such that probabilistic future
Modelling Future Scenario Projections on Building Demand
The research project ARIES (Adaptation and Resilience In Energy Systems) [22] is investigating the effect of climate, and other changes, on the relationship between energy supply and demand associated with buildings.Climate change can affect the yield from renewable energy generation, the performance of key parts of the energy transmission/distribution infrastructure, and the type of energy demand profiles produced from the building stock.It is important to investigate this confluence of different climate-instigated changes to fully appreciate how adaptations to the building stock should be carried out in the coming decades.This is likely to be particularly true for the electricity grid.Electric vehicles, low-carbon electric heating (specifically heat pumps) and growth in consumer/IT appliances could be stimuli for quite different patterns of electricity use from our building stock in the future.It is impossible to predict exactly what technologies will become commonplace, but it is possible to investigate the sensitivity of energy networks to possible changes based on defined future scenarios.
The approach taken by the ARIES approach is semi-empirical, and is different for electrical demands than for thermal demands (though there is clear crossover between the two).In both cases, as the effect on the wider energy network is of prime importance, the key step is producing demand profiles that could correspond to, at least, a community-scale of buildings.Changes in demand of large regions of building stock are far more important than changes observed (or predicted) at an individual building level.
Electrical Demand Profiles of Groups of Buildings
Electrical demand profiles of buildings, particularly dwellings, can be most interesting when observed at a high temporal resolution.These profiles, such as Figure 4 (discussed elsewhere [23]), exhibit clear features that are directly related to the types of technologies being used throughout the day (such as kettle spikes or refrigeration cycles).When the profiles of many dwellings are aggregated together, a different profile shape emerges (such as Figure 5) that is due to the effect of After Diversity Maximum Demand (ADMD).In simple terms, the aggregated profile is much smoother than that of an individual dwelling profile as individual actions over periods of just a few minutes (seen as almost stochastic events on an individual dwelling profile, such as a boiled kettle) become less noticeable, whereas common practices (such as typical times that people across the country switch lighting on during the winter) become superimposed and create the key characteristics of the aggregated profile.This process of aggregation is crucially important to understanding how changes to the built environment might affect energy provision in the future.The individual dwelling demand profiles allow specific technologies to be observed, and indeed modelled for future scenarios (such as adding to Figure 4 an electric vehicle being charged).The aggregated profiles allow for an understanding of how such changes across a building stock might result in different energy demand characteristics that then have to be met by those involved with energy generation.While it is quite common, for example, for a high penetration of electric heat pumps and electric vehicles to be proposed in future low-carbon scenarios [25], the effect such technologies might have on transient demand profiles and, therefore, peak demand is less well investigated.The techniques proposed here are still in development, but provide an approach to estimating this for future electrical demand profiles.
Future Thermal Demand Profiles of Groups of Buildings
The approach taken by the ARIES project for thermal demand profiling of buildings is more based in theoretical building simulation than empirical data.Building stock models (such as [26]) are well-used tools in many countries for identifying the impact that changes in building design, and the introduction of low-energy retrofits, might have on energy use and carbon emissions of the buildings.These models tend to have relatively simple, steady-state building physics behind their assumptions, and are therefore not designed for the calculation of transient energy demand profiles.As such profiles are required for the objectives of the ARIES project, a transient dynamic model (specifically Integrated Environmental Solutions-Virtual Environment (IES-VE)) is used instead of a traditional stock model.The shortcoming of this approach is that complex, dynamic simulations of a small number of buildings cannot be extrapolated to the building stock of an entire country.However, the methods proposed here will suggest that it is feasible to simulate a sample of buildings that can be upscaled to represent a "region" of buildings and an associated energy demand profile for heating.
Accounting for Multiple Buildings
The compromise proposed by the ARIES project, between a detailed thermal model of an individual building and a stock model of many buildings, is a dynamic, local-scale stock model.Reasonable detail can be provided for building archetypes within the simulation environment, but simplified to such a level that multiple buildings (of the order of hundreds, but then extrapolated to thousands) can be practically simulated at the same time.These archetypes can be thought of as more detailed versions of what might be modelled within a traditional stock model.They can be tailored to the local stock of a region, using available data (see below) to represent the building typology, efficiency levels and activities expected in that region; the reduced geographical scale also means that a single weather file is a more acceptable compromise than for a country-wide assessment.
The methodology is described in detail elsewhere [27] but involves the following key steps: 1. Build up a series of dwelling archetypes based on national stock information (construction, typology, occupancy, heating schedules etc.); 2. Simulate these within a dynamic simulation software (nominally IES-VE, but other software can be used) to obtain a series of transient hourly heating profiles, using a weather file representing the local area under investigation; 3. Choose a more localised area and weight a chosen sample of simulated variants to represent that local stock of dwellings; 4. Post-process data to obtain an aggregated hourly thermal demand profile for a given time period (e.g., heating season or entire year); Figure 6 provides an example of this multi-dwelling aggregated thermal demand, for the equivalent of 1271 simulated dwellings located within an Intermediate Geography Zone (IGZ) in Edinburgh (simulated using the Edinburgh Test Reference Year (TRY) weather file [17]).Scotland is comprised of 1235 IGZs, where these represent a statistical geography used to collate and report data at a much higher spatial resolution (broadly equivalent to Middle Layer Super Output Areas (MLSOA), used in England).Edinburgh is represented by 101 IGZ's, therefore a single simulation exercise using Edinburgh weather data can be used to determine data for these 101 localised areas.
Figure 6.
Example of an hourly aggregated thermal demand profile of 1271 dwellings as simulated using described method.
The profile represented in Figure 6 is based on the information summarised in Table 1, a virtual case-study with a mix of building types, orientations and occupancies-some of the chosen values are for demonstration purposes while others are informed by census data and national averages.Specifically, the chosen mix of buildings for the IGZ applies the following assumptions: • "Build Type" is taken from Census data [28], with an assumed split of 40:60 between end-and mid-terrace dwellings (as census data has no such distinction); • Age bands and construction type are assumed from national averages of the chosen building types [29], based on an assessment of common construction materials relative to building form and construction period; • Orientation is part-randomised but informed by dwelling types."Detached" has no orientation where distribution of glazing is equal on all sides ("N/A" in Table 1), mid-terrace dwellings have an East/West ("E-W") or North/South ("N-S") exposure where stated orientations indicate sheltered sides relative to other connected dwellings.An end-terrace dwelling can be situated East ("E") or West ("W") of an E-W mid-terrace, or North ("N") and South ("S") of an N-S mid-terrace; • Occupancy is informed by census data [28], albeit simplified for demonstration purposes; • House temperatures are based on simplified use of social indicators of reference occupants, again from census data [28].
The approach can be repeated with a completely different selection of criteria to match a specific case-study, so the above assumptions can (and should) vary on a case by case basis.
Accounting for Multiple Future Climates for Multiple Buildings
With a method for accounting for multiple future climates, and a separate method for simulating multiple buildings simultaneously, the logical progression is to combine these methods to produce a probabilistic climate, multi-building simulation environment that can produce aggregated heating requirements (and other types of energy requirements) for a community of buildings in a range of future climate scenarios.This combination of methods is presented in this paper for the first time.
The aggregated heating load of Figure 6 is processed through the LCF tool, where the tool emulates the effect of the weather file used in the original building simulation (of Section 4.2.1) on the aggregated heating load for an entire year.As the tool effectively produces a new aggregated heating profile for every climate file used, this can result in many hundreds of heating load profiles which, if being used to make decisions on building energy efficiency, would not be desired in their entirety.There is also the issue of what a user of a tool wishes to find out; are they concerned with peak heating load exceedance or annual energy consumption?The tool can deal with both these requests, though post-processing is currently needed to draw meaning from such results.
Figure 7 provides just one type of output to demonstrate the potential of the method being described.This shows the percentage probability that a peak heating load (simulated hourly) will be exceeded, for different magnitudes of exceedances.Three future climate scenarios (from UKCP09 definitions) have been used relating to "medium" greenhouse gas emissions for the years 2030, 2050 and 2080, and these are compared to a current baseline climate for Edinburgh (where this is a slightly different source of weather data than the Edinburgh TRY used in Section 4.2.1 but, broadly, describing the same location and timescale).
Nominally, a heating load of 7000 kW (aggregated for all 1271 dwellings) has been chosen for examination-the user of the tool might choose a value that, if exceeded, would be undesirable or simply a high percentile value when compared to the baseline simulation.From Figure 7 it can be estimated, for example, that there is a 100% probability that an aggregated heating load is exceeded for 2% of hours throughout the year.However, the equivalent probabilities for the 2030, 2050 and 2080 scenarios are 77%, 68% and 43% respectively-this represents a significant difference in the frequency of high percentile heating loads.This could, amongst other things, help a designer think about a replacement district heating system or just more generally understand times of high heat demand, and how this might change in the future, in more detail.
As mentioned, there might also be a desire to produce similar output but for annual energy consumption (e.g., percentage probability that the group of dwellings will exceed an MWh/year value and how this might reduce for future climate scenarios).This is a simpler task in that it does not require the interrogation of detailed hourly heating loads, merely a summed value over the entire year.The example of output selected (Figure 7) is therefore chosen to highlight the ability of the tool to use hourly heating (or indeed cooling) loads that emanate from a large number of buildings.
There are current limitations when applying the LCF tool to multiple dwellings.The tool was originally designed for single dwellings where a specific occupancy (i.e., an hourly profile of occupant presence in the dwelling) could be input.However, when using an aggregated profile of hundreds of dwellings, the ideal occupancy profile would indicate, for example, the percentage of dwellings occupied every hour of the day.The tool does not allow for such an input to be accommodated at present, though later versions are planned to have such a feature to better distinguish between times of low and high occupancy.Despite this, the current version of the tool does demonstrate the type of output possible, allows researchers to think about the type of information that can be processed and, ultimately, what functions could be offered to practitioners.
Decision Tools for Future Design
Combining the approaches of the LCF and ARIES projects gives Figure 8.This, in a simplified form, presents the ability of the tools/methods to influence decision making for future design.The approach is flexible enough to account for different future scenarios, and can assist a designer in choosing features (or adaptations) that might suit a building (or buildings) for a future climate.Furthermore, the impact that this might have on an energy network, for that given future scenario, can be explored.As previously discussed, more work is required to expand the application of this method (particularly in terms of spatial and geographical scale), but the concept and initial simulations suggest the information provided would be suitable for practitioners in these respective industries.
Discussion
It is important to recognise common practice in industry when it comes to future building design, and the methods used in this area.Though several examples of applying future climate information to building design exists (albeit mostly in the non-domestic sector for the UK), there are different avenues that a designer might take depending on their end goal.The use of future probabilistic climate information is just one avenue, and the described research in this paper (and other sources referenced) provides a rationale for why and how such information can be used.
For building modellers interested in future climates, it is usually the case that the focus is on overheating (or under-cooling) of buildings, and the LCF tool provides a function in this respect.Applying the modelling to wider issues of energy supply, and factoring in future technologies that are likely to have an effect on electrical and thermal demand profiles across a country, is less common.The initial approach proposed here, collating the work of the LCF and ARIES projects, is therefore particularly novel and requires further work for such a method to be incorporated into industry practice.The work is not merely about merging two different tools/methods; it also provides a means for merging the different, but related, disciplines of building and energy system engineering.
It would also be remiss to ignore the fact that changes to the built environment, and the energy infrastructure serving this built environment, occur over timescales of decades.It could be argued that this gradual change of building stock, adaptations, mitigation refurbishments and resulting energy demand, will mean that both building designers and those involved with energy provision have time to respond, even within the context of changing climate and building technology.However, to ensure that designers are heading in the right direction, decisions for future decades need to be thought about now, and tools such as those proposed here can play an important part in this.
At the same time, the outputs of climate models should be used responsibly.Projections such as UKCP'09 provide a new approach in that they are slightly more faithful to the outputs of the climate models, but then have the disadvantage of being more complex and less immediately accessible than other projections of climate.The processing of this form of climate projection, in a way that it becomes useful for an end user, is paramount to ensuring that it becomes a standard and practical approach to estimating how to approach climate change in the built environment.
Conclusions
The work of the LCF and ARIES projects have demonstrated how probabilistic climate projections can be used to provide a quantified projection of climate-associated risk for both buildings and the energy systems used to serve those buildings.The methods have been developed with end-users in mind, harnessing the results of advanced dynamic simulation models for building performance and climate model outputs.The aim is to produce a simple, usable framework that encapsulates the complexity of climate and building modelling, without allowing that complexity to decrease the functionality of the method.Ensuring that decision-making is informed by the flexibility of such methods to multiple climate eventualities will increase the likelihood of buildings being designed that are low-carbon, resilient and able to function in a future climate.
Although producing outputs of increased complexity, requiring processing and translation if they are to be used in industry, future climate assessments of buildings are more representative of actual climate models if the metric of probability is used.This added complexity does not necessarily make the assessments less useful for industry.This specific study combines two validated methods and shows how dynamic simulation can be used to estimate the performance of groups of building across many different future climate scenarios, but without the need for multiple building simulations.The suite of integrated tools presented here attempt to manage the problems of calculation time and clarity of output simultaneously, with future work continuing to test the applications (and provide further validation) of the combined method.The results suggest that the equivalent of thousands of simulations does not necessarily equate to vastly increased simulation/calculation time when carrying out future transient energy assessments of groups of buildings.
Figure 1 .
Figure 1.Example of Low Carbon Futures (LCF) tool output for quantifying risk of future building failure in an example dwelling.
Figure 2 .
Figure 2. Simplified risk matrix from LCF tool for quantifying risk of future building failure in an example dwelling.
energy demand profiles of that building could be constructed.This could be used to model the impact of future technologies and future uses of buildings, all within the context of climate change.
Figure 3 .
Figure 3. Cooling energy consumption projections in a future climate for an example building.
Figure 4 .
Figure 4. Example of an individual dwelling demand profile at minutely resolution taken from an empirical dataset [24].
Figure 5 .
Figure 5. Example of an aggregated electrical demand profile of nine different dwellings at minutely resolution from an empirical dataset [24].
Figure 7 .
Figure 7. Result of future probabilistic heating load assessment of case-study dwellings.
Figure 8 .
Figure 8. Use of LCF and Adaptation and Resilience In Energy Systems (ARIES) methodologies for identifying future forms of energy provision for the built environment.
Table 1 .
Overview of building and occupancy types used to construct the aggregated thermal demand profile of Figure6. | 7,988.4 | 2015-08-28T00:00:00.000 | [
"Engineering"
] |
Congestion Control for Nonlinear TD-SCDMA Discrete Networks Based on TCP/IP
—A successive approximation approach (SAA) is developed to obtain a new congestion controller for the nonlinear TD-SCDMA network control systems based on TCP/IP. By using the successive approximation approach, the original optimal control problem is transformed into a sequence of nonhomogeneous linear two-point boundary value (TPBV) problems. The optimal control law obtained consists of an accurate linear feedback term and a nonlinear compensation term that is the limit of the solution sequence of the adjoint vector differential equations. By using the finite-time iteration of nonlinear compensation term of optimal solution sequence, we can obtain a suboptimal control law for TD-SCDMA network control systems based on TCP/IP.
I. INTRODUCTION
It is well known that the insertion of the TD-SCDMA network based on TCP/IP in the feedback control loop makes the analysis and design of a network control systems complex because the network imposes an undetermined communication delay [1] . Therefore, conventional control theories with many ideal assumptions must be revaluated before they can be applied to network control systems. For instance, the stochastic optimal controller and the optimal state estimator of a network control system whose network induced delay is shorter than a sampling period have been proposed by Nilsson [2] . In Ref. [3] a model-based network control system was introduced. This control architecture has as main objective the reduction of the data packets transmitted over the network by a networked control system.
TD-SCDMA Network control systems based on TCP/IP can be described by nonlinear systems [4] . An amount of literature related to the analysis and controller design of such systems has been developed over the past decades. The stability region estimation and controller design for nonlinear systems with uncertainties are considered [5] . While for the quadratic cost functional in the state and control, the optimal state feedback control problem often leads to solving a Hamilton-Jacobi-Bellman (HJB) equation or a nonlinear two-point boundary value (TPBV) problem. But for the general regulation problem of nonlinear systems, with the exception of simplest case, there is no analytic optimal control in explicit feedback form. This has spirited up researchers to develop many methods to obtain an approximate solution to the HJB equations or the nonlinear TPBV problems as well as obtain a suboptimal feedback control [6][7] .
Since TD-SCDMA network control systems based on TCP/IP are an integrated research area, which is not only concerned about control, but also relevant to communication, we must combine the knowledge of control and communication together to improve the system performance. Following this direction, in this paper, we address a novel scheme that integrates control technology with communication technology for a class of nonlinear network control systems [8] . We consider the TD-SCDMA networked control systems based on TCP/IP consisting of a collection of nonlinear plants whose feedback control loops are closed via a shared network link, as illustrated in Figure 1. All sample values of plant states are transmitted in one package [9] .The k-th plant is given by
II. PROBLEM FORMULATION
where x is an n -dimensional real state vector, u an rdimensional real control vector, B an r n ! constant matrix, 0 x a known initial state vector. Assume that Nonlinear function sequence g may be expanded into the series form where f is the nonlinear term whose order size is larger system (1) may be rewritten as The control objective, in an optimal control sense, is to find a control law ) ( * k u , which may make the quadratic performance index where R is an r r ! positive-definite matrix and Q Q f , are n n ! semi-positive-definite matrices.
III. PRELIMINARIES
As we know, we may get the optimal control law of the quadratic performance index (4) if and only if the system in (1) satisfies the following two-point boundary value problem: with the boundary conditions: Since (5) is a nonlinear two-point boundary problem, in a general way, it is difficult to get the solution whether the exact solution or the numerical solution.
We will propose a sensitivity approach to simplify the two-point boundary value problem in (5) and help get the optimal control law. Construct the following twopoint boundary value problem, in which a sensitive with boundary conditions . Obviously, when 1 = ! , the two-point boundary value problem in (6) (i denotes the i th-order derivative of the series with respect to ! when 0 = ! . In order to guarantee convergence for the series in According to the same reasoning process, we may also get the conclusion that Substituting (7), (11), and (12) into the two sides of (6), we may obtain ! " Substituting (17) into (16), we obtain the 0th-order optimal control law as follows: are known functions which are the solutions obtained in the (i-1)th iteration, two-point boundary problem in (20) is a linear nonhomogeneous one. In order to solve this problem, let V. AN ILLUSTRATED EXAMPLE Consider the optimal control problem for a bilinear model of a TD-SCDMA networked control system based on TCP/IP described by (1) and (3) The state variables 1 | 1,199.8 | 2015-08-31T00:00:00.000 | [
"Business",
"Mathematics"
] |
A novel computer-aided diagnostic system for accurate detection and grading of liver tumors
Liver cancer is a major cause of morbidity and mortality in the world. The primary goals of this manuscript are the identification of novel imaging markers (morphological, functional, and anatomical/textural), and development of a computer-aided diagnostic (CAD) system to accurately detect and grade liver tumors non-invasively. A total of 95 patients with liver tumors (M = 65, F = 30, age range = 34–82 years) were enrolled in the study after consents were obtained. 38 patients had benign tumors (LR1 = 19 and LR2 = 19), 19 patients had intermediate tumors (LR3), and 38 patients had hepatocellular carcinoma (HCC) malignant tumors (LR4 = 19 and LR5 = 19). A multi-phase contrast-enhanced magnetic resonance imaging (CE-MRI) was collected to extract the imaging markers. A comprehensive CAD system was developed, which includes the following main steps: i) estimation of morphological markers using a new parametric spherical harmonic model, ii) estimation of textural markers using a novel rotation invariant gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM) models, and iii) calculation of the functional markers by estimating the wash-in/wash-out slopes, which enable quantification of the enhancement characteristics across different CE-MR phases. These markers were subsequently processed using a two-stages random forest-based classifier to classify the liver tumor as benign, intermediate, or malignant and determine the corresponding grade (LR1, LR2, LR3, LR4, or LR5). The overall CAD system using all the identified imaging markers achieved a sensitivity of 91.8%±0.9%, specificity of 91.2%±1.9%, and F\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$_{1}$$\end{document}1 score of 0.91±0.01, using the leave-one-subject-out (LOSO) cross-validation approach. Importantly, the CAD system achieved overall accuracies of \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$88\%\pm 5\%$$\end{document}88%±5%, 85%±2%, 78%±3%, 83%±4%, and 79%±3% in grading liver tumors into LR1, LR2, LR3, LR4, and LR5, respectively. In addition to LOSO, the developed CAD system was tested using randomly stratified 10-fold and 5-fold cross-validation approaches. Alternative classification algorithms, including support vector machine, naive Bayes classifier, k-nearest neighbors, and linear discriminant analysis all produced inferior results compared to the proposed two stage random forest classification model. These experiments demonstrate the feasibility of the proposed CAD system as a novel tool to objectively assess liver tumors based on the new comprehensive imaging markers. The identified imaging markers and CAD system can be used as a non-invasive diagnostic tool for early and accurate detection and grading of liver cancer.
. To the best of our knowledge, the developed CAD system is the first of its kind to integrate novel morphological markers with rotation invariant textural markers and functional markers to differentiate malignant from intermediate, and benign tumors and determine the grade of the tumor to enable optimal medical management.
Materials
Study design and patients population Liver tumor patients with a high risk of developing HCC without a history of loco-regional treatment plan were included in this study. Patients with cirrhosis, chronic hepatitis, and patients with prior HCC were included. For multiple liver tumors in the same patient, separate analyses were performed for each tumor. The methods were carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the University of Louisville, USA and Mansoura University, Egypt. Contrast-enhanced MR images were obtained for 97 participants in the period between November 2018 and January 2021. All participants were fully informed about the aims of the study and provided their informed consents. However, two patients were excluded from the study due to withdrawal of consent. The remaining 95 patients with liver tumors (M = 65 and F = 30) ranged in age from 34 to 82 years old (average 56 y ± 10 y). Using a secondary work station (Phillips Advantage windows workstation with functional tool software), three expert radiologists, blinded from each other, with more than 10 years of hands-on experience in liver imaging analyzed all CE-MR images of all participants according to LI-RADS v2018 5 . The image analysis was performed for four major markers including: nonrim arterial phase hyper-enhancement (APHE); non-peripheral wash-out appearance; enhancing capsule appearance; and size of the liver tumor. For each subject, three decisions were provided and the final decision was taken based on an agreement of at least two of them. Among the participating patients, 38 MR data acquisition protocol CE-MR images were obtained for the aforementioned patient population (N = 95) using a 1.5T Philips Ingenia scanner with phased-array torso surface coil. Extracellular contrast agent (gadolinium chelates) with a dose of 0.1 mmol/kg was injected at rate of 2 ml/s using an automated MR injector followed by a 20 ml saline flush. The abdomen MR scanning includes four different phases: pre-contrast (at t = 0 s), late arterial (at t = 35 s), portal venous (at t = 50 s), and delayed-contrast phase (at t = 180 s). All patients were asked to hold their breath during image acquisition to minimize possible respiratory effects. MRI acquisition parameters are summarized in Table 1.
Methods
The proposed CAD system to detect and grade liver cancer tumors is illustrated in Fig. 1. The CAD system performs the following steps: (i) extract morphological markers from the segmented liver tumors by using a new parametric spherical harmonic model, (ii) calculate textural markers that have been estimated by using a novel rotation invariant models, (iii) estimate the functional markers that have been calculated by estimating the wash-in/wash-out slopes to quantify the enhancement characteristics across different CE-MR phases, and (iv) model a two-stage random forest-based classification using the fusion of the identified markers to classify the liver tumor to benign, intermediate, or malignant and its corresponding grade (LR1, LR2, LR3, LR4, or LR5).
Features/markers extraction. The features/markers extraction step is a core component of the machine learning pipeline. A marker in machine learning is an independently measurable property or attribute of an observation. Selecting good markers that clearly distinguish between object classes increases the predictive power of the machine learning model. So, this process aims to reduce the raw data into standardized, distinctive, and machine understandable markers that the learning algorithm can use to solve the main classification problem. In consultation with our medical collaborators, we had decided upon several categories of markers that are suited to the nature of our problem. Three different types of markers are extracted from the segmented liver tumors to provide a quantitative discrimination between different types and grades of liver tumors, namely: (i) morphological markers based on spherical harmonics (SH) that have the ability to describe the morphology complexity of the liver tumors, (ii) functional markers based on the calculation of the wash-in/wash-out slopes to quantify the enhancement characteristics across different phases, and (iii) textural markers, namely; the first-order histogram markers, novel rotation invariant second-order markers based on gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM), to capture texture differences between different types and grades of liver tumors.
Imaging markers. In order to enhance the performance of extracting/estimating morphological, textural, and functional imaging markers, all liver tumors were manually and accurately segmented using in-house software by two expert radiologists with more than 10-years of hands-on experience in medical image analysis, and consequently, 3D liver tumors objects were constructed (Fig. 2). To provide a precise discrimination between different types and grades of liver tumors, we characterized liver tumors objects by three different types of distinguishing image markers, namely; morphological markers, textural markers, and functional markers. These markers are described below in detail. Morphological Markers: To improve the sensitivity and specificity of early liver cancer diagnosis, new parametric morphology markers that can describe the complexity of the detected liver tumor were identified. The motivation for using morphological markers relies on the hypothesis that malignant tumors have greater growth rates and more complex shapes than benign tumors. As demonstrated in Fig.3, the morphology and surface www.nature.com/scientificreports/ complexity of liver tumors vary based on the malignancy status and its corresponding grade. The utilization of the morphology description will enhance the automated diagnosis capabilities. However, accurate modeling is critical in achieving such enhancement. In the proposed framework, we used the state-of-the-art spectral analysis employing spherical harmonics (SH) 22 to extract morphological markers for diagnosing liver tumors. Choosing a point inside the tumor as the origin of a spherical coordinate system, the tumor's surface may be considered a function of polar and azimuthal angle, which can be expressed as a linear combination of basis functions Y τβ defined on the unit sphere. The SH modeling builds a triangulated mesh approximating the tumor's surface, then maps it to the unit sphere. The mapping approach, using an attraction-repulsion technique 23 , provides precise modeling, as it keeps unit distance between each re-mapped node and the origin, while preserving distances between neighboring nodes. Let C α,i , with C α,i = 1 , be the coordinates of node i on iteration α of the attraction-repulsion algorithm, where i ∈ {1, . . . , I} . Let d α,ji = C α,j − C α,i denote the displacement from node i to node j, so the Euclidean distance between nodes i and j is d α,ji = d α,ji . Finally, let J i denote the index set of neighbors of node i in the triangulated mesh. Then the attraction step updates the position of each node to keep it centered with respect to its neighbors: where attraction factors C A,1 and C A,2 are parameters of the algorithm. The repulsion step subsequently inflates the whole mesh to ensure that it does not become degenerate, as the attraction step by itself would allow nodes to become arbitrarily close to one another.
where, repulsion factor C R is once again a parameter of the algorithm. Finally, the points are projected back onto the unit sphere, C α+1,i = C ′′ α+1,i /�C ′′ α+1,i �. At the terminal iteration α f of the Attraction Repulsion algorithm, the surface of the liver nodule is in a oneone correspondence with the unit sphere. Each node C i = (x i , y i , z i ) of the original mesh has been mapped to a corresponding point C α f ,i = (sin θ i cos φ i , sin θ i sin φ i , cos θ i ) with polar angle θ i ∈ [0, π] and azimuthal angle φ i ∈ [0, 2π) . It then becomes possible to describe the nodule by an SH series. In this representation, lower order harmonics give the rough extent of the nodule, while higher order harmonics provide the finer details of the surface. The SHs are generated by the solving an isotropic heat equation for the nodule surface considered as a function on the unit sphere. The SH Y τβ of degree τ and order β is defined as: where c τβ is the SHs factor and G |β| τ is the associated Legendre polynomial of degree τ and order β. Finally, the liver tumor object is reconstructed/approximated from the SHs of Eq. 3. Benign tumors are represented using a lower order combination of SHs as their morphology are less complex, while malignant tumors are represented using higher-order combination of SHs as their morphology are more complex. Therefore, the total number of markers quantifying the morphological complexity of the detected tumors is the number of the SHs www.nature.com/scientificreports/ used to reconstruct the original tumor. In this study, the sufficient number (70) is used to correctly reconstruct any tumor, and after which there are no significant changes in the approximations. For each approximation, the reconstruction error between the original mesh and the approximated shape is calculated. Due to the unit sphere mapping, for each approximation, the original mesh for the tumor is inherently aligned with the mesh of the approximate shape, and the sum of the Euclidean distances between the corresponding nodes gives the total error between both the mesh models. By calculating this for the 70 approximations of each tumor, 70 numerical values (reconstruction errors) are obtained, which quantitatively describe the morphology of the tumor. Figure 4 shows the morphology approximation for five liver tumors (two benign, two malignant, and one intermediate).
Summary of the Attraction-Repulsion algorithm is provided below. Initialization: • Triangulate the surface of the nodule.
• Smooth the triangulated mesh with Laplacian filtering.
• Initialize the spherical parameterization with an arbitrary, topology-preserving map onto the unit sphere.
Textural markers To improve the sensitivity and the specificity of early liver cancer diagnosis, a comprehensive textural analysis was performed. In particular, first and second order textural markers that can describe the inhomogeneity/homogeneity of the detected liver tumor were extracted from the four different phases/sequences, namely; pre-contrast, late arterial, portal venous, and delayed-contrast phase.
The motivation for using textural markers relies on the hypothesis that malignant tumors appearance is inhomogeneous compared to benign tumors [24][25][26][27][28][29][30] . Figure 5 demonstrated the differences in inhomogeneity between benign and malignant tumors which supported our hypothesis.
For the first order, a normalized empirical histogram ( Fig. 6) was used to estimate all the first-order textural markers that are shown in Table 2 31 . The mathematical formulas of these markers are summarized in Supplementary 1, Table A1.
Since the first order texture might be sensitive to noise, two types of second order textural markers (gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM)) were used to capture the inhomogeneity in liver tumors 32,33 .
GLCM: is a matrix that considers the spatial relationships between voxels (the reference and the neighboring voxels) at a neighborhood block. Specifically, GLCM accounts for how frequently a pair of gray-level intensity values appears adjacently within the object. These frequencies are calculated for all gray-level possible pairs according to the gray-level range of the targeted object. The construction of the GLCM starts with specifying the range of gray-levels of the object and normalizing observed gray-level values to the desired range. Then all possible pairs are determined representing the matrix rows and columns (each element within the matrix is related to two gray-level values representing the row and the column of this element). Finally, the value of each element in the matrix is computed by examining how each voxel is different from its neighbors. The neighborhood block is defined by a distance ≤ √ 2 making the calculations rotation invariant as shown in Fig. 7. During analysis, gray-level values were normalized to the range of [0, 255], yielding a GLCM with size of 256×256.
After constructing the GLCM, the matrix is normalized such that the sum of all elements is 1 in order to extract the discriminating textural markers 31,32 . Table 2 shows these markers. The reader is referred to Supplementary 1, Table A2 for the equations used to obtain these markers. www.nature.com/scientificreports/ GLRLM: In addition to calculating the frequency of occurrence of voxel pairs represented by GLCM, GLRLM measure the voxels' connectivity by looking at voxel runs. It examines how many times each gray-level value appeared consecutively in a run of voxels. This matrix has its number of rows equal to the gray-level range and number of columns as the largest possible run which is the largest dimension of the object (typically appears in the XY-Plane). Hence, each element in the matrix indicates the frequency of a specific gray-level value (the element's row index) in a specific run length of consecutive voxels (the element's column index). Each structure had a matrix with 256 rows (normalized gray-level range) while the number of columns is different amongst objects. Here, we looked for runs of consecutive horizontal voxels in the XY-Plane (in the same layer) and vertical run of voxels is examined in the Z-Plane (among different layers). Then, distinguishing measures of the GLRLM describing the texture of our structures were computed 31,33 . These markers are shown in Table 2. The reader is referred to Supplementary 1, Table A3 for the equations used to obtain these markers.
Functional markers. Liver tumor's functionality can be quantified by hyperenhancement (wash-in) and
hypointensity (wash-out). The wash-in can be estimated in the late arterial phase while the wash-out is estimated in the portal venous phase and/or delayed phase 34,35 . To compute the functional markers, we studied the gray-level intensity changes across the post contrast phases extracting three features. These features are math- www.nature.com/scientificreports/ ematically expressed by the gray-level slope in each phase. These slopes are calculated by getting the gray-level intensity change rate over the time of each phase. Typically, positive slopes for wash-in and negative for washout. Malignant tumors have a higher and more rapid wash-in and wash-out slopes than those of intermediate or benign tumors. Figure 8 shows the wash-in and wash-out slopes, for a malignant, an intermediate, and a benign tumor during the three post-contrast phases.
Features/markers selection. Features/markers selection is a method of selecting the most desirable and appropriate characteristics from a large collection of potential markers. This process results in m markers chosen out of a set of n possibilities, where m < n , and m is the smallest set of significant and important markers. Two approaches were applied here, namely, Wrapper approach 36,37 and Gini impurity-based selection 38 .
Wrapper approach The selection process in wrapper methods is based on repeatedly running a particular machine learning algorithm on a given dataset. Comparing the results of the algorithm, provided various marker subsets on input, the wrapper method selects the combination of markers giving optimum performance. Note the specific performance criterion depends upon the problem being solved. The wrapper method follows a greedy search strategy through the space of possible markers. We performed two different wrapper approaches to find the optimal set of markers: (i) Forward Selection: Beginning with a null model, single-feature models are fitted one at a time, and the marker with the lowest p-value is chosen as optimal. Each of the remaining markers is combined with the one previously selected in a two-parameter model, and the additional marker with the lowest p-value is again chosen. Then each remaining marker is combined in turn with the previous two to find the third optimal marker, and so forth. Forward selection thus generates models with 1, 2, . . . , m markers, terminating when none of the remaining candidate markers have a p-value less than a predetermined threshold. Algorithm 1 summarizes the forward selection approach. Here, we applied the forward selection with two significance thresholds (0.05 and 0.1). (ii) Bi-directional elimination (Step-wise Selection): It is similar to forward selection, but the difference is that it also tests the importance of already added markers before introducing a new one, and if it considers any of the already selected markers irrelevant, this marker is simply eliminated. The Table 2. First and second order textural markers.
Textural marker Definition
First order
Mean ( µ)
Represents the gray-level values balance point of each object. It is calculated simply by getting the average gray-level value for each object.
Variance
Describes the gray-level distribution around our computed Mean.
Skewness Expresses how the gray-level values are asymmetrically distributed around the Mean of the object.
Kurtosis
Measures to what extent the gray-level values are concentrated towards the tails of the distribution.
Entropy
Expresses the amount of randomness within each structure gray-level values.
CDFs
Return the cumulative distribution function of the histogram density values. This is calculated along the whole object and getting the cumulative sum of the gray-level values (Normalized to [0 to 1] at multiple positions (from 0 to 100% of the object with a 10% step).
Percentiles
Calculate the percentiles of gray-level values for the corresponding CDFs.
Second order
Contrast
Measures the disparity in gray-level values between neighbors.
Dissimilarity
Finds to what extent voxels are different from their neighbors.
Homogeneity
Expresses the inverse difference moment among neighbors.
Energy
The square root of the ASM. Here, we also applied the bi-directional elimination with two thresholds of significance (0.05 and 0.1).
Scientific Reports
Gini impurity-based selection In a data science workflow, Random Forests are also used for features/markers selection. This resulted from the fact that the tree-based approaches used by random forests naturally rely on how well the purity of the node is enhanced. This suggests a drop in impurity over all trees, called Gini impurity. At the start of the trees, nodes with the largest decrease in impurity occur, while nodes with the least decrease in impurity occur at the end of the trees. Thus, we can build a subset of the most significant markers by pruning trees below a given node. Algorithm 3 shows the steps of applying this selection approach. To apply this algorithm, we performed the selection process in two different scenarios (combined and separate markers selection). For the combined selection, we applied the Gini impurity-based approach on the whole set of markers to find the optimal set of markers to use. While for the separate method, we performed the selection on the morphological, textural, and functional markers separately to find the optimal markers at each group. Then, we combined these limited markers sets to build the final, optimal marker set. ). First, classification performance was assessed using individual markers, namely, SHs morphological markers, the first order textural markers, the second order GLCM textural markers, the second order GLRLM textural markers, and wash-in/wash-out slopes functional markers. The categorized numbers and description of these discriminating markers is detailed in Table 3. Subsequently, all the markers were integrated by using concatenation methods obtaining combined markers. The aforementioned ML classifiers were used for the final diagnosis. A grid search algorithm along with the diagnostic accuracy as an optimization metric were employed to find the optimal set of different ML classifiers' hyper-parameters. The optimal sets of hyperparameters for each classifier are as follows: RFs (class weight='balanced' , criterion='gini' , max depth=30, min samples leaf=5, min samples split=2, n estimators=100), kNN Fine (leaf size=30, metric='minkowski' with power of 2, n neighbors=5, weights='uniform'), SVM (regularization parameter = 1, break ties=False, cache size=200, decision function shape=' ovr' , degree=3, gamma=0.001, max iter=-1, tol=0.001), NB (alpha=0.5, binarize=0.0, class prior=None, fit prior=True), and LDA (n components=1, priors=None, shrinkage=0.52, solver='lsqr' , store covariance=False, tol=0.0001). Given a liver tumor CE-MR series, one can obtain the final diagnosis (LR1, LR2, LR3, LR4, or LR5) of that tumor by applying the developed CAD system steps outlined in Algorithm 4 below.
Experimental results
The diagnostic accuracy of the proposed CAD system was evaluated using a leave-one-subject-out (LOSO), randomly stratified 10-fold, and randomly stratified 5-fold cross-validation approaches. LOSO relies on training the classification model with all observations except one subject set aside for testing purposes. The classification model is then reinitialized before the next iteration, and the observation previously left out is included in the training data, leaving the following subject out for testing purposes. This process is repeated for 95 times (i.e., the total number of subjects in our dataset), and at each iteration, the training and the testing samples are of size 94 and 1, respectively. For the stratified k-fold cross-validation, a fraction 1 k × 100% of the data are randomly selected and left for the testing purposes, while the remaining 1−k k × 100% part of data are used as the training data. The classification model is then reinitialized in the next iteration, and the subjects left in the previous iteration are included in the training, leaving the next 1 k × 100% part of subjects aside for testing purposes. This process is repeated for k iterations. To assure the robustness of the developed model, we performed the randomly stratified k-fold cross validation approach with two values of k, i.e. 10 and 5.
It is important to keep in mind that in the implementation of k-fold cross-validation, stratification was guaranteed to help reduce both bias and variance. The technique of stratification not only enables randomization, but also ensures that the training/testing sets would have the same proportion of each class as in the entire data set. In our case, stratification means that 40% of the training/testing sets will be derived from benign subjects (N = 38), 20% from intermediate (N = 19), and 40% from malignant cases (N = 38).
Two classification stages were performed to obtain the final diagnosis. In order to quantitatively express the classification performance, each classification process was repeated 10 times and the obtained results were reported in terms of mean±standard deviation. The first classification stage aimed to differentiate between benign (LR1-2), intermediate (LR3), and malignant tumors (LR4-5). The performance of the developed CAD system was first assessed using the obtained individual markers, namely; morphological markers, textural markers, and functional markers along with several ML classifiers. To highlight the advantage of integrating these individual markers, we compared the diagnostic performance of the combined model with these individual models using the following metrics: sensitivity, specificity, and F 1 score 39,40 , where TP is the number of correctly classified malignant subjects; TN is the number of correctly classified benign subjects; FP is the number of benign and intermediate subjects misclassified as malignant; and FN is the number of malignant and intermediate subjects misclassified as benign. The combined model achieved sensitivity of 91.8%±0.9%, specificity of 91.2%±1.9%, and F 1 score of 0.91±0.01 using the RFs classifier outperforming the performance of all individual models as shown in Table 4. This enhanced diagnostic performance due to the integration process enables the algorithm to account for different aspects of quantifying markers (morphological, textural, and functional).
To find the optimal classifier for the developed CAD system, we compared the obtained diagnostic results of the combined model using several ML classifiers (i.e., RFs, KNN Fine , SVM Cub,Quad , NB, and LDA) along with different validation approaches (LOSO, 10-fold, and 5-fold www.nature.com/scientificreports/ respectively, for the 10-fold, and 87.0%±1.8%, 89.3%±2.6%, and 0.88±0.02, respectively, for the 5-fold, the RFs proves itself as the best among the used different ML classifiers. Table 5 summarizes the comparison results between the performances of different ML classifiers and approaches. The classification performance obtained by RFs 41,42 can be justified by that they are well-known robust machine learning classification techniques that have been widely used in solving medical classification problems 43 . RFs is an example of an ensemble learner which is built on bagging a collection of decision trees and random subspace method. This bagging mechanism helps to find all possible correlations between the decision trees in an ordinary bootstrap sample. When some markers are found to be strong predictors to target output, these markers will be selected in many decision trees and become correlated. Once the training process is performed, the final results are normally obtained by majority vote or model averaging mechanism 41,42 . RFs classifier was selected for use in the proposed CAD system as it outperformed all other classifiers that were tested. For the second classification stage, grading for each class was performed: benign class (LR1 vs. LR2) and malignant class (LR4 vs. LR5). All markers were combined together and fed to an RFs classifier to obtain the final diagnosis using LOSO, 10-fold, and 5-fold cross-validation approaches. As shown in Table 6 (using LOSO approach), an overall accuracy of 89.47±2.35% was obtained for grading the benign tumors, while 88.95±1.58% overall accuracy was obtained for grading malignant tumors. Finally, the results from both stages were combined to obtain the final diagnosis result, and grading of the tumors into LR1, LR2, LR3, LR4, and LR5. It is worth mentioning that the developed CAD system using a two-stage RFs classification model (see Fig. 1) provided more enhanced diagnostic performance than applying a single stage RFs classification as evidenced by the final confusion matrices shown in Fig. 9.
To highlight the advantages of utilizing the integrated markers over the reduced markers, we compared the final diagnostic performance obtained by the developed CAD system with that obtained after applying six different scenarios of features/markers reduction, namely, Table 7. For a favorable comparison, the complete confusion matrix of the developed CAD system is shown in Fig. 9(a) and the confusion matrices of the aforementioned scenarios are shown in Fig. 10.
To appreciate the diagnostic performance obtained by the developed CAD system, we applied two different approaches from the literature 18,21 on our dataset (N = 95) and the intended classification problem of liver tumor grading (LR1. vs. LR2. vs. LR3. vs. LR. vs. LR5) for a fair comparison. Then, we compared the final diagnostic results obtained by the developed CAD system with those obtained by the two different approaches. As documented in Table 8 and shown in Fig. 11, the diagnostic performance of developed CAD system outperformed all the aforementioned approaches for liver tumor grading.
Discussion and conclusions
HCC has a high mortality at later stages. Effective identification of a comprehensive screening system at early stages is important and must be tailored to a broader algorithm for the management. Professional research groups have advocated recommendations to aid physicians and radiologists to handle HCC. LI-RADS aims to standardize the HCC-related lexicon and to create an image algorithm to boost the homogeneity of data collection and image reporting. The clinical gold standard for HCC diagnosis is image analysis performed by blinded Figure 9. The overall confusion matrix obtained for the developed CAD system using LOSO approach utilizing the integrated markers for grading the tumors into (LR1, LR2, LR3, LR4, and LR5) using (a) a two-stage RFs classifier (proposed classification approach) compared to (b) a one-stage RFs classifier. www.nature.com/scientificreports/ independent expert radiologists for arterial phase hyperenhancement, wash-out appearance, enhancing capsule appearance, and size [3][4][5][6][7][8][9]44 .
On the other hand, radiogenomics and novel imaging developments are designed to understand HCC's heterogeneity through imaging, and to facilitate individualized care for each tumor unique signature. Advanced algorithms and trends approved their ability to enable greater precision in diagnosis and grading, along with potential guidance on personalized health care [12][13][14][15][18][19][20][21][45][46][47] .
In this study, the extracted tumor lesions from the CE-MR images at different phases were combined in 3D objects. These 3D objects representing the subjects at different phases (4 phases per subject) consist of multiple voxels lying in the lesions and parenchyma of the surrounding liver. Each voxel displays a gray-scale value based on its signal strength which is influenced by the various histopathological factors. Therefore, in lesions, 3D arrays of gray-scale values may show complex geometric patterns that are distinctive to tumor forms, although they may be visually unrecognizable. For this reason, we performed texture analysis in our study. Texture analysis effectively describes how values of voxels depends on the gray-level of each voxel in a specific area. This texture information had proved itself to have great impact on the classification techniques performance in multiple studies [24][25][26][27][28][29][30] . In this study, we worked on first and second order texture analysis and extracted textural markers using different methods and algorithms. First order texture analysis explains how voxel intensities are distributed among tumor lesions at each phase. Thus, these descriptors depend basically on the independent value of each voxel. The computed first order markers are mean, variance, standard deviation, skewness, kurtosis, entropy, cumulative distribution functions, and gray-level percentiles 31 . Second order texture analysis algorithms vary from those first order algorithms in that they are essentially based on the neighborhood relationship between voxels. Such algorithms are spatially variant which implies that voxel arrangements relative to each other (neighbors) directly influence the analytical techniques of these algorithms. We have previously worked with both GLCM and GLRLM 32,33 .
These GLCM and GLRLM second order texture analysis has shown an ability to differentiate between benign, malignant, and intermediate liver tumors due to its sensitivity to spatial interrelationships. The developed neoangiogenesis, high neovascularity and aggressive growth patterns within malignant tumors can cause complex internal architectures. This leads to a significant variation in micro-environment and heterogeneity between liver lesions with different malignancy status. Thus, more subtle variations in tumor heterogeneity can be identified by examining the voxel attenuation and its spatial interrelationships. Malignant tumor lesions show increased texture heterogeneity compared to intermediate and benign lesions. GLCM can determine if the voxels are uniformly distributed (Benign) or segregated in groups (malignant) and the GLRLM shows how these voxels are connected together across the whole lesion; long runs (homogeneous) or short runs (heterogeneous). All of these discrepancies could be observed, interpreted, and quantified using these extracted second order textural markers.
Furthermore, functional markers demonstrated a potential in identifying the malignancy status of a given liver tumor. Thus, we studied the gray-level intensity changes across the post contrast phases extracting three markers (late arterial wash-in, portal venous wash-out, and delayed wash-out). These markers are mathematically expressed by the gray-level slope in each phase. These slopes catch the variations in the enhancement markers that exist. In this analysis, the findings obtained through the measurement curves of functionality are fair and illustrate the efficacy of these markers in differentiating between different liver tumors' grades.
A liver tumor's grade of malignancy determines the morphology of the tumor. Malignant tumors usually show a more complex morphology than that of benign ones. Thus, morphological markers were used to identify potential variations between benign, intermediate, and malignant HCC tumors.
Liver tumors' grades were identified by characterizing 3D objects structured from CE-MR images using morphological, textural, and functional markers. All markers were analyzed using machine learning models in the classification process. Although some of these markers showed substantial variations between different grades of liver tumors, there is still a large overlap. Such variation prevents the use of single markers class to better identify liver tumors, even though the most suitable CE-MR sequence has been used. Using a combination of markers www.nature.com/scientificreports/ provided a better approach to discriminating against malignant tumors from intermediate and benign ones. With significant diagnostic performance, the proposed system first distinguished between benign, intermediate, and malignant HCC tumors using the integration of all markers. Then using the same classification and validation processes, the LR1 benign tumors were classified from LR2, and LR4 malignant tumors were differentiated from LR5. Such findings reflect the accuracy of our methodology and the potential clinical utility of these approaches when used with CE-MR imaging in computer-aided diagnosis of liver tumors. These findings are documented in Tables 4, 5, and Fig. 9.
In conclusion, the developed CAD system demonstrated high diagnostic performance (sensitivity = 91.81%±0.88%, specificity = 91.17%±1.90%, and F 1 score = 0.91±0.01) by integrating morphological markers with textural markers and functional markers outperforming the diagnostic performance of each individual marker alone. In addition, the developed CAD system achieved overall accuracies of 88%±5%, 85%±2%, 78%±3%, 83%±4%, and 79%±3% in grading liver tumors into LR1, LR2, LR3, LR4, and LR5, respectively. These results demonstrates the feasibility of the integration process between different discriminating markers that account for different aspects of the liver tumor characteristics, namely; morphology, texture, and functionality. In the future, a larger subject cohort dataset will be used to further enhance the performance of the CAD system in distinguishing and grading multiple liver tumors. Additionally, other possible liver tumors with LRM will be added to our dataset to enhance the diagnostic abilities of the CAD system.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on a reasonable request. | 8,174.6 | 2021-06-23T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
A Taxonomy of Technologies for Human-Centred Logistics 4.0
: Following the spread of the Industry 4.0 paradigm, the role of digital technologies in manufacturing, especially in production and industrial logistics processes, has become increasingly pivotal. Although the push towards digitalization and processes interconnection can bring substantial benefits, it may also increase the complexity of processes in terms of integration and management. To fully exploit the potential of technology, companies are required to develop an in-depth knowledge of each operational activity and related human aspects in the contexts where technology solutions can be implemented. Indeed, analyzing the impacts of technology on human work is key to promoting human-centred smart manufacturing and logistics processes. Therefore, this paper aims at increasing and systematizing knowledge about technologies supporting internal logistics working activities The main contribution of this paper is a taxonomy of the technologies that may be implemented in the different internal logistics areas to support a Logistics 4.0 model. Such a contribution is elaborated in accordance with a deductive approach (i.e., reasoning from the particular to the general), and backed up by an analysis of the literature. The taxonomy represents a useful framework to understand the current and possible technological implementations to drive logistics processes towards Logistics 4.0, with specific attention to the relation between human operators and technologies.
Introduction
In recent years, in the aftermath of the substantial spread of Industry 4.0 and digitalization paradigms, digital technologies have played an increasingly important role in industrial production and logistics [1]. Connected machines, automated warehouses, Internet of Things (IoT) applications, autonomous vehicles, and drones, to name a few, can bring considerable benefits, especially in terms of waste and risks reduction, as well as productivity and safety increase in the working environment. However, the introduction of digital technologies in a production or logistics process always causes an increase in the complexity of overall system management [2]. A growth in complexity often affects negatively the workforce, who may have to adapt to a working environment that requires new skills, as well as the company, which may have to deal with new operational models, additional investments (e.g., cybersecurity), and with the need to apply standards not only in its departments but often also with suppliers and customers [3].
For these reasons, many companies are still in a transition phase from traditional production and logistics methods and management practices to more innovative ones [4]. There are still knowledge gaps regarding the technologies to be implemented in production and logistics processes, that can translate into resistance to technological innovation and fears of managing increased costs, impacts, complexity, and changes. Therefore, it becomes essential to bridge these knowledge gaps helping companies understand which technologies are the most appropriate to be implemented considering the company goals and the investments that can be afforded [5].
In particular, the logistics area has been affected by a rapid acceleration towards the necessity of technologies implementation. This acceleration can be attributed to the fact Appl. Sci. 2021, 11, 9661 2 of 13 that the market increasingly requires higher service levels and shorter lead and fulfillment times for the supply of mass customized productions [6]. Moreover, the phenomenon of digitalization is pushing for better communication and data sharing between suppliers and customers to increase supply chains flexibility and resilience, which are increasingly necessary as the supply crisis of some essential products (such as personal protective equipment) during the pandemic period have shown [7]. In order to meet these market requirements, changes are occurring both in internal logistics and in the whole supply chain: correspondingly, the concept of Logistics 4.0 has emerged to represent "the logistical system that enables the sustainable satisfaction of individualized customer demands without an increase in costs, and supports this development in industry and trade using digital technologies" [8], describing specific applications of Industry 4.0 in the area of logistics [9]. However, the transition from traditional logistics to Logistics 4.0, namely introducing technologies and new operational models in the management of logistics systems, will bring benefits but also possible adverse effects (e.g., increased complexity) for which it is necessary to be aware and prepared. Furthermore, the role of humans in the Logistics 4.0 will be crucial to properly exploit the potential of technologies [8].
Therefore, this paper aims at increasing and systematizing knowledge about Industry 4.0 technologies supporting industrial logistics working activities. In particular, in this paper we present a taxonomy-developed using a deductive approach and an extensive literature review-to classify the main Industry 4.0 technologies implemented in industrial logistics according to three main categories: internal logistics activities, flow types, and human-technology relations. The taxonomy, classifying or categorizing a body of knowledge, helps in sharing and communicating relevant information about the subject being studied, thereby representing a useful framework to understand and explore the current and possible technological implementations to drive logistics processes towards Logistics 4.0, with specific attention to the relation between human operators and technologies. This paper is organized as follows. First, the research methodology is presented, including the taxonomy construction rules (Section 2). Then, the definition of the main taxonomy categories and objects will be presented (Section 3). After that, the results obtained will be described and discussed (Section 4). An analysis of the leading research evidence, limitations, and future research developments concludes the paper (Section 5).
Methodology
The research methodology adopted to devise a proper taxonomy of technologies to guide practitioners in choosing the most suitable technological applications in industrial contexts is composed of several steps, that will be described in the following.
According to the Oxford Dictionary, a taxonomy represents "the scientific process of classifying things" or "a particular system of classifying things". Since the main objective of taxonomies is to classify something, they represent a qualitative research design method, able to adapt to novel and scantly explored topics since they bring order in complex domains, and can be used for ex post theory building [10]. In particular, concerning technologies, taxonomies offer definitions and distinctions, and help define specifications for implementation [11]. Although taxonomies are often developed ad hoc for specific topics, it is also possible to find in literature structured approaches to build them, including both inductive and deductive approaches [10,12]. Generally, developing a taxonomy is a multistep process that involves identifying a classification scheme, and evaluating the population according to the proposed scheme [13]. To meet the purpose of identify a classification scheme, designing a taxonomy includes (i) defining the units of classifications (i.e., categories); (ii) describing the different attributes (i.e., dimensions) of each category; (iii) assigning the subject instances (i.e., objects) to the defined categories and dimensions.
Following this scheme, in this paper, we develop a taxonomy based on a deductive approach; in Figure 1, the research workflow is presented, adapted from the conceptual-toempirical process proposed in [10,14]. each category; (iii) assigning the subject instances (i.e., objects) to the defined categories and dimensions.
Following this scheme, in this paper, we develop a taxonomy based on a deductive approach; in Figure 1, the research workflow is presented, adapted from the conceptual-to-empirical process proposed in [10,14]. The research starts from the definition of the three categories for the taxonomy; one-the internal logistics activities-has been derived directly from literature, while the other two-the flow types and the human-technology relation-have been conceptualized ex novo by the authors, based on both scientific literature and on the authors previous experience concerning these topics. At this point, a review of the extant literature concerning internal logistics activities, flow types and the human-technology relation has been performed (Step 1). Relevant literature has been retrieved from well-established databases such as Scopus, Web of Science, Google Scholar, and ScienceDirect to be inclusive concerning the publications types (e.g., journal papers, conference proceedings). This literature analysis, presented in Section 3.1, has two objectives: first, to identify and conceptualize the categories that will be used for the classification, specifying the different dimensions for each of them (Step 2.1); second, to identify the population of taxonomy objects to be classified (Step 2.2). The categories are generally chosen following the purpose of the study, and this choice is recognized as a crucial step in taxonomy development [10]. A deductive line of reasoning has been followed in this study to understand the domain of interest and propose the categories [15]. For each category, then, the dimensions of classification have been defined ( Figure 2). The explanation of all the categories and their dimensions is presented in Section 3.1. The research starts from the definition of the three categories for the taxonomy; onethe internal logistics activities-has been derived directly from literature, while the other two-the flow types and the human-technology relation-have been conceptualized ex novo by the authors, based on both scientific literature and on the authors previous experience concerning these topics. At this point, a review of the extant literature concerning internal logistics activities, flow types and the human-technology relation has been performed (Step 1). Relevant literature has been retrieved from well-established databases such as Scopus, Web of Science, Google Scholar, and ScienceDirect to be inclusive concerning the publications types (e.g., journal papers, conference proceedings). This literature analysis, presented in Section 3.1, has two objectives: first, to identify and conceptualize the categories that will be used for the classification, specifying the different dimensions for each of them (Step 2.1); second, to identify the population of taxonomy objects to be classified (Step 2.2). The categories are generally chosen following the purpose of the study, and this choice is recognized as a crucial step in taxonomy development [10]. A deductive line of reasoning has been followed in this study to understand the domain of interest and propose the categories [15]. For each category, then, the dimensions of classification have been defined ( Figure 2). The explanation of all the categories and their dimensions is presented in Section 3.1.
The following research step (Step 3) has been the examination of all the objects of the population (i.e., the technologies) according to the defined categories. In doing this, each technology has been analyzed according to the definitions and the application examples found in the literature (see also Table 5 in Section 3.2). The following research step (Step 3) has been the examination of all the objects of the population (i.e., the technologies) according to the defined categories. In doing this, each technology has been analyzed according to the definitions and the application examples found in the literature (see also Table 5 in Section 3.2).
At this point, the first draft of the taxonomy has been created (Step 4). After this first attempt in the classification of technologies, it appeared clearly that a minor revision of the category related to the logistics area was required. In particular, some areas have been merged into a unique dimension since there was complete overlap between them as stated in the introduction of Section 4. This revision phase led to the final taxonomy (Step 5). In addition, the evaluation and classification of technologies into the defined dimensions in order to build the taxonomy has also been guided by the expertise of the researchers, which has been acquired during multiple experiences and case studies research in collaboration with industrial companies. The resulting outcome of this research is presented in Section 4.1 and discussed in Section 4.2.
The Taxonomy Categories, Dimensions and Objects
In this section, the scientific literature concerning the three categories, the dimensions and the objects of the taxonomy is reported, in order to describe their different dimensions.
Internal Logistics Activities
Internal logistics can be defined as "planning, execution, and control of the company's physical flow and internal information, seeking to optimize the resources, processes, and services with the highest possible profit" [16]. From this definition, it is easy to understand the complexity of internal logistics, which is made up of many sub-activities and must necessarily integrate and interact with other departments in the company (e.g., production and purchasing), and use information from the supply chain (e.g., suppliers and customers). Because of this complexity, however, logistics can play a unique role as a boundary-spanning interface between marketing and production, becoming a potential source of competitive advantage [17]. The importance of the logistics role combining with the high costs of logistics explains why internal logistics is one of the critical areas for implementing new technologies. However, if the various activities that make up internal logistics have to be identified, a defining boundary must be set. The boundaries of in- At this point, the first draft of the taxonomy has been created (Step 4). After this first attempt in the classification of technologies, it appeared clearly that a minor revision of the category related to the logistics area was required. In particular, some areas have been merged into a unique dimension since there was complete overlap between them as stated in the introduction of Section 4. This revision phase led to the final taxonomy (Step 5). In addition, the evaluation and classification of technologies into the defined dimensions in order to build the taxonomy has also been guided by the expertise of the researchers, which has been acquired during multiple experiences and case studies research in collaboration with industrial companies. The resulting outcome of this research is presented in Section 4.1 and discussed in Section 4.2.
The Taxonomy Categories, Dimensions and Objects
In this section, the scientific literature concerning the three categories, the dimensions and the objects of the taxonomy is reported, in order to describe their different dimensions.
Internal Logistics Activities
Internal logistics can be defined as "planning, execution, and control of the company's physical flow and internal information, seeking to optimize the resources, processes, and services with the highest possible profit" [16]. From this definition, it is easy to understand the complexity of internal logistics, which is made up of many sub-activities and must necessarily integrate and interact with other departments in the company (e.g., production and purchasing), and use information from the supply chain (e.g., suppliers and customers). Because of this complexity, however, logistics can play a unique role as a boundary-spanning interface between marketing and production, becoming a potential source of competitive advantage [17]. The importance of the logistics role combining with the high costs of logistics explains why internal logistics is one of the critical areas for implementing new technologies. However, if the various activities that make up internal logistics have to be identified, a defining boundary must be set. The boundaries of internal logistics correspond to the physical boundaries of the company. Consequently, for the purposes of this research, we consider as internal logistics operations all the activities related to the movement and storage of raw materials, semi-finished and finished products within the company's boundaries [18]. These main activities of internal logistics that constitute the dimensions of the first category of the taxonomy and their definition are shown in Table 1. Table 1. Internal logistics activities.
Material Handling
Material handling is the movement of raw materials and products inside a factory throughout manufacturing, warehousing, distribution, consumption, and disposal. [19] Storage Storage is the activity of storing products at warehouses and logistics centers. Its role is to provide a steady supply of goods to the production line or to the market to fill the temporal gap between two different production lines or between producers and consumers. [20] Order picking The order picking or order preparation operation is one of a logistic warehouse's processes. It consists of taking and collecting articles in a specified quantity before shipment to satisfy internal production or customers' orders. [21]
Stowage
The stowage decision determines how arriving products are distributed in a storage system or warehouse. This activity is particucular important for large warehouses that are organized into distinct storage zones. [22] Packing Packing is a coordinated activity of preparing goods for safe, secure, efficient, and practical handling, transport, distribution, and storage. Packing activities also have to facilitate distribution, protect both products and the environment and provide information about products conditions and production information. [23] Labelling Labelling is the activity related to the product identification. It is a printed information that is bonded to the product for recognition and provides detailed information about it (e.g., content, origin, usage modes). [24] Kitting Kitting is the activity of compiling multiple components/products into a single "kit" for bring the materials to be processed on the production line or for directly shipping finish products to the customer. [25] Consolidation Consolidation is the process where a company combines several smaller shipments into one full delivery. [26] 3.1.
Flow Types
There are mainly three flows that intercede between suppliers and customers in logistics: flows of materials, information, and money [27]. Leaving aside the flows of money that usually go up the chains from the customers to the suppliers and are out of the scope of our analysis, the other two types of flow can go in both directions. Flows of materials and products usually go from the suppliers to the customers, but can include also inverse flows of returned products or products to dispose. In the same way, information flows can go both from supplier to customer and vice versa. The information flows from supplier to customer allow, in fact, correct management of the supplies (as an example, communicating in advance the eventual delays in supplying the raw materials is crucial for the production scheduling). In contrast, the information flows from the customers to the suppliers allow a better forecast of the demand and a better production planning (as an example, avoiding the so-called bullwhip effect).
Moreover, in the last years, the concept of digital supply chains (DSC) has been emerging as the next generation of designing, producing, and supplying goods, relying on interconnected systems, automatization of plants, and collaborative digital platforms [28]. In particular, the literature showed that the use of technologies is positively correlated to the joint performance of the suppliers and the focal company, and digitalization allows companies' openness to share core information with their suppliers and vice versa [29].
Horizontal integration is one of the Industry 4.0 pillars since it supports dynamic and flexible supply chains, which allows more intelligent and optimized internal logistics processes [28,30].
In this context, the efficiency of the Logistics 4.0 activities not only depends on the material flows and how they are managed, but also on data and information flows since they are essential to support operational and tactical decision-making for internal logistics management enabling horizontal integration. Consequently, information flows and material flows are the two dimensions included in the flow type category of the taxonomy and their definitions are reported in Table 2. Table 2. Flow types.
Material flows
The material flows are all the flows of raw materials, work-in-progress components and finshed products going from the suppliers to the customers. [27,31] Information flows The information flows are all data and information flows related to the production and the logistics processes useful for a better forecast of the demand and a better production planning. These flows go from the customers to the suppliers and vice versa. [27,31]
Human-Technology Relation
Logistics 4.0 is considered as the evolution of traditional logistics towards the Industry 4.0 paradigm. The main scientific literature on this topic suggests that, while the main logistics activities do not undergo variations, the introduction of 4.0 technologies has an important impacts on the human factors related to the logistics operators' tasks [8]. If we consider automation technologies or intelligent digital applications, logistics operators could be entirely replaced in some cases, especially in all those repetitive, risky, and heavy activities. Nevertheless, despite the supposed extensive use of technology, as suggested by the maturity models defined by [32] for Logistics 4.0, the human role will be crucial in supervision and controlling activities. For instance, the human-technology interactions will be determinant to improve decision-making.
For these reasons, if we consider the category of the human-technology relation, it is possible to identify two dimensions: (i) automation; ii) support (Table 3). Automation represents the situation in which operators are completely replaced by a machine/robot or an automatic computerized systems (such as AGVs) that are in charge of the tasks that were previously performed by a human worker. This case is more related to operators and individual tasks that are strongly requiring the use of physical force (e.g., material handling) and continuous/repetitive activities (e.g., manual packaging, picking operations) or that do not bring added value to the process, such as inventory control and scrap disposal [33]. Support dimension, conversely, refers to all the situations in which human and technology coexist in the performance of working tasks. Also in this case, there are several ways in which human and technology can interact. The support can consist in collaborative work (e.g., an operator working with a cobot) or the human worker can be augmented or assisted by technology. Augmentation is more referred to the technologies that are able to enhance human capabilities (e.g., exoskeleton), while assistance refers to the applications that provides workers with additional information and instructions (e.g., softbots). Table 3. Human-technology relation.
Human-Technology Relation Definition References
Automation Automation represents the situation in which operators are completely replaced by a machine/robot or an automatic computerized system that become in charge of the tasks that were previously performed by a human worker. [3,34] Support Support dimension refers to all the situations in which human and technology coexist in performing working tasks. [3,34] Finally, Table 4 summarize all the categories and the dimensions that have been identified during the second step of the research workflow and that will constitute the base of the taxonomy.
Taxonomy Objects Identification
As stated in Section 2, technologies have been chosen as the taxonomy objects. In the introduction and the previous section, technologies have always been referred to in a generic way. In reality, different types of technologies vary according to their domain (i.e., handling, management, traceability) and main features. Moreover, it is necessary to specify that some of the technologies analyzed in this section were born and developed in the logistics field, for example, to allow the traceability of products and processes (e.g., barcode, RFID) or to perform material handling (e.g., self-driving vehicles, automated warehouses). Other technologies, instead, were born and developed in a pure production field and were then applied to the field of internal logistics (e.g., collaborative or autonomous robots or software for the management of information flows). The classification reported below is mainly based on two literature reviews on technologies in logistics [8,35]. Table 5 shows the main technological domains applied in the logistics field with a brief description of their features and functionalities. Moreover, Table 5 reports the detailed technologies/applications that emerged from literature chosen as taxonomy objects and analyzed in Section 4.1 in accordance with what was explained in Section 2 regarding the research methodology adopted.
Traceability
Barcodes are codes consisting of a group of printed bars, spaces and numbers designed to be scanned and read into computer memory and that contains information (such as identification) about the object on which they are attached. The Radio-Frequency IDentification (RFID) sensors are key technologies viewed as a prerequisite or essential element in the IoT. THEY are based on unique codes or tags that are read by electromagnetic devices. The peculiarities of RFID are the non-necessity of a line of sight between the reader and the tag, the simultaneous high-speed reading of multiple tags. Beacon sensors are small, always-on transmitters, which use Bluetooth Low Energy (BLE) technology to broadcast signals containing several information to nearby portable devices (tablets and smartphones).
Technological Domains Description Taxonomy Objects References
IT/ICT/CC Information Technology (IT) or Information and communications technology (ICT) in logistics is identified as all applications used to plan, implement and control procedures to transport and store goods and services from origin to destination. Cloud Computing (CC) figuratively refers to a bundle of virtualized and distributed resources shaped in a diffuse, all-pervasive way, similar to a cloud. This type of technology allows access to software applications and data storage without a significant investment in infrastructure but investment in software functionality and services. Handheld Computers (picking orders information) Voice-Direct Headsets (voice picking) Smart Glasses (pick-put to light) Activity Trackers (steps, heart-rate) Exoskeletons (lifting and moving) Wearable Scanners [41][42][43] AVS/RS Automated vehicle storage and retrieval systems (AVS/RSs) are used to achieve greater operational efficiency and competitive advantage, especially in operating environments with a high altitude. Autonomous vehicles provide horizontal movement (x-axis and y-axis) within a tier using rails or laser guides, while lifts provide vertical movement (z-axis) between tiers.
AGVs (picking and moving) Smart Fast Rotation
Storage Systems Smart Trasloelevators Smart Mini-Loaders Smart Lifts and Forklifts [44] Drones Drones could collect data from shelves doing autonomous inventory control and handle small and light parcels.
Drones (inventory, picking and moving) Collaborative Robots (picking) [45] Logistics Robots Logistics robots are robots with one or more grippers to pick up and move items within a logistics operation such as warehouses, sorting centers, or last-mile fulfillment centers.
Taxonomy of Technology
This paper aims to provide a taxonomy of technologies for Logistics 4.0, with a specific focus on the relation between technology and human factors. Indeed, in the taxonomy development, along with the main category related to the logistics activities (Internal Logistics Activities), two new categories have been conceptualized to take into consideration the relevant aspects of Logistics 4.0: (i) the improvement of information flows thanks to the availability of a considerable amount of data (Flow types), and (ii) the necessity of combining the new technological application with the human work (Humantechnology relation). After the choice of the categories and dimensions as described in Section 3.1, during the taxonomy development, it was possible to highlight that some internal logistics areas shared exactly the same objects, equally classified according to the other dimensions.
For this reason, some areas have been merged in a unique dimension: Storage and Stowage have been considered in the same dimension, while Packing, Labelling, Kitting and Consolidation have been merged in the Packing and Delivering dimension.
Taxonomy
At this point, it was possible to proceed with creating the actual taxonomy of the technologies adopted in the various activities of internal logistics, based on the categories and dimensions illustrated in the previous sections. The taxonomy results are reported in Table 6.
Discussion
From an initial analysis of the taxonomy, two main results became immediately apparent. In some activities, there is a more significant number of technologies adopted, and, in general, they are applied more for the management of material flows than for information flows. The first result is not surprising, as picking and material handling activities are, in fact, primarily repetitive and dangerous activities (especially material handling, when materials are handled at high altitudes) for which human intervention is limited [34]. This motivation explains the presence of a large number of automationrelated technologies in both areas. However, in some subtasks related to picking (e.g., item identification and recording) and material handling (e.g., rapid and precision movements, loading-unloading), human intervention is still needed for its peculiar characteristics (e.g., articulated movements, cognitive skills). Indeed, the technologies are not yet mature enough to carry out specific operations with precision, speed and (above all) with a minimum space requirement (compared, for example, to the requirement for a collaborative or autonomous robot to carry out the same activities).
As regards the preponderance of applications for the management of material flows compared to the management of information flows, we can say that the cause is mainly to be found in the level of maturity of the technologies themselves. It is thought that in the future, with the development of enabling accessory and infrastructure technologies (such as 5G and blockchain) there will be an increasing number of applications technologies for the support of information flows in storage. It is also necessary to highlight that not all the technologies in Table 4 have reached the same level of maturity. For some technologies, such as drones and collaborative robots, there are ways of use still not fully explored. Another element that puts a brake on developing other technologies for flow management is still a low level of integration between different technologies. Adopting standards is still a barrier to integration between different management software with different functionalities, between management systems and hardware technologies, and between hardware technologies themselves [47]. Even if an increasing number of important steps are being taken to evolve the traditional factory into a smart factory, there are still some problems to solve, especially, as mentioned above, at the level of integration between different technologies. However, the investments in research, the continuous monitoring of pilot applications, the existence of pioneering companies that have already begun to invest in cutting-edge technologies, and the great importance that is increasingly given to the construction of digital supply chains (to create increasingly flexible and efficient supply chains) suggest that soon the technologies adopted for flow management will be more numerous [35].
Finally, it should be noted that the taxonomy does not show differences between support and automation technologies. There is no preponderance of one dimension over the other. This result happens because, as already found in literature, the twodimensional typologies are not alternatives to each other but proceed in parallel. The choice of automating or supporting the execution of a particular activity does not depend so much on the degree of maturity reached by the technology that one wishes to apply but on the intrinsic characteristics of that activity and on the impact that these characteristics have on the operator (i.e., human factors) who carries out that activity.
Conclusions
The present research work aims at realizing a taxonomy of the leading technologies implemented in the activities related to internal logistics. The taxonomy was based on the identification of three main categories (i.e., internal logistics activities, flow types, human-technologies relation) and the classification of the taxonomy objects (i.e., the different specific technological applications) according to them. The taxonomy has shown how the penetration of technologies is more evident in activities linked to picking and material handling. These activities are notoriously more repetitive, and can involve a higher risk factor for the safety of operators but, at the same time, they include activities that cannot be automated entirely due both to some peculiar characteristics that require human intervention and to the low level of technological maturity reached so far. It has also been observed that there is a large number of technological applications in material flow management compared to information flow management. This result is due to the technological evolution still in progress. At the time this research was carried out, we began to see the first applications of enabling infrastructure technologies for the management and exchange of information flows such as 5G or blockchains. Finally, no substantial differences emerged between the implementation of automation technologies versus support technologies, the number of applications being equivalent because the two dimensions are not alternatives but proceed in parallel.
The work has limitations, mainly because the categories of the taxonomy have been exclusively identified inductively from the scientific literature. Another limitation concerns the identification of the taxonomy objects that have been identified only by reviewing works and applications published by the academic community. Indeed, it would be necessary to undertake a review of the existing additional technologies applied in the industry and by suppliers so that the list of objects of the taxonomy could be updated regularly and frequently.
It is assumed that these limitations can be overcome in later phases of the research. First, performing a joint inductive-deductive approach could be crucial to refining the taxonomy and validating the classification performed with cases. In doing this, industrial stakeholders could be involved, through surveys, case studies and interviews, in order to enlarge or better define the categories and dimensions of classification, along with the taxonomy objects. Moreover, the taxonomy could be used as a reference framework to support systematic literature reviews concerning the topics of the relation between Industry 4.0 technologies, logistics, and the operator's role. Furthermore, testing the taxonomy based on industrial application could be useful to identify unexpected utilization of the technologies can suggest further modifications and improvements in the currently proposed taxonomy. The work presented in this paper is, in fact, part of a larger research project aimed at analyzing the relationship between technology evolution and the impact it has on operators and managers working in the field of logistics in order to have both a greater understanding of logistics activities and to support managers and companies in a correct choice of the types of technology to adopt according to different needs. | 7,547.4 | 2021-10-16T00:00:00.000 | [
"Engineering",
"Business",
"Computer Science"
] |